Data Management
Pega® Platform includes powerful and flexible facilities for defining data structures and managing data.
You can define individual fields, arrays and repeating groups, multilevel structures, and a variety of
other data structures. To support the exchange of data with other systems, your application can convert
a Pega Platform data structure into a fixed record layout, an XML document, a message, or a relational
database row.
In addition to a name such as DateofBirth, most property rules belong to a work type, which is a
container that defines the scope of the name. The work type is part of the full property name. For
example, the full name LoanApplication.DateofBirth identifies a property DateofBirth, where
LoanApplication is the work type and the period is a separator.
To reduce development effort, Pega Platform includes thousands of property rules, known as standard
properties, which you can use in your application. You need to create additional property rules only for
those properties that are unique to your business situation and environment.
Most properties identify a single value such as a date, time, number, or text string. Other properties
define arrays of scalar values, or data structures that contain a set of property values, known as pages.
Pages can contain other properties, including other pages, or arrays of other pages.
For example, a data structure that describes a family insurance policy can include an embedded page
for each member of the family, with individual properties for the person's first name, date of birth, and
so on. The "dot notation" that is used in many software development environments allows you to
specify the full details of any property, even one that is embedded deeply within pages that are
contained in other pages that are also contained in other pages. If the entire structure is named Policy,
and the information about each family member is an array of pages named InsuredPerson, then the
date of birth property for the third family member has this full name:
Policy.InsuredPerson(3).DateofBirth
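Pega pages are not Python objects, but the nesting and the 1-based subscripting can be sketched with ordinary dicts and lists. Everything below (the family data and the resolve helper) is illustrative, not Pega API code:

```python
import re

# Model the clipboard structure: pages as dicts, a Page List as a Python list.
policy = {
    "InsuredPerson": [  # array of embedded pages, one per family member
        {"FirstName": "Ann", "DateofBirth": "19800214"},
        {"FirstName": "Ben", "DateofBirth": "20050601"},
        {"FirstName": "Cal", "DateofBirth": "20100930"},
    ]
}

def resolve(page, path):
    """Resolve a dotted reference such as 'InsuredPerson(3).DateofBirth'.

    Pega subscripts start at 1, so (3) maps to Python index 2.
    """
    for part in path.split("."):
        m = re.fullmatch(r"(\w+)\((\d+)\)", part)
        if m:
            page = page[m.group(1)][int(m.group(2)) - 1]
        else:
            page = page[part]
    return page

print(resolve(policy, "InsuredPerson(3).DateofBirth"))  # -> 20100930
```

The `(3)` subscript selects Python index 2 because Pega page list subscripts begin at 1, not 0.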
The mode of these properties is Single Value, also called scalar. Properties that hold multiple values can
be arrays or other structures. For simple arrays, the property modes are Value List and Value Group.
For Single Value, Value List, and Value Group property modes, a type is also defined. The most
popular property types are Text, Date, DateTime, Integer, and Decimal.
Dates, times, and decimal values each have one internal format but many possible
presentations on forms and reports. When you enter a value, it is converted automatically to the
internal format for efficiency in storage and processing. When a value is presented as output, the
internal form is converted to the appropriate output format.
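A minimal sketch of that round trip, assuming an internal date format of yyyyMMdd (the convention used in examples later in this document) and arbitrary example display formats:

```python
from datetime import datetime

INTERNAL = "%Y%m%d"  # one internal format, e.g. dates stored as yyyyMMdd

def to_internal(user_value, display_format="%m/%d/%Y"):
    # Convert a value entered in a display format to the internal format.
    return datetime.strptime(user_value, display_format).strftime(INTERNAL)

def to_display(internal_value, display_format="%B %d, %Y"):
    # Convert the internal format to an output presentation.
    return datetime.strptime(internal_value, INTERNAL).strftime(display_format)

internal = to_internal("02/14/1980")   # -> "19800214"
print(to_display(internal))            # -> "February 14, 1980"
```

The same internal value can be rendered in any number of presentation formats without ambiguity, which is the point of converting on input rather than storing what the user typed.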
When you log in, your clipboard is initially empty and is then populated with initial data from several
sources. When you log out, your clipboard is erased and its memory is released for use by others.
Most interactions update the clipboard, but it is not ordinarily visible. The Clipboard tool is useful when
you build and test rules. The Clipboard tool shows the structure of pages in the left panel, and the
detailed contents (for Single Value properties) in the right panel.
Clipboard
In the preceding example, the left panel presents the user's entire clipboard, as a tree structure of
pages, including pages within pages. The right panel shows the property names and current values of
the page that is highlighted in the left panel.
Property form
The value of a property can change from moment to moment, and from user to user, but you determine
the other permanent characteristics of a property in a rule form. For example, this Property rule form
identifies the property type (Decimal) of the property Newb-Newbies-Work.LineItemTotal, and indicates
that values of the property typically appear as a currency amount, such as $1,234.56.
Property form
As development of a business application progresses, the development team might discover that new
properties are needed beyond those already available. To create a property, you define it on the
Property rule form.
The following data transform sets the value of three properties to 0, and sets a text property to "C-".
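A data transform of this kind amounts to a handful of assignments. The following Python sketch shows the equivalent logic; apart from LineItemTotal, which appears elsewhere in this document, the property names are hypothetical stand-ins for those in the figure:

```python
def apply_data_transform(page):
    # Set three numeric properties to 0 (names are hypothetical examples).
    page["LineItemTotal"] = 0
    page["TaxTotal"] = 0
    page["OrderTotal"] = 0
    # Set a text property to the prefix "C-".
    page["CustomerID"] = "C-"
    return page

work_page = apply_data_transform({})
```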
Terminology
Aggregate property – A general term for any property that has multiple values, such as a simple
array (property modes Value List and Value Group), a structure (property mode Page) or a
repeating structure (property modes Page List and Page Group).
Class – A hierarchy or tree structure of named "containers" that define which properties can be on
a page.
Clipboard – An internal memory structure for a user, holding the pages and property values in
current use. The clipboard is established when you log in, updated as you work, and disappears
when you log out.
Data transform – A rule that assigns values to one or more properties, often used only for initial
values that may change later. The value can be a constant (such as 7 or "Hello") or an expression
that involves other property values.
Embedded page – A page that is not a top-level page; a page that defines a substructure of a
higher-level page.
Group – A repeating structure with elements identified by a text name, such as a state code. For
example, a Value Group property StateCapitol may contain an element StateCapitol(MA) holding
Boston, Massachusetts. There is no defined order for the members of a group.
List – A repeating structure with elements identified by numeric subscripts starting at 1. Value Lists
correspond to simple arrays. Page Lists are an array of pages.
Page – A collection of property names and values where all the properties belong to a common
class (or to an ancestor of that class).
Property – A rule that defines the name, class, property mode, and other characteristics of a
property. Informally, a property is a name-value pair: a name and its current value.
Property mode – An attribute of each property that identifies whether it holds one value (Single
Value mode) or an array of values (Value List and Value Group modes) or a structure (Page, Page
List, and Page Group modes).
Property type – For Single Value properties and simple arrays (property mode Value List or Value
Group), the type identifies how to validate and interpret the value. The most popular types are
Text, Date, DateTime, Integer, and Decimal.
Single Value – The property mode for scalar properties, corresponding to a single text, date,
numeric, or other value. The opposite of aggregate properties.
Top-level page – A page on the clipboard that is not part of any other page.
The import process is extensible, and you can do the following tasks:
Override the pyLoadCustomImportPurposes data transform. For example, you can add an Update
locations purpose as shown in the following figure:
Adding a data import purpose
The new purpose is displayed in the Purpose list of the Upload file step in the data import process:
In the Map fields step, you can map the field in your data type that acts as a unique identifier to a
field in the imported .csv file. The import process passes the new purpose and the list of values for
the uniquely mapped field in the .csv file to the activity that defines the logic for data import
(pyCustomPostProcessing).
The Delete purpose is not displayed in the Purpose list of the Upload file step in the data import
process:
Data import purpose hidden at run time
For example, if you set Update locations as the default import purpose, it is displayed as the default
selection in the Purpose list of the Upload file step in the data import process. You can save time during
data import if you update locations for your data type on a regular basis.
When you set the value for the property to true, the system does not allow you to select an import
purpose from the Purpose list of the Upload file step in the data import process. The default value in this
list is Add or update or the value that you set for the pyDefaultImportPurpose property. For example, to
allow only location updates for your data type, you can set the value for the pyDefaultImportPurpose
property to UpdateLocations and the value for the pyDisableImportPurpose property to true.
When you set the value for the property to true, the key field is not validated during the data import
process.
The New location field is displayed in the Import records step of the data import process. You can enter
a value for this field (for example, New York) at run time:
The activity that defines the logic for data import can access the work object that contains this field. If
you want to collect new information, add fields to the work object class, and not to some other top-level
page.
You must define the logic for data import if you have added a new purpose.
The import wizard does not call the pyCustomPostProcessing activity for the Delete import
purpose.
You cannot use custom post-processing for an add-only import of a class that does not have a
BLOB and that uses autogenerated keys.
You can use these variables as input parameters for the activity.
Use a loop to iterate through the list of values for the uniquely mapped field in the imported .csv
file, as shown in the following figure:
The import wizard calls this activity when you click Import in the Import records step of the data import
process.
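The iteration logic of such a post-processing activity can be sketched outside Pega as follows. The record store, key values, and field names are all hypothetical; only the purpose name UpdateLocations comes from the example above:

```python
# Hypothetical system of record, keyed by the uniquely mapped field.
records = {
    "S-1": {"Name": "Ann", "Location": "Boston"},
    "S-2": {"Name": "Ben", "Location": "Austin"},
}

def custom_post_processing(purpose, key_values, new_location):
    # Only act on the custom purpose; other purposes keep default handling.
    if purpose != "UpdateLocations":
        return
    for key in key_values:  # loop over values of the uniquely mapped field
        record = records.get(key)
        if record is not None:
            record["Location"] = new_location

custom_post_processing("UpdateLocations", ["S-1", "S-2"], "New York")
```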
For example, if you override this activity to update all the locations of your Employee data type to a
common location such as New York, the data type is updated as shown in the following figure:
In the Map fields step of the data import process, the system sets the default value of the Match
existing records by field to the value that you used in the activity (Staff ID) and marks the mapped field
in your data type (Employee ID) as a record identifier:
You can see the data import progress on the D_pxDataImportProgress data page.
Run the pxImportRecordsAPI activity to import data outside the Data Designer with the following
parameters:
dataImportPage – Page that contains all the details required to upload data records for
import. This page has the following information:
pyImportPurpose – Data import purpose
pyDataImportClass – Class for which records are imported
pyClassName – Class for which records are imported (same as pyDataImportClass)
pyDataImportFilePath – Location of the .csv file from which records are imported
pyFieldMappingsForAPI – Page List property of Embed-FieldMapping class that holds the
mapping between the .csv file and the class property
pyListSeparator – List separator that is used to split the records in the .csv file
pyLocale – The locale that is used to show the messages
pyID – Unique ID of the data import process
isAsynchronous – Boolean value that identifies whether the data import process is
asynchronous or synchronous
processID – Unique ID that allows you to check the data import progress details. If you want to
pass this ID, its value should appear in the dataImportPage as pyID; otherwise, the
pxImportRecordsAPI activity creates a new ID.
errorFile – Name of the .csv file containing the erroneous records that were encountered while
processing data for import
The pxImportRecordsAPI activity for data import outside the Data Designer
To see the progress of the data import, you can call the D_pxDataImportProgress data page by passing
the value of processID for your data import.
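The shape of the dataImportPage parameter can be visualized as a plain dictionary. The key names below come from the parameter list above; the values (class name, file path, mapping entry) are hypothetical, and this is an illustration of the page structure, not an executable Pega API call:

```python
data_import_page = {
    "pyImportPurpose": "AddOrUpdate",
    "pyDataImportClass": "ABC-Data-Employee",      # hypothetical class name
    "pyClassName": "ABC-Data-Employee",            # same as pyDataImportClass
    "pyDataImportFilePath": "/tmp/employees.csv",  # hypothetical file location
    "pyFieldMappingsForAPI": [                     # Embed-FieldMapping entries
        {"csvColumn": "Staff ID", "property": "EmployeeID"},
    ],
    "pyListSeparator": ",",                        # separator used in the .csv
    "pyLocale": "en_US",                           # locale for messages
    "pyID": "IMPORT-1001",                         # unique ID of this import run
}
```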
You can also use the pxImportRecordsAPI activity when you import data in the Data Designer. For more
information, see Processing records and data before and after import.
1. Create a section that includes the Record Editor gadget that can be added to your application.
2. Create a report definition to specify which columns are displayed in the Record Editor.
3. Configure the Record Editor gadget to specify the functionality to include, for example, full-text
search and the ability to add, delete, import, and export data.
Limitations
The Record Editor gadget has the following limitations:
The Record Editor gadget cannot be used in the New, Review, and Perform harnesses.
A report definition that uses summary functions is not supported.
When you add or update records, calculated properties that use functions in the report and
joined class properties are shown as read-only.
Reports with parameters are not supported.
9. Click Submit.
10. Click Save.
For more information about creating reports, see Creating reports in Designer Studio.
For more information about report definitions, see Report Definition rule form.
4. In the Data Source Class Name field, select the class name (pyClassName) of the data type that
has the records that you want to display.
5. In the Report definition Name field, select the report definition that you want to use to display the
data.
6. Optional for screens that contain only one view: In the Report Page Name field, enter the name of
the view that you want to use.
You can create a report page and populate the data on the page before rendering the gadget
during run time, or you can populate the data on the page at run time.
7. Optional: Select the options that you want to make available to users in the application.
Show import and export – When selected, displays the import and export buttons. When a
user clicks Export, the current view is downloaded as a .csv file. When a user clicks Import,
the user can choose a .csv file to import and map the fields in the .csv file to the fields in the
data type; only the mapped fields are imported. For more information about importing, see
Importing data for a data type. For more information about exporting, see Exporting data
from your application.
Show search – When selected, displays the Search field. The Search field filters the results to
display only those records that contain the search text in any field in the report definition.
Use full text search – When selected, allows full-text search. Full-text searches are performed
against the global search index instead of against the database. For more information, see
Full-text search and Enabling and disabling classes for search indexing.
Hide the add option – When selected, the add option is not displayed. If the add option is
enabled, only non-calculated property values for the current class can be added or edited.
Calculated property columns or other class property values will be in read-only mode. This
option is always hidden for Work- records, regardless of how this parameter is set.
Hide the delete option – When selected, the delete option is not displayed. When a user clicks
Delete, the current class record is deleted from the database. If the Report Definition contains
joined classes, the class record is not deleted. This option is always hidden for Work- records,
regardless of how this parameter is set.
8. Click Submit.
The CRM data types are: Contact, Address, Organization, Role, ContactOrgRel, OrgOrgRel, and
ContactContactRel. These data types and related rules are part of the Pega-SharedData ruleset. The
Pega-SharedData ruleset is included in PegaRules.
All tables in the CustomerData schema are non-Pega formatted; that is, they do not contain any Pega
internal columns such as a BLOB column, pzinskey, or pyObjClass. Any new tables that you add to
CustomerData are added as non-Pega formatted tables. Pega formatted tables cannot be added to
CustomerData.
The tables and their relationships to one another are shown in the following graphic:
CRM data model
Date time formats supported by the Data Import wizard for data types
When data for a data type is imported, the date time formats in the import file must be a supported
format or the import fails. The Data Import wizard supports several categories of date time formats to
which it attempts to match the incoming date time format. When a matching date time format is found,
the Data Import wizard stops looking for a match.
If a custom date time format is entered in the Data Import wizard, the Data Import wizard uses it and
does not look for a match in the other date time categories.
The supported date time formats, in the order in which the Data Import wizard attempts to find a
match, are:
1. Extension point – Data page or data transform that uses the Pega-supplied
D_pyCustomDateFormats data page as its source. The extension point supports Pega-supplied
formats and simple date formats. The record editor class is Data-Metadata-CustomDateFormat.
2. ISO-8601 universal formats – These formats are not locale-specific.
3. Other supported format – yyyy-MM-dd HH:mm:ss
4. Locale-specific – The default Microsoft Excel date time formats for the user's locale for locales
supported by Pega® Platform. For formats not supported by Pega Platform, the default Microsoft
Excel format for the United States locale is used. For a list of supported locales and their formats,
see Locale settings - date time formats. For information about Microsoft Excel date time formats,
refer to the Microsoft Excel documentation.
5. Pega ISO format (Pega default format) – yyyyMMdd'T'HHmmss.SSS 'GMT'
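The first-match behavior described above can be sketched as a loop over candidate formats. The specific patterns below are illustrative stand-ins for each category; only yyyy-MM-dd HH:mm:ss and the Pega ISO format are taken directly from the list:

```python
from datetime import datetime

FORMATS = [
    "%d.%m.%Y",                # 1. custom/extension-point format (example only)
    "%Y-%m-%dT%H:%M:%S",       # 2. an ISO-8601 universal form
    "%Y-%m-%d %H:%M:%S",       # 3. yyyy-MM-dd HH:mm:ss
    "%m/%d/%Y %H:%M",          # 4. a locale-specific (US Excel-style) form
    "%Y%m%dT%H%M%S.%f GMT",    # 5. Pega ISO format
]

def parse_import_datetime(value):
    # Try each category in order and stop at the first format that matches.
    for fmt in FORMATS:
        try:
            return datetime.strptime(value, fmt), fmt
        except ValueError:
            continue
    raise ValueError(f"unsupported date time format: {value!r}")

parsed, matched = parse_import_datetime("2017-06-01 09:30:00")
```

Because matching stops at the first hit, an ambiguous value is always interpreted by the earliest category that accepts it, which mirrors the wizard's behavior of not looking further once a match is found.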
A property defines the format and visual presentation of data in your application. Each property can be
associated with a value and has a mode and type. The property mode, such as Single Value or Page
List, defines the structure that is used to store the property value. The property type, such as Text or
Date, defines the format and allowable characters in the property value.
The values of Page List and Page Group properties are stored in the BLOB column, that is, these values
are not columns in the database. You can use a Declare Index rule to expose Page List and Page Group
properties so that you can report on them. For more information, see About Declare Index rules.
Ad hoc Page Lists are also created and filled when you do a database search by using the Obj-List
method. Consider work objects entered by an operator. These work objects are not explicitly defined in
the Operator table. You can write a query by using the Obj-List method to generate a list of records
from the database.
The following table shows the current page and the iteration counter for accessing values in method
parameters and Java steps.
In addition to performance considerations, you might not be able to expose properties with long values
as database columns, depending on your database software's aggregate size limit for exposed columns.
If a property exceeds your database's aggregate size limit, making the property too long to expose as a
database column, consider dividing the property and storing it as a number of shorter properties.
If your property is exposed, set the property value size in the Max length field to equal the maximum
size of the database column to ensure that your data fits into the column.
Data Designer
You can review, manage, and update data types in your application by using the Data Designer. When
you select a data type in the Data Explorer, the data type opens in the Data Designer.
Data Designer shows the data model for your data type, usage throughout your application, the physical
systems of record that store data of this type, the records editor for browsing and editing records of this
type, and the data visualizer that helps you to understand how this data type is related to the rest of the
case and data types in your application.
If you update or override a property rule of mode Single Value, Value List or Value Group, you can change the
property rule Type to one that is more narrowly defined. This does not cause any runtime conversions of
property values.
The following table shows the permitted type changes. An X indicates that a property of the type in
the FROM column can be changed to the type in the TO column:

FROM \ TO     | Date Time | Date | Time of Day | Integer | Decimal | Double | True or False | Text
Date Time     |           | X    | X           |         |         |        |               |
Date          | X         |      |             |         |         |        |               |
Time of Day   | X         |      |             |         |         |        |               |
Integer       |           |      |             |         | X       |        |               |
Decimal       |           |      |             |         |         | X      |               |
Double        |           |      |             | X       | X       |        |               | X
True or False |           |      |             |         |         |        |               | X
Text          | X         | X    | X           | X       | X       | X      | X             |
For example, if the original property has a type of DateTime, you cannot override it with a new property
that has a type of Double. If you try, an error message similar to the following appears:
In this case, you can override the DateTime property in a higher RuleSet version with a type of Date or
TimeOfDay. Similarly, you can “specialize” a property of type Text.
It is best practice to make such overrides when no work objects or other saved instances have a value
for the property.
Note: Overriding the property does not cause Process Commander to convert any existing values of the
property. Conversion can cause errors, as described in the following section.
Suggested Approach
Two methods can be used to change the property type. It is recommended that you only override
properties that are not yet saved in work objects, to avoid validation issues and other processing
errors.
Method 1: Delete and recreate the property using the new property type
In some cases you can simply delete the property rule in every system (such as application, testing, and
production) and then recreate the property using the correct type.
For example, assume that you created a property named Prop1 with the type Integer in the 01-01-01
RuleSet Version, locked the version and continued development in 01-01-02. You now realize you need
to change the type of Prop1 from Integer to Text. If you change the property and attempt to resave it
into 01-01-02 with the new type, a general error occurs.
In this case, you can delete Prop1 and recreate it using Text as its property type.
Note: This method may not work in some cases. For instance, assume you change a property called
Prop2 (Value mode) from Text to Integer. Also assume that a work object contains alphabetic characters
using Prop2 (as type Text). Because Prop2 type is now defined as an Integer, an error will likely occur
during data dictionary validation, which occurs when:
Method 2: Create a new property with a different name
You can create a new property with a different name and use it going forward. Copy the old property
with an availability setting of Blocked in a new RuleSet version to prevent further use of it.
Note: Use this method only in development systems; it is not advisable to make such changes in
production environments.
Validation check: Production level check
Error message that is displayed: You cannot change the type of {FieldLabel (FieldName)} field
because current production level is {CurrentProductionLevel} and changing the type of field is not
allowed beyond production level {MaxAllowedProductionLevel}. You can check setting
'fieldtypechange/productionlevel'.
How to troubleshoot: Contact your system administrator and request a change to the specified
Dynamic System Setting, and then try again to change the type.

Validation check: Multiple property definitions
Error message that is displayed: You cannot change the type of {FieldLabel (FieldName)} field
because it has multiple versions.
How to troubleshoot: Define a new property with the correct type, and phase out the use of the old
property.
Data Explorer
Designer Studio provides explorers that you can use to quickly view and access important components
and information about your application. From the Data Explorer, you can view the data types for your
current application and create and work with data types and their corresponding data pages. To access
the Data Explorer, in the Explorer panel, click Data.
You can manage the data object types available to your application, adding new data object types or
importing existing ones from other applications, and manage the properties and data pages associated
with each data object type.
You can review where your data pages are in use in your applications, to help you evaluate the impact
of changes to the data page you might be considering.
A data object type is a class that simplifies organizing the properties, data pages, transforms, and
other elements your application needs to get the right data at the right time.
A data page is the hub of data management for your application. Data pages dynamically provide the
data the application needs in a given situation, and make sure the data is up to date.
The Data Explorer is one of the explorers in the left panel in the Designer Studio. Click Data to display
and use the Data Explorer at any time.
1. The data object type appears in the Data Types list on the Cases and Data tab of the application
rule form. (You can also associate a data object type with the application in the explorer.)
2. A user adds the data object type by the process described under General controls, below.
3. The system automatically associates the data object type with the application. This happens when:
1. A class that derives from Data- is created.
2. A page or page list property in the application whose page class derives from Data- is saved.
3. A property whose page class derives from Data- is referred to in another rule in the
application.
Associating a data object type with an application adds a record to a link table. It does not noticeably
increase the size of the application, and no data object types or data pages are copied into the
application. However, the association provides easy access to the data definitions and data pages
most likely to be useful to the application, without making the Data Explorer list unmanageably long.
General controls
Click the down-arrow at the top of the Data list to display a menu of options.
You can add data object types to the list or remove existing ones. Select this option to display the
Add/Remove Data Object Types form.
Click Application to display a list of available applications; check the checkbox for each application
whose data object types you want to include in the list. Click Search to find a data object type by
name, and enter a text string in the field that appears.
The list shows the "exposed" data object types by default. "Exposed" data objects are those associated
with one or more applications in your application stack.
Check the Show unexposed types checkbox to also display "unexposed" data object types, ones that
are in your application’s RuleSet stack, but are not explicitly associated with an application in your
stack.
Uncheck the checkboxes for the data object types that you do not want to include in the list.
To create a new data object type, click +CREATE NEW. In the wizard that appears, you can quickly
name the data object type. Click the triangle to display fields where you can specify its ID, its directed
and pattern inheritance, and in which RuleSet and version to save it.
The second screen of the wizard lets you quickly specify properties for the data object type.
Click this option to include the case types defined for your application in the list of data object
types.
Click this option to refresh the display, to make sure you are seeing the most current information.
+Create
Click this option to bring up a menu of rule types to create. By default, the new item will be in the
selected data object type.
Click in the Search box, enter part of the name of the data object type or data page you want to find,
and click Enter. The display shows the results of your search.
Hover over the display name of the data object type to see its rule name.
An entry under the data object type's name shows how many data pages it has, as well as the number
of undefined pages. The second number shows you the number of pages that should be converted into
data pages to improve application performance and code reuse. Click the second number to display the
undefined-pages landing page, which lists the rules that reference the undefined pages.
In the expanded list, a number to the right of each data page shows how many places in the application
use that data page. Hover over the data page title to see its rule name.
Click the data object type, or one of its data pages, to display your selection in a tab to the right of the
Data Explorer.
Click the menu icon to the right of a data object entry to see a menu of common actions:
Open the data object type in a tab to the right of the explorer.
Rename the data object type, if it is not in a locked RuleSet.
Create a new data object type.
Create a new data page for the selected data object type.
View a list of the data object type's properties. You can navigate from the list to view or work on a
given property, or use the + icon to add a property.
Quickly define multiple properties for the selected data object type. A popup form allows you to
create multiple properties in one process.
Remove the data object type from the Data Explorer. The data object type is not deleted; however,
it is not displayed in the explorer's list.
The Data Explorer is always available to help you define data sources for your application and ensure
they are available for use.
1. Click Designer Studio > Data Model > View external data entities to access the landing page, as
shown in the following figure:
The view is dynamically generated, so it always contains the latest information. A data entity
consists of a data type (logical or virtualized layer) and one or more source systems (physical
layer).
2. Expand each data entity to reveal the physical sources that are used to obtain data, as shown in
the following figure:
3. The SOURCED BY section shows the technical details of how the application connects to an
interface that is provided by the source system. The details include the protocol, the inputs, and
how to authenticate. The type of physical source is denoted by the second column of icons in this
section. The DATA PAGE section shows the data pages that are used to retrieve data from each
physical source. These data pages are the core component of the Pega Live Data layer. The data
pages make up the logical data layer and allow an application to access data from anywhere. The
icon next to each data page indicates its structure. For more information about the icons in these
two sections, see External Data Entities landing page.
4. Click the names of the records in the SOURCED BY and DATA PAGE sections to view the information
about how each record is configured.
You can use a data type with a simulated source (a source system that is marked with a yellow triangle)
during development because data types in the Pega 7 Platform are virtualized data views. You can
continue developing the rest of the application before the integration layer is ready. You can build the
integration layer simultaneously and use it when it becomes available.
To connect a data type to a new source system, rename the sample source system that the data type is
connected to and then replace the simulated data sources with production sources. You can repeat the
tasks for each entity on the landing page with a yellow icon and change it to green. After all the source
systems are green, the data layer of your application is ready for production.
1. On the External Data entities landing page, expand a data type that is connected to a sample
source system that needs to be replaced, as shown in the following figure:
In this example, multiple simulated data sources connect to the sample source system.
2. Open a data page and find the simulated source in the list, as shown in the following figure:
3. Update the System name field with the name of the source system to use in production, as shown
in the following figure:
4. Repeat steps 2 and 3 for each data source that is connected to the sample source system in the
list.
5. Refresh the External Data Entities landing page. The data type now displays the name of your
actual physical source system, as shown in the following figure. The status is still yellow because the
data source is still simulated.
Do the following actions to replace a source system with a new source of data:
1. Open a data page and find the simulated source in the list.
2. Clear the Simulate data source check box.
3. Select the source type based on the interface that is used to connect to the new source system.
4. Enter the name of the data source. This rule is specific to the source type and captures the
configuration details required to interact with your source system. If this record does not exist yet,
create one.
5. If needed, define additional source and mapping information such as request mapping or response
mapping.
The mapping rules are a critical component for Pega Live Data. These rules facilitate the exchange
of data between the logical (virtualized) data layer and the physical source systems.
6. Repeat steps 1 to 5 for each simulated data source that you want to connect to the new source
system.
7. Refresh the External Data Entities landing page. The source system should now have the green
icon next to it.
Data pages
Data pages (known as "declare pages" and "declarative pages" in versions before Pega® 7) provide
on-demand access to data from your business processes while insulating the business processes from
the actual integration details that are required to connect to the physical sources.
Data pages store data that the system needs to populate work item properties for calculations or for
other processes. When the system references a data page, the data page either creates an instance of
itself on the clipboard and loads the required data in it for the system to use, or responds to the
reference with an existing instance of itself.
Data pages obtain the data from external sources through connectors, from report definitions that
query the Pega® Platform database, or from other sources, and might use data transforms to make
the data fully available where it is needed.
The name of a data page starts with the prefix D_ or Declare_. On the clipboard, the contents of data
page instances are visible, but are read-only.
The Data Explorer in Designer Studio lists all the data pages that are available to your application. By
using the Data Explorer, you can quickly add data pages and data object types (classes).
Concepts
Page scope types for data and declare pages
Use parameters when referencing a data page to get the right data
However, there are important differences between data pages and other clipboard pages.
Clipboard location
Read-only data pages (all scopes) appear in the data pages (version 7.1) or the declare pages
(versions 5.1-6.3 SP2) area of the clipboard and not under user pages or system pages.
Editable data pages (thread and requestor scope) appear in user pages.
Edit operation – Data pages can be read-only or editable.
Read-only mode: You cannot add or remove data after the data pages are created.
Editable mode: You can modify the data after the data pages are created.
Naming convention – The names of data pages must begin with the string Declare_ (for versions
5.1-6.3 SP1) or either D_ or Declare_ (for version 7.1). Other types of pages cannot begin with
these strings.
Creation – A data page is automatically created whenever any properties on the page are
accessed, if the page does not already exist. You do not have to explicitly create these pages by
using the Page-New method or other methods. Data pages with parameters are loaded only when
mandatory parameters are provided.
Update procedure – Data pages can have an automatic refresh strategy, which ensures that
their contents are up-to-date.
Database persistence
Unlike other pages (such as work item pages), read-only data pages cannot be saved.
Editable data pages can be saved.
Passivation – When a requestor is passivated, all of that user’s information is serialized and
temporarily saved to persistent storage. If this user's clipboard contains any read-only data pages,
those pages are not saved. Instead, the system deletes these pages when it passivates the
requestor, and then re-creates them whenever they are next referenced by that requestor (after
the requestor is reactivated). Editable data pages are saved like normal pages.
Overview
Load data into a page property from a page-structure data page
Data transforms
Overview
The diagram below shows, at a high level, what happens when the system references a data page:
1. Properties can automatically reference a data page, providing parameters that the data page can
use to get the data that the object needs from the data source.
2. The data page verifies whether an instance of itself, created by an earlier call using the same
parameters, exists on the clipboard.
If the data page's refresh strategy permits, the data page responds to the request with the
data on the existing clipboard page.
Otherwise, the data page makes a call to a data source, using a request data transform to
structure the request so the data source can respond to it, if necessary.
3. The data source uses the information that the data page sends to locate and provide the data that
the application requires.
4. If necessary, a response data transform maps data to the properties that require it.
5. The data page creates an instance of itself on the clipboard to hold the mapped data, and provides
it as a response to the request.
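The five steps above can be sketched as a small model. This is a conceptual illustration only, not the Pega API: the function and field names (`request_transform`, `data_source`, `cust_id`, and so on) are assumptions made for the example, and the cache key stands in for the clipboard.

```python
# Conceptual sketch of a data page reference: instances are cached per
# unique parameter set; a cache miss goes through the request transform,
# the data source, and the response transform, then lands on the clipboard.

clipboard = {}  # stand-in for the clipboard: instances keyed by (name, params)

def request_transform(params):
    # Shape the reference parameters into a request the source understands.
    return {"id": params["CustomerID"]}

def data_source(request):
    # Stand-in for a connector or report definition.
    return {"cust_id": request["id"], "cust_name": "Anton"}

def response_transform(raw):
    # Map the source's fields onto the application's data model.
    return {"CustomerID": raw["cust_id"], "Name": raw["cust_name"]}

def reference_data_page(name, **params):
    key = (name, tuple(sorted(params.items())))
    if key in clipboard:                       # refresh strategy permits reuse
        return clipboard[key]
    raw = data_source(request_transform(params))
    clipboard[key] = response_transform(raw)   # new instance on the clipboard
    return clipboard[key]

page = reference_data_page("D_Customer", CustomerID="ANTON")
```

A second reference with the same parameters returns the cached instance without touching the source, which is the cost saving the article describes.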
The object can reference or copy data from a page-structure data page using the new "Data Access"
section on the General tab on the property form:
When the system references a data page, the data page checks whether an instance of itself that
satisfies the reference already exists on the clipboard.
If it does, and if the data page's refresh strategy allows it, the data page uses the existing instance
to respond to the reference.
Otherwise, it creates a fresh instance of itself on the clipboard, populates it with data relevant to
the reference, and uses it to respond to the reference.
Each reference may require parameter values. In the example above, the parameter CustomerID has an
* beside its name to indicate that it is required. Auto-populated properties do not try to load a data
page unless all required parameters have values.
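The required-parameter gating described above can be modeled in a few lines. This is an illustrative sketch, not Pega's implementation; the parameter names mirror the example in this article.

```python
# Sketch: an auto-populated property only attempts to load its data page
# once every *required* parameter (marked with * in the form) has a value.

def ready_to_load(param_defs, values):
    """param_defs maps parameter name -> required?; values holds known values."""
    return all(values.get(name) not in (None, "")
               for name, required in param_defs.items() if required)

params = {"CustomerID": True, "LevelOfDetail": False}   # CustomerID is required

assert not ready_to_load(params, {"LevelOfDetail": "full"})  # still gated
assert ready_to_load(params, {"CustomerID": "ANTON"})        # load proceeds
```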
To learn about automatic data access for page list properties, see Load data into a page list property.
Return to top
You have a Customer property in your Order case that you want to retrieve from an external
service.
You want the Customer data to be stored with each order so you have a permanent record of
customer information at the time the order was placed.
The data source you are using requires the customer's ID and a "level of detail" (full or minimal)
value to retrieve the data.
Properties:
Data page:
The data page D_Customer has these settings:
Structure = Page
Class = Data-Customer
Scope = Thread
Edit Mode = Read Only
Parameters = CustomerID and LevelOfDetail
What happens
1. The user or the system sets the values for .LevelOfDetail and .Customer.CustomerID.
2. The user or the system references an embedded property on .Customer. This triggers auto-
populating the customer data.
3. The system references the data page, passing the parameter values.
If an instance of the data page that corresponds to the parameter values exists on the
clipboard, and the data page's refresh strategy permits, the data page responds to the
reference with the existing instance.
Otherwise, it passes the parameters to the appropriate data source.
4. If the data source executes, it passes data to the response data transform, which maps data into
the instance of the data page.
5. The CopyCustomerSubset data transform specified on the .Customer property copies data from the
data page instance into the property. If no data transform is specified, all data from the data page
instance is copied into the property.
Return to top
Data transforms
A data transform defines how to take source data values — data that is in one format — and transform
them into data of another format (the "destination" or "target"). Using a data transform speeds
development and is easier to maintain than setting property values with an activity, because the data
transform form and actions are easier to understand than the activity form and methods, especially
when the activity includes custom Java code.
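The contrast with activities can be seen in a toy model: a data transform is essentially a declarative source-to-target field mapping, which is easier to read and maintain than imperative property-setting code. The mapping names below are illustrative, not real rule content.

```python
# Sketch: a data transform as a declarative mapping from source fields to
# target properties (hypothetical names for illustration).

CUSTOMER_TRANSFORM = {      # target property  <-  source field
    "FirstName": "fname",
    "DateOfBirth": "dob",
}

def apply_transform(mapping, source):
    # Copy only the mapped fields into the target structure.
    return {target: source[src] for target, src in mapping.items()}

row = {"fname": "Ada", "dob": "1815-12-10", "internal_flag": 1}
mapped = apply_transform(CUSTOMER_TRANSFORM, row)
# internal_flag is not mapped, so it never reaches the target
```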
There are three main data transforms involved in data management using data pages:
For more about working with data transforms, see Introduction to data transforms.
Return to top
Overview
Load data into a page list property from different instances of a page-structure data page
Data transforms
Overview
The diagram below shows, at a high level, what happens when the system references a data page:
The object references a data page, providing parameters that the data page can use to get the
data that the object needs from the data source.
The data page verifies whether an instance of itself, created by an earlier call using the same
parameters, exists on the clipboard.
If the data page's refresh strategy permits, the data page responds to the request with the
data on the existing clipboard page.
Otherwise, the data page makes a call to a data source, using a request data transform to
structure the request so the data source can respond to it, if necessary.
The data source uses the information the data page sends to locate and provide the data the
application requires.
If necessary, a response data transform maps the data to the properties that require it.
The data page creates an instance of itself on the clipboard to hold the mapped data, and provides
it as a response to the request.
Properties can automatically reference a list-structure data page using the new "Data Access" section
on the General tab on the property form:
If it does, and if the data page's refresh strategy allows it, the data page uses the existing instance
to respond to the reference.
Otherwise, it creates a fresh instance of itself on the clipboard, populates it with data relevant to
the reference, and uses it to respond to the reference.
Each reference may require parameter values. In the example above, the parameter ProductType is
optional; a required parameter would have an * beside its name. Auto-populated properties do not try
to load a data page unless all required parameters have values.
To learn about automatic data access for page properties, see Load data into a page property.
Return to top
Initially the customer browses a product catalog that contains only the information necessary for them
to learn about the product. The product list that the customer is browsing lives outside of the case. But
as a customer adds products of interest to their shopping cart, the items are moved into the case, and
the system retrieves more detailed product information necessary for completing the order. Ordered
products must live with the case so it can maintain an exact record of the product as it was ordered.
There are two services available to this application, both of which return information for a single
product.
Properties
Select the "Copy data from a data page" option (a data transform is required) and specify
D_Products as the data page.
Set as parameters:
DetailLevel = .LevelOfDetail
ID = .ProductsID
Select the "Retrieve each page separately" option. The data page creates a new instance of itself
for each unique .Products(n) based on .Products(n).ID.
Data page
Structure = Page
Class = Data-Product
Scope = Requestor
Edit Mode = Read Only
Parameters = DetailLevel and ID
Two connector data sources, one of class Int-GetFullProductDetail, and the other of class Int-
GetMinProductDetail.
Each data source has a When condition that checks the value of param.DetailLevel.
This allows the data page to create instances of itself that hold full or minimal product detail,
depending on the value of param.DetailLevel received with each reference.
One request data transform per data source to form the request so the service can use it.
The data page passes the values of the parameters DetailLevel and ID to the request data
transform, which passes the values to the service so they can be used.
One response data transform per data source, of class Data-Product, to map from the integration
to the data class properties.
What happens
1. If the user is shopping, the system sets .LevelOfDetail to return minimal product information. If the
user is ready to check out, the system sets .LevelOfDetail to return full product data.
2. The user or the system references an embedded property on .Products that triggers
autopopulation of the product list.
3. The system passes parameters to the data page.
If there exist on the clipboard instances of the data page that correspond to the parameter
values passed in, and if the data page's refresh strategy permits it, the data page returns
data from the existing instances.
Otherwise, the data page creates separate instances of itself for each page in .Products. Each
data page instance uses the same .ProductDetailLevel value.
4. Since the data page has two data sources, the When condition associated with the first source in
the list checks whether it should be used, based on param.DetailLevel. If it is not to be used, the
system uses the second data source.
5. If the data source executes and returns data, the response data transform maps the data into the
data page.
6. For each .Products(n), the system passes the .ID of the product. The data page creates a new
instance of itself for each product and copies the data from the correct instance to the
corresponding page in the page list.
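The "retrieve each page separately" behavior in the steps above can be sketched as follows. This is a conceptual model with assumed function names, not the Pega API: each list item gets its own data page instance keyed by its ID, while the shared detail level selects which source is used.

```python
# Sketch: one data page instance per unique (ID, DetailLevel) pair; the
# detail level plays the role of the When condition choosing a source.

def full_detail(pid):    return {"ID": pid, "Name": f"Product {pid}", "Specs": "..."}
def minimal_detail(pid): return {"ID": pid, "Name": f"Product {pid}"}

cache = {}

def load_product(pid, detail_level):
    key = (pid, detail_level)              # one instance per unique parameters
    if key not in cache:
        source = full_detail if detail_level == "full" else minimal_detail
        cache[key] = source(pid)
    return cache[key]

def populate_page_list(ids, detail_level):
    # Each page in the list is filled from its own data page instance.
    return [load_product(pid, detail_level) for pid in ids]

cart = populate_page_list(["A1", "B2"], "minimal")
```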
Return to top
Data transforms
A data transform defines how to take source data values — data that is in one format — and transform
them into data of another format (the "destination" or "target"). Using a data transform speeds
development and is easier to maintain than setting property values with an activity, because the data
transform form and actions are easier to understand than the activity form and methods, especially
when the activity includes custom Java code.
There are three main data transforms involved in data management using data pages:
Return to top
Data pages transform the raw data received from a data source into data the application needs and can
use.
An application that uses data pages, and that passes parameter values to them to get the right data to
the right place, can build a responsive and rich structure without creating a lot of data pages, activities,
and other code. Because PRPC supports multiple instances of the same data page, you can use the
same design-time definition for multiple contexts simultaneously, within the same or different threads,
without affecting other instances that have been loaded into memory. For frequently changing data
page references, it’s possible to reuse an instance of a data page that’s already in memory. There is no
need to hard-code and maintain references to data sources.
Make sure the names are descriptive, so you and other developers can see easily what sort of values
they expect.
For each parameter you can set its type, whether it is required, and other settings. Setting a parameter
to required, for example, changes the way the data page provides data to an auto-populated property
(see below). An auto-populated property only attempts to load the data page that is supposed to
provide its data when the required parameters for that data page have values. On the other hand, if
there are no required parameters for the data page, an auto-populated property references the data
page immediately, as soon as one of the listed (optional) parameters is set.
The data page uses the parameters on the Definition tab in two main ways:
The data page returns to the property (in this case, a single page) information related to the customer
whose CustomerID the property referenced in the "Parameters" section. The asterisk beside the field
label indicates that a value for this parameter is required when referencing this data page.
The syntax is: <data page name>[<comma delimited list of parameter name:value pairs>]
You can refer to a data page with one of several valid forms of this syntax. For D_Customer, you could
load or refer to the data page using any of these:
D_Customer[CustomerID:"ANTON",CompanyName:"BigCo"]
D_Customer[CustomerID:.CustID]
D_Customer[CustomerID:param.CustID]
When the data page only has one parameter, you don't have to specify the parameter name. You only
need to specify the value:
D_Customer["ANTON"]
D_Customer[.CustID]
D_Customer[param.CustID]
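The reference syntax above can be read mechanically as `<page>[name:value, ...]`, with the single-parameter shorthand omitting the name. The toy parser below illustrates that grammar for literal values only; it is not how Pega parses references, and it does not handle property or parameter references such as `.CustID`.

```python
import re

def parse_reference(ref):
    """Parse a D_Page[name:value,...] reference (literal values only)."""
    m = re.fullmatch(r'(\w+)\[(.*)\]', ref)
    name, body = m.group(1), m.group(2)
    params = {}
    for pair in filter(None, body.split(",")):
        if ":" in pair:
            key, value = pair.split(":", 1)
            params[key.strip()] = value.strip().strip('"')
        else:
            # Single-parameter shorthand: only the value is given.
            params[None] = pair.strip().strip('"')
    return name, params

name, params = parse_reference('D_Customer[CustomerID:"ANTON",CompanyName:"BigCo"]')
```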
See How to create a data page (declare page) and Tailor data pages to the context in which you use
them.
If there is no instance of the data page on the clipboard, or if the refresh strategy defined on the data
page requires loading a fresh instance, or if the parameters passed do not match the parameters used
to create an existing instance on the clipboard, then the data page loads a new instance of itself onto
the clipboard with the parameterized data that the current call requires.
Each time the data page can respond with an existing instance of itself already on the clipboard,
PRPC saves time and effort by not returning to the database for data.
Sample approach
As a simple example, imagine that you are going to display a list of products on a page so a customer
can select from them and put the selections in a shopping cart:
You have a preferred provider, Northwind. If the customer does not want to select from their products,
the customer can use a more general Google search to get a list of products.
You can deliver both (or multiple) provider product lists by using a single data page with multiple
possible data sources. Depending on the parameters (including which provider to use), the data page
creates a tailored instance of itself on the clipboard. The instance has the data about the products that
the specific provider offers, and no other data.
The Definition tab of the data page lists the possible data sources for instances of the data page:
If the reference specifies the Northwind product list (so the When condition NorthwindSearchProviderSelected
evaluates to true), the data page requests data from the NorthwindProductList data source (a Report
Definition) and creates an instance of itself on the clipboard with the data returned from that source (or,
depending on the data page's refresh strategy, it may use an already-existing instance of itself on the
clipboard that was created using the same parameters and conditions).
On the clipboard, you can see the instance of the data page created to respond to the reference:
The page pxDPParameters in the image above holds the properties and values sent with the data page
reference, which permit tailoring the data page instance:
Let's say the customer does a second search, using the Google option. The reference goes to the same
data page; it uses the parameters sent to evaluate the When condition NorthwindSearchProviderSelected to
false. The data page then uses the other data source option to create an instance of itself on the
clipboard (or locate an existing instance that matches the parameter values of the reference and the
refresh conditions of the data page), and in that instance, provides data that matches the parameters
sent with the reference.
If, after that step, the number of instances of the data page still exceeds the set limit, the system
tolerates an overload of up to 1250 instances. If the number exceeds that limit, the system deletes
instances (irrespective of when they were last accessed) until the number of entries in the cache is
below the set limit.
The data page creates new instances as needed to respond to references and to replace the deleted
instances.
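The pruning behavior can be sketched as a simple policy. The 1250-instance ceiling comes from the text above; the configured limit value and the data structures are assumptions made for the example, and the real engine's bookkeeping is more involved.

```python
# Sketch of the instance-pruning policy: overload is tolerated up to a hard
# ceiling; beyond that, instances are deleted (regardless of last access)
# down to the configured limit.

SET_LIMIT = 1000      # example configured instance limit (assumed value)
HARD_LIMIT = 1250     # overload tolerated up to here, per the article

def prune(cache_keys):
    """Return the keys to keep after applying the pruning policy."""
    keys = list(cache_keys)
    if len(keys) <= HARD_LIMIT:
        return keys                   # overload tolerated, nothing deleted
    return keys[:SET_LIMIT]           # delete back down to the set limit

assert len(prune(range(1200))) == 1200   # within tolerance: kept
assert len(prune(range(1300))) == 1000   # over the ceiling: pruned
```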
Additional Information
How to create a data page (declare page)
Use parameters when referencing a data page to get the right data
At a high level, when the application invokes the data page, it can send values for one or more data
page parameters. The data page can use the parameter values to select which of its data sources to
use to respond to the current call, and which data from that data source to return.
Calling data pages with parameter values simplifies design and development and promotes code reuse,
since a single data page can serve as a hub, quickly assembling and delivering the right set of data for
a wide range of calls. Using parameters eliminates the labor and maintenance cost of creating and
maintaining hard-coded calls for data.
To make use of multiple data sources, you need to do the following on the data page:
If you specify multiple sources, a field appears where you identify the When condition (see below) that
evaluates whether the current reference to the data page requires using the source identified in each
row. The condition for the final data source is set as "Otherwise": that data source is used if all the
preceding When conditions evaluate to false.
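The source-selection logic described above amounts to an ordered list of (When condition, source) pairs where the first true condition wins and the final entry acts as "Otherwise". The sketch below is illustrative; the condition and source names are borrowed from the Northwind example later in this article, not real rules.

```python
# Sketch: ordered When conditions choose a data source; the last entry is
# the "Otherwise" source, used when every preceding condition is false.

def northwind_selected(params):
    return params.get("Provider") == "Northwind"

SOURCES = [
    (northwind_selected, "NorthwindProductList"),
    (lambda params: True, "GoogleProductSearch"),   # Otherwise
]

def pick_source(params):
    for condition, source in SOURCES:
        if condition(params):        # first true When condition wins
            return source

assert pick_source({"Provider": "Northwind"}) == "NorthwindProductList"
assert pick_source({"Provider": "Google"}) == "GoogleProductSearch"
```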
If more than one source is listed, you can delete all but one. To remove a source, click the "X" at
the right of its row.
You can drag data sources higher or lower in the list to set the order in which the system checks
whether they should be used.
See How to create a data page (declare page) and the help documentation for details about what
information to provide in each field.
Specify parameters
When the system references a data page, it can pass one or more parameters that the data page can
use to select exactly the data the system requires. Set those parameters on the Parameters tab.
See Use parameters when referencing a data page to get the right data.
If it is, the data page provides (and appropriately transforms so the application can use it) data drawn
from the Northwind data source.
With these steps, the data page is prepared to respond to references sent with parameter values by
returning data designed for the needs of each reference.
The data page can create a separate instance of itself on the clipboard each time it is referenced
with unique parameter values, or it can restrict the number of instances to one, so that each new
reference overwrites the existing data page instance on the clipboard.
When the team is working to develop an application, it is not unusual for one developer to get ahead of
others and need access to resources that will come from work other developers have not yet
completed. For example, if your team is building a weather widget, the team members building the UI
may want to see how their design works with data in place before the team members working on the
connector to the data provider have finished their work.
This does not have to be a blocker to UI development. Data pages allow you to specify sources of
sample or simulated data to use until the connectors you really want to use are ready to provide actual
data.
In the Data Explorer, locate the data page you want to work with; or create a new data page in the
appropriate data object type.
Check the Simulate Data Source checkbox. The system saves the actual data source for future use
(to the right in the image below) and lets you select the type and the source for the simulated data that
you have prepared.
If you clear the check box, the system restores the actual data source specification. When you are
ready to move from simulated to real data, you only have to clear the check box and save the data
page.
If you have not already identified a data source for the data page, you can still create one that uses
simulated data:
Continue as described above to specify simulated data. When the correct data source information is
available, you can update the data source entry and either switch to using real data or continue using
the simulated data while you continue development. The property or section referencing the data page
does so in the same way whether or not the data page is providing simulated data.
For more information about referencing a data page from a property, see Use parameters when
referencing a data page to get the right data and Tailor data pages to the context in which you use
them.
Report Definitions define reports. These reports, while typically used to display summarized information
to a user, can also be used to select values for various internal and display features, including data
pages.
Suggested Approach
1. Prerequisites
In this example, the Report Definition report uses data from a simple data table. However, Report
Definitions can report on any classes from the PegaRULES database, or on flat tables in an external
SQL database. For details on using data tables see How to use data tables to reference external
and internal systems.
In this example, you populate a data page using a Report Definition that reports on StateCodes, a
pre-existing data table consisting of two properties: two-letter state codes, and an operator ID
assigned to process work items that involve that state. Both properties are exposed as database
columns. (This report output is static, but reports may in practice produce different, real-time
results each time they run.)
2. Create a new Report Definition. In the Report Definition: New dialog, enter the name of the data
class containing the data table in the Applies To field. Use the Smart Prompt in this field to find
your data class more easily.
You will use the name of this report when creating the Declare Page.
3. In the Report Definition form, add columns as necessary to populate the data page. The data table
in this example contains only two columns, .StateCode and .AssignedOp.
Complete the rest of the Report Definition form, ignoring fields dealing with the visual display of
the report or sorting, and save the report.
For more information on completing the Report Definition form, see Report Definition rules -
Beyond the basics.
Continue with creating the data page for Pega 7, or creating the declare page for PRPC 6.2-6.3 SP1.
2. You can then click either menu icon (the general one for the Data Explorer, or the one beside a
data object type that you have selected, marked with red rectangles in the image below) and use
the option to create a new data page.
3. Make sure the data page structure is List.
In the Data Sources section, select Report Definition for the Source, then select the Report
Definition you created from the options in the Name field.
4. Save the data page. The data page and contained properties are now ready to be referenced by
your application.
Return to step 3
For example: create a new decision, using a property from the data page you created as one of the
decision criteria. Save and unit test the decision shape. An instance of the new data page is now visible
in the clipboard tool in the Data Pages node (prior to 7.1, in the Declared Pages node).
For more information on creating and referencing data pages in your application, see Understanding
data pages.
Data virtualization
You might need to connect a data page of one object type to a data source of another, incompatible
type. Pega® 7 lets you do this quickly and relatively simply with a data transform, enabling data
virtualization.
Changing, adding, or removing an integration point that a data page is sourced from means that you
only have to modify, add, or remove a mapping data transform.
In this example, the data page is of one type and the data source is of another, incompatible type.
You can identify a response data transform of the same type as the data page to map the data from the
data source to the data page, making it usable for the application:
The data transform acts automatically on each reference to the data page using that data source,
mapping the data in the manner you specify:
A data page can have multiple data sources that it uses in different circumstances, depending on the
parameters it receives with each reference. See Manage multiple data sources for a data page.
Regardless of which data source the situation requires, the data page maps the data it receives to the
one common application data model.
While data pages load synchronously by default, you can more easily set them to load
asynchronously so users can take action on a work item while other content is still loading.
There are significant changes to the data page rule form. See Data management: what's new in
Pega 7.
In PRPC 6.3 SP1, you can configure non-blocking user interfaces using Asynchronous Declare Pages.
This is useful for pulling in external data from systems of record, web services, and other PRPC systems.
This supporting information, such as account history, purchasing history, business analytics, and local
weather, can display alongside a work item.
In previous PRPC releases, you could configure such data to display in defer loaded sections — the work
item displayed first and defer loaded sections displayed as they became available. However, until all
defer loaded sections were visible, users could not perform an action that required interaction with the
server, for example, Submit.
Using Asynchronous Declare Pages, you can enable a user to take action on a work item while other
content is still being loaded. Defer loaded Asynchronous Declare Pages use a different browser
connection than the main requestor servicing the work item.
Asynchronous Declare pages cannot run declarative expressions, triggers, and other rules that belong
to a declarative network. For example, you can enable executing declarative expressions in a
background requestor; but if the declarative expression refers to properties defined in external named
pages which are not present in the background requestor, then the declarative expression may not
execute.
guidelines for configuring defer loaded Asynchronous Declare Pages — see Developers:
Configuring Non-blocking User Interfaces
information on tuning requestor pooling to ensure an optimal user experience — see System
Administrators: Tuning the Requestor Pool
information on configuring WebSphere — see System Administrators: Configuring WebSphere
The following user interface shows a work item with asynchronously defer loaded sections.
The user can interact immediately with the work item, typing in the text box. Since the other sections
are asynchronously loaded, the user can even click Submit to process the action before the defer loaded
sections display.
The sections using defer loaded Asynchronous Declare Pages contain information of interest to the user,
but not critical in processing the work.
This article describes how developers can use Asynchronous Declare Pages to configure non-blocking
user interfaces. It also illustrates how system administrators can monitor and modify requestor pool
settings as necessary.
This checkbox is no longer available, and is not required, in PRPC 7.1. See the note at the top of the
page.
If you selected Load Activity as the Data Source for the Asynchronous Declare Page (ADP), include all
pages used by the Load Activity in the Pages & Classes tab of the Declare Page form. This is required
because when an Asynchronous Declare Page is invoked, before invoking the Load Activity, the pages
specified in the Pages & Classes tab are copied from the requesting thread specified in the ADP request
to the temporary background ADP thread. The Load Activity is then invoked and the populated Declare
Page is copied to the requesting thread. Section Defer Load is invoked on the requesting thread. The
pages copied to the temporary thread are not copied back to the requesting thread.
3. Click Save.
Configure the section using the Asynchronous Declare Page as its source
1. Create a section and include the section that uses the Asynchronous Declare Page as its source, in
this case, CardHolderInfo.
3. On the Advanced tab, specify the name of the Asynchronous Declare Page in the Using Page field.
In this case, the name of the Asynchronous Declare Page is Declare_CardHolderInfo.
All UI references to a Declare Page should be contained within the deferred UI. References to Declare
Page data, such as Visible When, parameters to actions, or property displays (read-only or editable), will
initiate synchronous Declare Page load. UI display will be delayed until the Declare Page is loaded.
4. In the Event Types to Trace area, select the Interaction and ADP Load checkboxes.
5. Review the trace. The first ADP trace of a background thread (requestor) that loads a declare page
asynchronously displays a link to a pop-up window showing the corresponding trace lines for that
requestor session. In the following example, clicking the Async DP Load link in line 2, the Declare_Binaries
Step Page, displays a one-time view of the Load Activity.
You can use the System Management Application to help you determine optimal pooling settings for the
Async Service Package:
Click the Reset button to reset these ADP counters. Clicking Clear Requestor Pool disrupts the
current activity and is not recommended.
5. In addition, you can use the following information to determine if you need to adjust the requestor
pool settings:
The value of the following fields should be zero (0). For web containers that support APIs for thread
management, PRPC automatically sets the maximum threads to the value of the Maximum Active Requestors
specified on the AsyncDeclarativePool Service Package Pooling tab.
However, for application servers in which you manually set the value of the maximum number of
threads for the PRPC node, a configuration error, in which the maximum number of threads is less
than the Maximum Active Requestors, is possible. A non-zero value in these fields indicates a configuration
issue.
Max Wait
Longest Wait
Timeouts
Tip: You can also use the alert log file to help you determine optimal pooling settings.
Click an alert to display additional information. The following alert indicates that you may want to
increase the Maximum Active Requestors to decrease the wait time.
See Understanding the PEGA0043 alert: Queue waiting time is more than x for x times for details.
To adjust the alert thresholds, modify the values of the following in prconfig.xml:
alerts/ADP/queuewait/thresholdtime — indicates the wait time in the queue for a specified number of
requestors
alerts/ADP/queuewait/thresholdcount — indicates the number of times that the wait time
(alerts/ADP/queuewait/thresholdtime) can be exceeded before an alert is raised
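As an illustration, the corresponding prconfig.xml entries might look like the following. The `<env name="..." value="..."/>` form is the standard prconfig.xml setting syntax; the threshold values shown here are examples, not documented defaults:

```xml
<!-- Example only: raise a PEGA0043 alert when requests wait in the ADP
     queue longer than the threshold time more than the allowed count. -->
<env name="alerts/ADP/queuewait/thresholdtime" value="500" />
<env name="alerts/ADP/queuewait/thresholdcount" value="5" />
```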
For more information about alerts, see Understanding alert log message data and Performance alerts,
security alerts, and AES.
For instructions on editing prconfig.xml, see How to change prconfig.xml file settings or How to set
prconfig values in a Dynamic System Setting value.
1. Select > Integration > Resources, and then click the Service Packages button.
4. In the Work Managers area, click the PRPC node deployed on the current server, in this case,
PRPCWorkManager.
5. Specify the Maximum number of threads for the selected PRPC node, then click Apply. Note the
Maximum number of threads. You will need this value to determine the maximum number of database
connections.
4. On the server page, select Thread Pools in the Additional Properties section.
5. On the Thread pools page, select WebContainer.
6. Specify the Maximum Size, then click Apply. Note the maximum number of threads. You will need this
value to determine the maximum number of database connections.
To determine the maximum number of simultaneous requests, add the value of the Maximum Size (threads)
for the web container to the value of the Maximum number of threads for the PRPCWorkManager.
For example, if the maximum number of threads for the web container were set to 100 and the Maximum
number of threads for the PRPCWorkManager were set to 50, then the maximum parallel load on the PRPC
server would be 150 requestors. In this case, you would set the value of the maximum number of
database connections to at least 45, 30% of 150.
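The sizing arithmetic above can be expressed as a small helper. This is a sketch of the 30% guideline described in the text, not a Pega API:

```python
def db_connection_sizing(web_container_threads, work_manager_threads):
    """Estimate the maximum parallel load on the PRPC server and the
    minimum database connection pool size (30% of that load)."""
    max_parallel_requestors = web_container_threads + work_manager_threads
    # Integer arithmetic keeps the 30% guideline exact for whole numbers
    min_db_connections = max_parallel_requestors * 30 // 100
    return max_parallel_requestors, min_db_connections

# Example from the text: 100 web container threads + 50 for PRPCWorkManager
print(db_connection_sizing(100, 50))  # (150, 45)
```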
To determine these values, refer to the previous sections. For information about the:
maximum number of threads for the web container, see Configure the maximum number of
threads for the web container.
maximum number of threads for the PRPCWorkManager, see Configure the maximum number of threads for
the PRPCWorkManager in WebSphere.
3. Specify the Maximum connections. This value should be at least 30% of the maximum number of
simultaneous requests on the PRPC server, where each thread takes two to three database
connections.
If you set value="false", asynchronous loading of declare pages is disabled. However, the Load this Page
Asynchronously? checkbox still appears on the Declare Page Definition tab. If the value in prconfig.xml is set
to false and a user selects the Load this Page Asynchronously? checkbox, the page is loaded synchronously.
For instructions on editing prconfig.xml, see How to change prconfig.xml file settings or How to set
prconfig values in a Dynamic System Setting value.
Additional information
How to define the contents of a Declare pages rule using a Report Definition rule
To use a JSON data transform, create a JSON data transform and configure the response or request data
transform created by the REST Integration wizard to use it. For instructions on configuring a request
JSON data transform, see the following instructions. For instructions on configuring a JSON response
data transform, see Configuring a JSON data transform as a response data transform in data pages.
Configuring JSON request data transforms has the following high-level steps:
1. Open the data page created by the REST Integration wizard by clicking the link in the Data Page
Created field in the Generation Summary page.
2. Click the Parameters tab.
3. In the Name field, enter the name of the data page, for example, pageName.
4. In the Data type field, click String.
Data page Parameters tab
1. In the Application Explorer, locate the class that the response data transform is in, for example,
Code-Pega-List.
2. Right-click the class, and click Create > Data Model > Data Transform.
3. In the Label field, enter a short description.
4. In the Data model format field, click JSON.
5. In the Add to ruleset field, select the ruleset. It is recommended that you use the same ruleset as
the data page.
1. Open the request data transform generated by the REST Integration wizard by clicking the Open
icon next to the Request Data Transform field on the data page.
2. Click the Parameters tab.
3. In the Name field, enter the data page name, for example, pageName.
4. In the Data type field, click Page Name.
5. Click the Pages & Classes tab.
6. In the Page name field, enter the data page name, for example, pageName.
7. In the Class field, select the same class as the request data transform.
8. Click the Definition tab.
9. Delete row 2 by clicking the Trash can icon next to row 2.
10. Create a new row 2 by clicking the Add a row icon.
11. In the Action field, click Update Page.
12. In the Target field, enter the data page name, for example, pageName.
13. In step 2.1 in the Action field, click Apply Data Transform.
14. In the Target field, enter the JSON request data transform that you just created.
Data page
Message data
To enable caching, you pass the JSON data to the data page and call the data transform outside of the
data page.
1. From the Records Explorer, click Data Model > Data Page.
2. Click the data page used by the request data transform to open it.
3. Click the Parameters tab.
4. Delete the pageName parameter by clicking the Trash can icon.
5. In the Name field, enter jsonData.
6. In the Data type field, click String.
To use a JSON data transform, create a JSON data transform and configure the response or request data
transform that was created by the REST Integration wizard to use it. For instructions on configuring a
response JSON data transform, see the following instructions. For instructions on configuring a request
JSON data transform, see Configuring a JSON data transform as a request data transform in data pages.
Configuring a JSON response data transform has the following high-level steps.
1. In the Application Explorer, locate the class that the response data transform is in, for example,
Code-Pega-List.
2. Right-click the class and click Create > Data Model > Data Transform.
3. In the Label field, enter a short description.
4. In the Data model format field, click JSON.
5. In the Add to ruleset field, select the ruleset. It is recommended that you use the same ruleset as
the data page so that all data assets are packaged in the same ruleset. You can use a different
ruleset; however, if you want to package and move the data assets to another application, keeping
them in the same ruleset ensures that all data assets are moved together.
It is not recommended that you delete this data transform. Modifying it to use the new JSON response
data transform is the fastest way to access the data page. In addition, step 3 is required for error
handling.
1. In the response data transform, delete step 2, Append and Map to, by clicking the Trash can icon to
the right of the step.
2. Click Submit when asked to confirm the deletion.
3. Click in row 1.
4. Click the Add a row icon to create a new step 2.
5. In the Action field, click Apply Data Transform.
6. In the Target field, select the JSON response transform that you just created.
Response data transform updated to use new JSON response data transform
Error handling in data pages is a complex problem. There are several ways to handle errors, and various
ways are relevant at different parts in the process. Pega 7 provides developers with the tools that they
need to create custom error-handling responses to specific data page errors without using activities.
Error occurrences
Data page errors occur for a variety of reasons, but they all prevent data from being loaded as
expected. Examples of causes of data page errors include system errors, invalid input, using keys that
are not on a keyed page list, connection problems, and security or authorization issues.
Error types
There are two types of data page errors: invocation errors and data source errors. The following articles
provide more information about handling these errors.
When a data source error occurs in any data source other than an activity or data transform, the page
property pyErrorPage is added to the data page. This property contains all error details so that users do
not have to track them down across the various error properties and page messages used by connector,
report definition, and lookup data sources. It contains the following information:
.pyStatusMessage: A message about the error that was encountered during data source processing that
forced it to stop. This message is informative and can be made into a page message shown to users as
an invocation error. This is also the value that is put into the default page message added on error.
.pyStatus and .pyStatusMessage are always populated on error and can be used with any data source.
When the source type is a connector of type SOAP, SAP, or dotNet, and the error was caused by a SOAP
fault, the following properties are also populated:
.pyFaultCode: Contains the code from the SOAP fault. See the W3C page on SOAP Fault Codes for more
information.
.pyFaultReason: Contains an explanation of the issue causing the SOAP fault and is intended to be
shown to users or put in a message. See the W3C page on SOAP Reason Elements for more information.
.pyFaultDetail: Contains details from the SOAP fault, if provided, including application-specific error
information. See the W3C page on SOAP Detail Elements for more information.
Two additional properties are available for more advanced use cases, such as custom connector logic or
load activities. These properties can provide additional detail in page messages:
.pxMessageSummary.pxErrors: When the data page has messages, this page list contains the message
details, one page per message. The property .pyDescription contains the message. If the message is
attached to a property, the property .pyLabel contains the property reference.
When making a new response transform, several error handling actions are added automatically, as
displayed in the following example:
A reference to a when rule, pxDataPageHasErrors, is already included, and a place to put error handling
logic is nested beneath it. This when rule is the primary tool for conditionally executing data source
error handling logic in the same way that you use hasMessages for handling invocation errors.
To use the default error-handling setup, right-click the when step and click Enable. Then, add your
error-handling logic in the nested steps below it. Alternatively, put the logic in a separate data
transform that can be reused across multiple data sources, and call that transform from the nested steps.
When the class of the data source matches the class of the data page, the response data transform is
optional, because the data source can be run against the data page directly.
When a response data transform is specified and the data source class matches the data page class,
enable Run on data page. This executes the response data transform directly on the data page rather
than on the Data Source page, saving the memory and processing cost of an unnecessary page and
mappings.
Example of a response data transform with Run on data page enabled
You can either copy the logic from the template into your response data transform or save the template
to your class and ruleset to create your own error handling transform that can be reused across
sources. All rows start in a disabled state, so remember to right-click and enable actions that you want
to use and customize.
You can also pull in data from a different data page instead of using the normal DataSource page when
it has errors. Examples of this include:
Referencing another data page to pull data from an alternative data source
Referencing another data page with the same data source to use for retry
Do not attempt to reference the currently loading data page from within the response data transform.
This method does not work and can cause errors when you try to load the data page.
In addition to pyErrorPage, all data pages have an embedded page property called pySourcePage. This
property contains up to five properties that provide detailed information about the data source that is
used to load the data page:
.pySourceType: Specifies one of the following source types: Connector, Report Definition, Lookup,
Data Transform, or Activity.
.pyConnectorType: If the source type is connector, specifies the type of connector.
.pySourceName: Specifies the identifier of the connector, report definition, data transform, or activity
rule used as the data source. This is empty for lookup data sources, because they do not use a rule.
.pySourceClass: Specifies the Applies To class of the connector, report definition, data transform, or
activity rule used as the data source. For lookup data sources, this is the identifier of the class used
for the lookup.
.pySourceNumber: Specifies the index of the data source on the data page form at the time of load.
The topmost source is 1, and the indexes increase going down the form.
Unlike pyErrorPage, pySourcePage is always present on loaded data page instances. You can always
reference it from your data transforms or activities or view it in the clipboard to see which source was
executed.
pyErrorPage is created and added to the step page on error whenever a Connect-* method is used
to call a connector. It has the same information described in Data Source Errors. Therefore, you
can use the when rule pxDataPageHasErrors against the step page of the Connect-* step to check
if there is an error.
Error information is captured only for the connector types that are supported as data sources on the
data page form. For other, more advanced connector types, you might need to look at the form to see
what other error information is available and what properties contain the information.
This feature does not support full clipboard page functionality; use with caution.
Supported functionality
Basic page and property access (read and write properties) for all normal data types
Hierarchical data page structure (pages within pages)
Dictionary validation mode
Read-only data pages
Unsupported functionality
Declarative rules
Page messages
Complex property references
Saving pages to a database
API access to the data page
Connector: Use a response data transform to detect and handle Connector data source errors.
Report Definition: Similar to Connector, use a response data transform to detect and handle Report
Definition data source errors.
Lookup: Use a response data transform with the Run response data transform on error check box
selected to detect and handle Lookup data source errors.
Data transform: Use the hasMessages when condition to detect and handle data transform data
source errors.
Activity: Use appropriate transition conditions such as StepStatusFail in activity steps to detect and
handle activity data source errors.
1. Create a data transform (for example, MyCoErrorHandlerMaster) by saving the default data
transform pxErrorHandlingTemplate to your top-level class and ruleset. Also, change the status of
the rule from Final to Available.
2. From the data page, pass a parameter (for example, Connector-GetCustomerData) to the response
data transform to uniquely identify the data source that is used to load the data page.
3. On the Definition tab of each response data transform, use the when condition
pxDataPageHasErrors to identify any errors in the data page and apply the
MyCoErrorHandlerMaster data transform.
4. In the MyCoErrorHandlerMaster data transform, create and call a decision table that determines
the appropriate error handling based on the data source.
5. Based on the decision table, perform the error handling action for the data source in the
MyCoErrorHandlerMaster data transform.
6. In the MyCoErrorHandlerMaster data transform, create and call a decision table to map user-
friendly error messages instead of the default error messages, as shown in the following figure:
This data transform can now be used across multiple data sources and data pages in the application.
For example, with an activity data source, the same data transform can be used to handle data page
errors.
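The decision-table routing described in steps 4 through 6 can be sketched as follows. The source names, actions, and messages below are hypothetical (Connector-GetCustomerData is the example parameter from step 2); in a Pega application, these rows live in decision tables called from the MyCoErrorHandlerMaster data transform, not in Python:

```python
# Hypothetical mapping of data source identifiers (passed as a parameter
# from the data page) to an error-handling action and a user-friendly
# message, mimicking the two decision tables described above.
ERROR_RULES = {
    "Connector-GetCustomerData": ("retry", "Customer data is temporarily unavailable."),
    "RD-ProductList": ("notify_admin", "The product list could not be loaded."),
}

DEFAULT_RULE = ("log", "An unexpected error occurred while loading data.")

def handle_data_page_error(source_name):
    """Look up the error-handling action and user-friendly message for
    the data source that failed, as the decision tables would."""
    return ERROR_RULES.get(source_name, DEFAULT_RULE)

print(handle_data_page_error("Connector-GetCustomerData")[0])  # retry
```

Because the routing is keyed only on the source identifier, the same lookup serves every data source and data page that passes its name through, which is the reuse property the text describes.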
In use cases that require a manual retry during data page error, consider using the hasNoMessages
when condition with Do not reload when in the data page rule form so that the data page is reloaded on
retry whenever there is a data page error.
Handle other invocation errors procedurally in flows, post-flow action processing, or activities, as
appropriate for the specific requirement.
Debugging tips
To trace Data Page and Data Transform rules, open the Tracer tool directly from the rule form.
To trace data pages that are loaded asynchronously, open the Tracer tool from the Data Page rule.
Required parameters missing: A data transform tries to modify properties in a case based on values
from a data page, but the data is not returned when the transform is run because a required parameter
is missing.
Data source error cannot be handled: A flow has a property with automatic data access enabled to copy
the data that it needs to proceed in the work item. When the user attempts to proceed to the next step,
a data source error that could not be handled at the data layer prevents the user from proceeding.
Keys not found: A case shows inputs for properties that are used as keys in a property with automatic
data access enabled to pull the rest of the data into the case. When the user enters the properties,
the data is not returned and an error is shown because the keys were not found in the list.
When an invocation error occurs, the requested data page instance is marked with page messages that
explain the error. In situations where automatic data access for properties is used, errors can occur on a
property that refers to data on a data page or a property that copies data from a data page. (For more
information, see Load data into a page list property.)
In these situations, the errors might be on the associated data page instance. For example, in the case
above where the user uses a key that is not on the data page, the page messages are on the property
instead of on the data page itself because the data page was loaded without issues.
You can also defer load activity on a section with a page context of the data page or auto-populated
property.
Use these tools to handle the errors in the locations mentioned above:
Apply the hasMessages when rule to the page that you are trying to use. Any error-handling logic
that you include after the when rule is run only in the case of an invocation error and can be used
to handle the error.
When data is pulled into the case from a property with automatic data access enabled, use the
function @(Pega-RULES:Default).getMessagesAll() in the context of that page property to get all invocation
errors that occurred as text separated by new line characters. You can then write error-handling
logic based on what occurred.
If you use the data page directly, or you want to iterate through all errors on the case, including
invocation errors from auto-populated properties, you can iterate through the property
.pxMessageSummary.pxErrors in a data transform or activity.
Examples of handling invocation errors
Handling invocation errors is typically use case-specific and each application has its own requirements.
The following table lists some common error-handling tasks:
Task: Preventing case processing from continuing when the data page has an invocation error
Procedure: Case processing already has built-in validation that prevents users from continuing when
the case has errors. To make use of this, use an auto-populated property embedded within the case
instead of referencing the data page directly.

Task: Retrying a data page
Procedure: After a data page instance has been loaded, regardless of whether it has errors, it is not
cleared or refreshed unless its refresh strategy dictates or it is procedurally removed. To retry the
data page, either manually remove the original instance or specify the hasNoMessages when rule in the
Do not refresh when field. Re-referencing the data page then results in a retry. The only exception is
a property with automatic data access by copy; in this case, remove both the property and the data
page from which it is auto-populating to force a retry.

Task: Calling an alternate data page or data source
Procedure: Add conditional processing that uses a different data page or different parameters if an
invocation error occurs. You can do this for items such as data transforms, activities, and sections
by using the hasMessages when rule. If the hasMessages when rule returns true for the data page
against which it is executed, then an invocation error has occurred. You can conditionally change the
parameters so that a different data source is called on retry, or use another data page entirely for
your logic in that case.

Task: Notifying an administrator that an invocation error has occurred
Procedure: The simplest way is to create a data page of class Data-Corr-Email that accepts the error
information that it needs to create the email as parameters and fills in the administrator's
information and any other basic details. You can then pass that data page to the function
@Default.SendEmailMessage(Page) to send an email from the data transform, activity, or other
processing rule in case of an invocation error. You can even have the data page itself call this
function as a part of its load, so that the administrator is emailed no matter where the error
occurred, even if you are working with a section or UI rule.
A number of other error-handling tasks can be configured, but the same basic pattern applies:
Use the hasMessages when rule to conditionalize processing based on whether an invocation error
has occurred.
Use additional data pages to handle invocation errors, because data page references can occur
anywhere and you want to be able to handle errors wherever they are.
For error handling that can only be done in the flow or an activity (such as the last two above),
make sure that you first reference the data page and conditionalize your processing in a place
where you can handle errors.
When the system references a data page, the data page either creates an instance of itself on the
clipboard and loads the required data in it for the system to use, or responds to the reference with an
existing instance of itself. For a general introduction to data pages, see Understanding data pages.
The system manages data pages based on a combination of settings and circumstances as outlined
below. The system automatically deletes, or prunes, data pages that are no longer needed, or when the
maximum number of data pages is reached.
The system creates a data page instance when the data page is referenced, or uses an existing
instance, depending on the settings in the Refresh Strategy section of the Load Management tab on the
rule form. See Refresh strategy for data pages.
Single use
Clearing unused pages
Data page instances for a container reaches a set limit
When the check box is selected, each time the system references the data page, it removes any
existing instance and uses submitted parameters to create a data page instance.
If the check box is not selected, the system creates an instance of the data page for each reference with
unique parameter values, which can cause the number of stored instances of the data page to increase
rapidly.
This option is useful for parameterized data pages. See Use parameters when referencing a data page
to get the right data.
The system uses any setting in these fields. If the fields are blank, the system uses the value of the
dynamic system setting DeclarePages/DefaultIdleTimeSeconds, which is set by default to 86400
seconds, or one day. You can adjust the dynamic system setting value.
The number of data page instances for a container reaches the set limit
By default, the Pega 7 Platform can maintain 1000 read-only unique instances of a data page per
thread. You can change this value by editing the dynamic system setting datapages/mrucapacity.
There are different data page instance containers for the thread, requestor, and node level. Each user
can have both requestor-level and thread-level data pages up to the limit established. Additionally, each
node can have any number of requestors, and each requestor can have many threads. See Contrasting
PRThread objects and Java threads.
If the number of instances of a data page reaches 60% of the established limit for thread-level or
requestor-level containers, or 80% of the established limit for node-level containers, the system
begins deleting older instances.
If the number reaches the established limit, the system deletes all data page instances that were
last accessed more than 10 minutes ago for that container.
If, after that step, the number of instances of the data page still exceeds the set limit, the system
tolerates an overload up to 125% of the established limit for thread-level or requestor-level
containers, or 120% of the established limit for node-level containers.
If the number exceeds that overload number, the system deletes instances (regardless of when
they were last accessed) until the number of entries in the cache is below the set limit.
The data page creates instances as needed to respond to references and to replace the deleted
instances.
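The at-limit and overload steps of this pruning behavior can be modeled in a short sketch (a simplified illustration of the documented thresholds, not actual platform code; the gradual deletion that begins at 60% or 80% of the limit is omitted):

```python
def prune_data_page_cache(instances, limit, scope="thread", now=0):
    """Model the pruning steps above. `instances` maps an instance key to
    its last-access time in seconds. Thread- and requestor-level
    containers tolerate overload up to 125% of the limit; node-level
    containers up to 120%."""
    overload_factor = 1.20 if scope == "node" else 1.25
    if len(instances) >= limit:
        # At the limit: delete instances last accessed more than 10 minutes ago
        instances = {k: t for k, t in instances.items() if now - t <= 600}
    if len(instances) > limit * overload_factor:
        # Over the overload ceiling: delete the oldest entries, regardless
        # of age, until the cache is back below the limit
        for key in sorted(instances, key=instances.get):
            if len(instances) < limit:
                break
            del instances[key]
    return instances
```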
As the count of data page instances approaches the limit, the system displays the PEGA0016 alert. See
PEGA0016 alert: Cache reduced to target size.
This pruning behavior is always active. You can opt to have either or both of the first two methods
active for any data page.
Thread-scoped pages — the system removes all instances of the data page from all threads of
the current requestor.
Requestor-scoped pages — the system removes all instances of the data page from the current
requestor.
Node-scoped pages — the system removes all instances of the data page from all nodes in the
cluster.
In an activity, use the Page-Remove step with the data page as the step page. This method deletes
read-only and editable data page instances regardless of the scope, as long as the data page is
accessible by the thread that runs the activity.
Use the ExpireDeclarativePage rule utility function that takes the data page name as a parameter
to delete read-only, non-parameterized data page instances:
For Thread-scoped data pages, the system removes data page instances from the current
thread of the requestor.
For Requestor-scoped data pages, the system removes data page instances from the
current requestor.
For Node-scoped data pages, the system removes data page instances from all nodes in the
cluster.
As a best practice, do not use the ExpireDeclarativePage rule utility function to remove a data page,
because this function will soon be deprecated.
Passivation
Passivation allows the state of a Java object — such as an entire Pega 7 Platform PRThread context,
including the clipboard state — to be saved to a file. A later operation, known as activation, restores the
object.
The Pega 7 Platform uses standard passivation in general operation, but you can also configure
passivation to shared storage in highly available environments. When all or part of a requestor
clipboard is idle for an extended period and available JVM memory is limited, the Pega 7 Platform
automatically saves clipboard pages in a disk file on the server. This action frees JVM memory for use by
active requestors. (Typically, such passivation occurs only on systems that support 50 or more
simultaneous requestors.)
The system passivates editable data page instances, but discards read-only data page instances.
For more information about passivation, see Creating a custom passivation method.
Set up or clean up the clipboard if you are running a test for which the output or execution depends on
other data pages or information. Click the Setup & Cleanup tab on the data page unit test case page to
configure setup and cleanup information.
Setup & Cleanup tab
For example, when you run a test, you can use a data transform to set the values of the pyWorkPage
clipboard page with the AvailableDate, ProductID, and ProductName properties. Before the test runs,
these values are retrieved from pyWorkPage and placed on the data page that you are testing.
After you run a data page unit test case, data pages that were used to set up the test environment are
automatically removed. You can also apply additional data transforms or activities to remove other
pages or properties from pages on the clipboard before you run more tests. Cleaning up the clipboard
ensures that data pages or properties on the clipboard do not interfere with subsequent tests.
For example, you can use a data transform to clear the AvailableDate, ProductID, and ProductName
properties from the pyWorkPage clipboard. Clear these values to ensure that the test uses the
appropriate information if the setup data changes for subsequent test runs. If you change the value of
the AvailableDate to May 2016, the data page uses that value, not the older value (December 2016).
Robot Manager 5 is required for using an RPA to source a data page. Download Robot Manager 5 from
Pega Exchange.
At run time, the data page invokes a REST endpoint that is hosted in Robot Manager. Using the data
input from the data page, Robot Manager queues a request to be executed by an RPA robot. The RPA
robot returns the result to Robot Manager, which in turn passes the result to the data page as a REST
response. The run-time architecture is shown in the following diagram.
Run-time architecture
When Robot Manager receives an automation status of "Completed with errors" or "Did not complete",
the assignment is treated as a failed automation. The original assignment is completed, and a new
assignment is opened and routed to the Failed Robot Automations workbasket. The Failed Robot
Automations workbasket can be changed to suit your business needs. Because the original assignment
is considered complete, any data returned by the robot is also returned to the data page. For this
reason, map the pyAutomationStatus property to the data page so that you are aware of the
automation status.
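A minimal sketch of acting on that mapped status follows; the status strings come from the text above, while the dict-based page representation is an illustrative assumption:

```python
# Automation statuses that Robot Manager treats as failed (from the text)
FAILED_STATUSES = {"Completed with errors", "Did not complete"}

def automation_failed(data_page):
    """Check the pyAutomationStatus value mapped onto the data page to
    decide whether the robot run should be treated as a failure."""
    return data_page.get("pyAutomationStatus") in FAILED_STATUSES

print(automation_failed({"pyAutomationStatus": "Did not complete"}))  # True
```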
If the robot returns data to Robot Manager that violates the validation criteria specified on the original
assignment, the original assignment remains open, and Robot Manager returns an HTTP 500 response
code, indicating that the data cannot be populated on the data page.
1. In Pega Express or Designer Studio, build a simple case type to model the overall robotic
automation. Include the input fields that you want to pass to the RPA robot in the case data model,
for example, customer ID, mailing address, or account number. In addition, include the output data
fields that you want the robot to pass back to your Pega application, for example, credit score,
account balance, or claim number.
2. In Pega Express or Designer Studio, build a simple case life cycle. The case life cycle must include
the Route to robot smart shape. The smart shape queues the automation request to the RPA robot.
This is the case type that the data page references.
3. In Pega Robotic Automation Studio, build the automation logic for the robot to execute. Once your
automation is completed, reference the robot activity name in the Queue for robot smart shape.
4. Configure the location of the Robot Manager host server and the authentication profile to use when
connecting to it.
5. Configure the data page.
1. From the Records explorer, click SysAdmin > Dynamic System Settings.
2. Search for the pegarobotics/RoboticAutomationRequestorProfile setting.
3. In the Value field, enter the name of the authentication profile that the REST connector will use to
connect to the REST service.
4. Click Save.
5. Search for the pegarobotics/RobotManagerHostDomain setting.
6. In the Value field, enter the domain details and the HTTP scheme of the Robot Manager host
server, for example, https://localhost:8443.
7. Click Save.
1. From the Data Type Explorer, expand the data type, and click the data page that you want to
source with an RPA.
2. In the Source field, click Robotic automation.
3. In the Robotic automation field, enter the name of the case type that you created.
4. In the Timeout(s) field, enter the length of time that the data page will wait for data to be returned
before timing out.
5. In the Request Data Transform field, enter the request data transform to use to provide input data
to the robot.
6. In the Response Data Transform field, enter the response data transform to use to convert the
results returned from the robot to the logical model used by the data page.
If you want to know the status of the automation to determine what action to take, configure your
response data transform to read the pyAutomationStatus property. The status can be Completed,
Completed with errors, or Did not complete.
Example
The following example shows the case type data model, data page, request data transform, and
response data transform for a data page that is sourced by an RPA that gets a customer's credit score.
The data model for the Get credit score case type has two fields, Account id and Credit score. This
information is used to configure the data transforms.
The request data transform passes in the customer ID and puts it into the Account id field that is
defined in the case type.
The response data transform takes the credit score from the physical data model and puts it into the
CustomerCreditScore field in the logical data model.
Response data transform
At run time, the browser requests the data page to be loaded. The data page notifies the client to run
the requested RDA automation, and then the browser notifies the desktop robot to run the automation.
The data that is returned from the desktop robot is pushed back to the server, the server runs the
response data transform, finalizing the data page, and the browser is notified to continue loading the
user interface. The run-time architecture is shown in the following diagram.
1. In either App Studio or Dev Studio, create a data type. For more information, see Creating a new
data type.
2. Ingest the fields into Pega Robotic Automation Studio and build the automation logic for the robot
to execute. Make note of the robotic automation ID because you need it to configure the data
page. For more information, see Configuration of Robotic Desktop Automation (RDA) with your
Pega 7.2.1 application.
3. Configure the data page.
The automation is invoked only when the data page is requested from a browser; that is, the data page
must be the source for a control in the user interface. The automation is not invoked if it is referred to
by an activity or other rule running on the server.
1. In Dev Studio, from the Data Type Explorer, expand the data type that you created in step 1
above, and click the data page that you want to source with an RDA.
2. In the Source field, click Robotic desktop automation.
3. In the Robotic Automation ID field, enter the Robotic Automation ID of the automation that was
created in Pega Robotic Automation Studio.
4. In the Timeout(s) field, enter the length of time that the data page should wait for automation to
complete before timing out.
5. In the Request Data Transform field, enter the request data transform to use to provide input data
to the robot.
6. In the Response Data Transform field, enter the response data transform to use to convert the
results returned from the robot to the logical model used by the data page.
7. Optional. To write data back to the application with which the robot is interacting, configure a save
plan:
1. In the Data page definition section, in the Edit mode field, select Savable.
2. In the Data save options section, in the Save type field, select Robotic desktop automation.
3. In the Robotic Automation ID field, enter the ID of the automation that will write the data back
to the application.
4. In the Timeout field, enter the length of time to wait before timing out.
5. In the Data Transform field, select the request data transform to use to provide input data to
the robot.
8. Click Save.
After the data page is configured, you can use it in your user interface. The most common way to do
this is with a section include on a Section rule. Invoke the automation by selecting Use data page in
the Page Context field and then selecting the data page. Select Defer load contents to allow extra time
for the robot to fetch the data before the user interface is rendered. For more information, see Harness
and section forms - adding a section.
Section rule cell properties
If you configure a save plan, configure the flow action that triggers the post processing that writes data
back to the application. For more information, see Saving data in a data page as part of a flow action.
The Access Group field (located on the Load Management tab of a data page in Pega 7 and on the
Definition tab of a declare page in earlier versions of PRPC) identifies the access group that PRPC uses
at runtime to locate the load activity that populates the page.
This access group must contain the appropriate RuleSets to provide access to the correct version of the
Load Activity, which populates the clipboard with instances of the data page or declare page.
Suggested Approach
The Access Group field is visible only when the Page Scope field on the Definition tab is set to Node.
The field is available on the Load Management tab of a data page in Pega 7:
It is available on the Definition tab in PRPC 5.1-6.3 SP1:
This approach avoids the following design issue: many users may share one data page (declare page)
rule definition (for shared clipboard pages at the Node level). Due to rule resolution and different
RuleSet lists, these users would run different versions of the Load Activity to create that page, or
would have different rules called by that activity. Thus, the first user who called the data page
(declare page) instance would set it up using their access group, with their Load Activity and their
data. The next user who accessed that instance might not have the same access group; if not, they
would have to reload the page with their own Load Activity and data.
To avoid continually reloading the page instance based on each user’s access group (which negates the
concept of “shared”), the access group to use is set in the data (declare) page instance. When the first
user calls this instance, the system switches to the specified access group and uses that RuleSet list to
run the Load Activity and create the Declare pages. Once the page is created, the system switches the
user back to their own access group.
Important: Whenever a user's processing references a data (declare) page instance, that instance
contains data that was loaded with the RuleSet list determined by the data (declare) page's access
group, not necessarily what is available in the user's RuleSet list. Changing your own access group
to get different data on the page instance has no effect.
Select an access group that provides the RuleSets and versions which have all the appropriate rules to
run the Load Activity.
Overview
Enabling keyed access
Load data into a page property from a list-structure data page
Load data into a page list property from a list-structure data page
Load data into a page list property from different instances of a list-structure data page
Overview
In general, when the system references a data page:
The system provides parameters that the data page can use to get precisely the data that the
object needs from the data source.
The data page verifies whether an instance of itself, created by an earlier call using the same
parameters, exists on the clipboard.
If the data page's refresh strategy permits, the data page responds to the request with the
data on the existing clipboard page.
Otherwise, the data page loads or reloads its data from the correct data source, using a
request data transform if necessary to structure the request so the data source can respond
to it.
The data source uses the information that the data page sends to locate and provide the data that
the application requires.
A response data transform maps and normalizes the data to the properties that require it.
The data page creates an instance of itself on the clipboard to hold the mapped data, and provides
it to the auto-populated property or direct reference.
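The reference flow above amounts to a parameterized cache: one instance per parameter set, loaded through a request transform, a data source, and a response transform. The following sketch is illustrative only; `clipboard`, the transform functions, and the sample source are invented names, not Pega APIs.

```python
# Minimal model of the data page reference flow: reuse an existing
# instance for the same parameters, otherwise load from the source.

clipboard = {}  # parameter set -> data page instance

def request_transform(params):
    # Structure the request so the data source can respond to it.
    return {"id": params["CustomerID"]}

def response_transform(raw):
    # Map and normalize source data onto the page's properties.
    return {"CustomerID": raw["id"], "Name": raw["name"].title()}

def data_source(request):
    # Stand-in for a connector or report definition.
    return {"id": request["id"], "name": "ada lovelace"}

def reference_data_page(params):
    key = tuple(sorted(params.items()))
    if key in clipboard:  # refresh strategy permits reuse
        return clipboard[key]
    page = response_transform(data_source(request_transform(params)))
    clipboard[key] = page  # the instance now lives on the clipboard
    return page
```

Calling `reference_data_page` twice with the same parameters returns the same instance; a different parameter set creates a separate instance, which matches the per-parameter behavior described above.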
To provide instant access to a particular page in a list-structure data page (such as a list of products,
with each embedded page holding information about a particular product), enable keyed access:
Return to top
3. At the right of the Definition tab for list-structure data pages is the "Keyed Page Access" section.
Check the Access pages with user defined keys checkbox; then, in the "Page List Keys" area,
define one or more keys for this data page:
4. You can select from the properties in the class of the data page, or click the magnifying glass icon
to create a new one.
If you want to use multiple keys (.SupplierID and .Industry, perhaps), click the + icon to add
additional key fields.
5. At run time, you must pass either all of the keys or none of them. This allows one data page to
serve a dual purpose:
It can display all the items it contains (no keys passed).
It can return only items that match the keys.
6. The "Allow multiple pages per key" option lets the data page return multiple embedded pages from
a single instance to an auto-populated page list property. You might use this option when
preparing to display a list of all the products offered by a particular provider.
You can only use this option with an auto-populated page list property. If the option is not selected,
you can populate a page property.
In this example, a page property holds the information about the selected supplier:
Select "Refer to a data page" or "Copy data from a data page", then select the data page by name. In
the KEYS section, provide a property reference or a literal value for each data page key.
When the property is referenced, the system automatically loads the data page using the value that the
property sends as the key.
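Keyed access behaves like an indexed lookup into the list of embedded pages. The sketch below is illustrative; the data and the `keyed_access` function are invented for the example, with `SupplierID` and `Industry` as the user-defined keys.

```python
# Model of keyed page access on a list-structure data page.
# With no keys passed, the page returns everything it contains;
# with "Allow multiple pages per key", a key set can match several
# embedded pages, otherwise it identifies a single page.

products = [
    {"SupplierID": "S1", "Industry": "Retail", "ProductID": "P1"},
    {"SupplierID": "S1", "Industry": "Retail", "ProductID": "P2"},
    {"SupplierID": "S2", "Industry": "Energy", "ProductID": "P3"},
]

def keyed_access(pages, keys=None, multiple=False):
    if not keys:  # no keys passed: display all items
        return list(pages)
    matches = [p for p in pages
               if all(p[k] == v for k, v in keys.items())]
    return matches if multiple else matches[0]
```

With both keys passed and `multiple=True`, all pages for one supplier come back, which is the "list of all the products offered by a particular provider" case described above.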
Return to top
The system holds the information about these customers in a node-level data page that it updates each
day, giving all customer service representatives (CSRs) access to the latest information. The list has
data on thousands of customers, and each CSR needs to get the right information quickly to deal with
customer interactions.
The solution is to automatically load the correct customer's information from the list structure data
page on the clipboard, based on the values of keys passed in.
Properties:
Data page:
Structure = List
Class = Data-Customer
Scope = Node
Edit Mode = Read Only
Parameter = LevelOfDetail
Select the "Access pages with user defined keys" option and specify .CustomerID as the value for
the data page key
What happens
1. The user or the system sets the values for .LevelOfDetail and .Customer.CustomerID.
2. The user or the system references an embedded property on .Customer. This triggers auto-
populating the customer data.
3. The system references the data page, passing the parameter values.
If an instance of the data page that corresponds to the parameter values exists on the
clipboard, and the data page's refresh strategy permits, the data page responds to the
reference with the existing instance.
Otherwise, it passes the parameters to the appropriate data source.
4. If the data source executes, it passes data to the response data transform, which maps data into
the instance of the data page.
5. The data page uses the key passed in to locate the correct page in the list.
6. The CopyCustomerSubset data transform, specified on the property, copies the required data from
the correct page in the list to the .Customer property.
If no data transform is specified, all data from the data page instance is copied into the property.
If the "Refer to a page property" option is selected, no data is copied into the property. Instead, the
system establishes a reference between the property and the correct embedded page in the data
page.
Return to top
A service returns products of a given showcase type, but does not group them by category. The
requirement is to let the user select both the showcase type and the product category.
Properties
Data page
What happens
1. The user or the system sets the values for .SelectedProductShowcaseType and
.SelectedProductCategoryID.
2. The user or the system references an embedded property on .Products. This triggers auto-
populating the product data.
3. The system passes parameters to the data page.
If an instance of the data page that corresponds to the parameter values exists on the
clipboard, and the data page's refresh strategy permits, the data page returns data from
the existing instance (skip to step 6).
Otherwise, it requests data from the data source so it can load a new instance of itself.
4. If there is a request to the data source, the data source uses the parameters passed to get and
provide the relevant data.
5. If the data source has provided data, the response data transform maps the data into the data
page.
6. The key identified on the data page is used to index into the correct pages in the list of data.
Because "Allow multiple pages per key" is selected, the data page returns all pages that match
.ProductCategoryID.
7. The system creates a reference between the .Products page list property and the products
identified in step 6 in D_ProductList.pxResults().
The requested list of products displays for the user.
The system does not copy anything to the case, since in this scenario you do not want to store
the list of products with the case.
Return to top
When a user adds a product to the shopping cart, the system needs to retrieve more detailed product
information than is displayed to the user from the correct database, and to store that information with
the order.
This scenario differs from the previous one in having a separate service for each supplier that returns a
list of all products for that supplier. The system uses parameters to load lists of products specific to a
supplier, and keyed data access to get specific product information from the correct supplier's product
list.
This scenario requires
Properties
Select the "Copy data from a data page" option (a data transform is required) and specify
D_ProductsList as the data page.
Set as parameters:
DetailLevel = .LevelOfDetail
SupplierID = .Products.SupplierID
Set .ProductID = .Products.ProductID as the key to the data page.
Select the "Retrieve each page separately" option. The data page either creates new instances of
itself for each unique .Products(n) based on .Products(n).ID, or copies different embedded pages
from the same data page instance.
Data page
Structure = List
Class = Data-Product
Scope = Node
Edit Mode = Read Only
Parameters = DetailLevel and ProductID
Select "Access pages with user defined keys" and select .ProductID as the page list key
The data source configuration requires a connector data source for each supplier.
What happens
1. When a user adds a product to their shopping cart, the system sets .LevelOfDetail to return full
product data for that item.
2. The user or the system references an embedded property on .Products that triggers
autopopulation of the product list.
3. The system passes parameters for each supplier to the data page.
If instances of the data page that correspond to the parameter values exist on the
clipboard, and the data page's refresh strategy permits, the data page returns data
from the existing instances.
Otherwise, the data page creates an instance of itself for each supplier based on the value of
the supplier parameter passed in.
Each page contains the appropriate level of detail about that provider's products.
The data page uses the SupplierID parameter value with the When conditions associated
with each data source to locate the data source that matches the supplier in question.
4. If a data source executes and returns data, the response data transform maps the data into the
instance of the data page.
5. For each page in the .Products page list property, the system uses the .LevelOfDetail and
.SupplierID parameters to locate or load the correct data page instance in memory, then uses
.ProductID to key into that instance and return the embedded page containing the correct product
information.
6. As the user adds products to their cart, the simple act of the section rendering the shopping cart
causes steps 1 through 5 to repeat for each product added.
Return to top
A data transform is a structured sequence of actions. When the system invokes the data transform, it
invokes each action in turn, following the sequence that is defined in the data transform's record form.
You can use a data transform to:
Prior to PRPC 6.2, data transforms were known as model rules, and were used only to set property
values. Data transforms provide more capabilities than model rules.
Data transform to copy a subset of data from the data page to the property
On the Edit Property form, the Optional Data Mapping field is displayed when you select Copy data from
a data page. Use this data transform to copy a subset of the data from the data page to the property. If
you do not specify a data transform, the system copies all the data from the data page to the property.
Edit property form
Return to top
For example, assume that you have a property of type TimeofDay that is formatted as a standard time
with hour, minute, and second without punctuation. To add 12 hours to the property:
1. Use a Set action to assign the property value to a second property that is of type TimeofDay.
2. On the right side of the expression, enter the property reference plus the fraction. In this example,
enter .5 to add 12 hours.
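The fraction-of-a-day arithmetic above can be illustrated with standard datetime types: adding .5 (half a day) shifts a time value by 12 hours, wrapping past midnight. The helper name and the HHMMSS format handling below are for illustration only.

```python
# Adding a day fraction to a time formatted as HHMMSS without
# punctuation: .5 of a day is 12 hours.

from datetime import datetime, timedelta

def add_day_fraction(hhmmss, fraction):
    t = datetime.strptime(hhmmss, "%H%M%S")
    shifted = t + timedelta(days=fraction)
    # Keep only the time-of-day part, so the value wraps at midnight.
    return shifted.strftime("%H%M%S")
```

For example, `add_day_fraction("090000", .5)` yields `"210000"`, and applying it again wraps back to `"090000"`.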
Troubleshooting
The articles in this section contain information to help you troubleshoot issues related to data
management.
However, if you made changes to your application or database outside of Designer Studio, or if you use
REST, HTTP, or SOAP integration, you might need to reconfigure some aspects of your application before
you port to a new database. For example, if you used the database vendor tools to create or modify the
database, some constructs that are specific to a particular database platform (for example, Oracle)
might not automatically translate into a similar construct on another platform (for example, Microsoft
SQL Server). If your application includes a database-specific construct or does not meet the
recommendations in this article, you might not be able to automatically deploy your application to other
database platforms.
Variable-length types
When using variable-length types, follow the recommendations in the following table. These value
ranges are for all supported databases and ensure that you can port the data to other databases.
VARCHAR/VARCHAR2 or NVARCHAR/NVARCHAR2
Decimal
Integer
Decimals: Decimal columns created or modified to have precision or scale outside the listed range
are not portable by using the Pega 7 Platform tools. For example, Microsoft SQL Server supports a
precision of up to 38 (38 total digits), but IBM DB2 for Linux, UNIX, and Windows supports only 31.
Attempting to migrate data from this Microsoft SQL Server column to IBM DB2 for Linux, UNIX, and
Windows would result in the loss of data and might fail.
Integers: As a best practice, use the decimal type with a scale of 0 in place of integer types.
VARCHAR: IBM DB2 databases interpret VARCHAR(n) as n bytes; other databases interpret it as n
characters. Some platforms allow you to specify the type of interpretation. For example, on Oracle
databases you can specify VARCHAR(n byte) or VARCHAR(n char). This distinction is particularly
important when dealing with Unicode data. Although ASCII uses one byte per character, UTF-8
uses up to 4 bytes per character, and UTF-16 uses 2 or 4 bytes per character.
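The byte-versus-character distinction is easy to demonstrate: the same string needs different numbers of bytes depending on the encoding, so a VARCHAR(n) interpreted as n bytes can overflow where n characters would not. The sample characters below are chosen for illustration.

```python
# Byte lengths of single characters in UTF-8 and UTF-16.
# ASCII is 1 byte in UTF-8; accented Latin is 2; the euro sign is 3;
# characters outside the Basic Multilingual Plane take 4 bytes in
# UTF-8 and a 4-byte surrogate pair in UTF-16.

for ch in ["A", "é", "€", "𐍈"]:
    utf8_len = len(ch.encode("utf-8"))
    utf16_len = len(ch.encode("utf-16-be"))
    print(ch, utf8_len, utf16_len)
```

So a column sized for 10 "characters" may need up to 40 bytes of storage under UTF-8, which is why byte-based VARCHAR sizing deserves headroom for Unicode data.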
Views
ANSI SQL views are fully supported. Materialized views are database-specific and therefore not portable.
Index limitations
The maximum portable index size across all vendors is 900 bytes, which means that the sum of the
maximum lengths of all columns in the index must be 900 bytes or less.
Any integration functionality not listed in this article or in Integration in your Pega Cloud environment
might cause portability problems.
Created Artifacts
Conditions when a data source cannot be updated in App Studio
Data and integration layers
Relationship between the response structure and the data page
Error messages
Locked ruleset processing
Supported authentication types
Created artifacts
When you create a data type that uses Pega as the system of record, the following artifacts are created:
When you create a data type that uses REST to connect to the system of record, the following artifacts
are created:
Data layer:
Data class for the new data type
Data type properties
Data type is added to the application
Integration layer:
Integration class
Connect REST rule
Authentication profile and authentication profile settings
Resource path settings
Response data transform
Response JSON data transform
Data page
When you replace a data source, the following artifacts are created or versioned:
The Data Type wizard creates all artifacts when you finish the wizard, except when you add a property
in the Data Mapping page. The new property is created immediately. If you cancel or close the wizard,
the property persists; it is not deleted.
When you replace or update a data type, previously generated artifacts are not deleted. New artifacts
are created.
The JSON data transform or connector is opened and saved in Dev Studio.
The normal response data transform is modified in Dev Studio so that it no longer references only
one updatable JSON data transform.
The data page is modified in Dev Studio in any of the following ways:
The data page no longer references a normal data transform as the response data transform.
The data page references a request data transform.
The connector parameters for the data source are obtained from the current parameter page
(Pass current parameter page is set to true).
The integration layer is set when you generate your application. You can modify the integration layer on
the Cases & Data tab of the application rule form.
When you replace or update a data source, you can replace any list or single object with any response,
but you always have to match cardinality when mapping fields.
Error messages
Error messages are displayed for invalid user inputs, invalid system-generated inputs, and environment
issues. The following conditions might cause an error to be displayed:
To help with troubleshooting, you can view the actual JSON request and response by clicking the
Information icon on the Data Mapping page.
Call information
Basic
NTLM