Applications Infrastructure
User Guide
Release 8.1.2.0.0
April 2024
OFS Analytical Applications Infrastructure User Guide
Copyright © 2024 Oracle and/or its affiliates. All rights reserved.
This software and related documentation are provided under a license agreement containing
restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly
permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate,
broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any
form, or by any means. Reverse engineering, disassembly, or de-compilation of this software, unless
required by law for interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-
free. If you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone
licensing it on behalf of the U.S. Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated
software, any programs installed on the hardware, and/or documentation, delivered to U.S.
Government end users are “commercial computer software” pursuant to the applicable Federal
Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication,
disclosure, modification, and adaptation of the programs, including any operating system, integrated
software, any programs installed on the hardware, and/or documentation, shall be subject to license
terms and license restrictions applicable to the programs. No other rights are granted to the U.S.
Government.
This software or hardware is developed for general use in a variety of information management
applications. It is not developed or intended for use in any inherently dangerous applications,
including applications that may create a risk of personal injury. If you use this software or hardware in
dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup,
redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim
any liability for any damages caused by use of this software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be
trademarks of their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC
trademarks are used under license and are trademarks or registered trademarks of SPARC
International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or
registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content,
products, and services from third parties. Oracle Corporation and its affiliates are not responsible for
and expressly disclaim all warranties of any kind with respect to third-party content, products, and
services unless otherwise set forth in an applicable agreement between you and Oracle. Oracle
Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to
your access to or use of third-party content, products, or services, except as set forth in an applicable
agreement between you and Oracle.
For information on third party licenses, click here.
2.6 February 2024 Updated the maximum number of allowed characters in a PLC code for the CALL, CREATE, and REPLACE functions. (35592826)
2.5 January 2024 1. Updated the Session timeout value in Update General Details to more than 10 minutes. (Doc 36099483)
2. Added the REST API for Object Migration. (36150017)
3. Updated the Erwin version to 12.5. (36167048)
4. Included SAML_Entity in System Configuration. (36198751)
2.3 February 2023 Updated the restricted characters for creating Attributes and Members in Adding Attribute Definition and Adding Member Definition. (34908950)
2.2 February 2023 Updated information regarding Model Upload Script Path and JSON comparison (35060351) in the Sequence of Scripts Execution and Model Upload Using OFSAA Data Model Descriptor (JSON) File sections.
2.1 January 2023 Included the procedure to attach supporting documents during data entry in Data Maintenance Interface. Updated the steps for two-factor authentication for approving records in Data Maintenance Interface. (34751606)
2.0 December 2022 Updated the Setting Preferred Language section. (Doc 34787889)
1.9 November 2022 Updated alert information related to incorrect username, password, FTP server, and port in the Database Server and Application Server sections. (34826719)
1.8 October 2022 Updated record selection details in Editing Form Details and Deleting Form Details. (34598676)
1.7 August 2022 Updated the Erwin versions supported by OFSAAI in the Upload Business Model section. (Doc 34511386)
1.6 July 2022 Added a note (JIT Provisioning) in the Update General Details section. (Doc 34122972)
1.5 July 2022 Added the Configuring Token-Based RFI section. (Doc 34250090)
1.3 March 2022 Added the Command Line Utility for Partition-Based Derived Entities section. (Doc 33482484)
1.22 March 2022 Updated the following sections (Doc 33929561):
• Creating User Status Report
• Creating User Access Report
1.2 January 2022 Added the Command-line Utility to Bulk Import User Groups to IDCS section. (Doc 33410774)
1.1 December 2021 • Updated the General Configurations if Big Data Processing License is Enabled section. (33453948)
• Updated the Data Maintenance Interface section. (Doc 33188698)
• Added the Command Line Utility for Resave, Refresh and Delete Partitions section. (Doc 33482484)
• Updated the Monitoring Batch section. (Doc 33598325)
1.0 November 2021 Created. Added the Command-Line Utility for SQL Modeler to JSON (ODM) section for the enhancement in Release 8.1.2.0.0.
3.1.3 Model Upload Using OFSAA Data Model Descriptor (JSON) File
3.2 OFSAA Data Model Extensions through the SQL Data Modeler
3.2.1 Customization Process
4.4.3 Versioning and Make Latest Feature of Data Mapping
4.7.2 General Configurations if Big Data Processing License is not enabled
4.11.3 RAC
7.8.3 Data Entry for Forms Created using the Auto-Approve Option
8.3 Process
8.3.1 Create Process
8.3.2 View Process Definition
8.7.2 How Run Rule Framework is used in LRM Application
10 Questionnaire
10.5.6 Wrap and Unwrap Questions from the Library
10.6 Define the Questionnaires
10.6.1 Create the Questionnaire in the Library
10.6.6 Wrap and Unwrap the Questionnaire from the Library
11.1.13 View OFSAA Product Licenses After Installation of Application Pack
12 Reports
13.1 Access Object Administration and Utilities based on Information Domain
13.2 Object Security Concept in OFSAAI
14.2.2 Command Line Utility for Fire Run Service / Manage Run Execution
14.8 Command Line Utility for Resaving Derived Entities and Essbase Cubes
14.8.1 Command Line Utility for Resave, Refresh and Delete Partitions
14.8.2 Command Line Utility for Partition-Based Derived Entities
16.3.11 ESIC Operations Using Command Line Parameters and Job Types
18.1 OFS Analytical Applications Infrastructure User Groups and Entitlements
18.2 OFS Analytical Applications Infrastructure User Roles
18.3 OFS Analytical Applications Infrastructure Functions
18.4 OFS Analytical Applications Infrastructure Group - Role Mapping
1 Preface
OFSAAI provides the framework for building, running, and managing applications, along with out-of-the-box support for various deployment models, compliance with technology standards, a host of operating systems, middleware, and databases, and integration with enterprise-standard infrastructure.
The information contained in this document is intended to give you an exposure to and an understanding of the features in Oracle Financial Services Analytical Applications Infrastructure.
• What is New in this Release of OFS AAAI Application Pack
• About this Manual
• Audience
• Recommended Skills
• Recommended Environment
• Prerequisites
• Conventions and Acronyms
Feature Description
Filter: Download Filter Data, Bulk Edit, and Upload using the utility to download and upload Filter Definitions.
Link Analysis: Performance optimization to reduce the time taken to load graphs.
Model Upload: The Command-Line Utility for SQL Modeler to JSON (ODM) to transform SQL Modeler to ODM (JSON).
Object Migration: The reverse population of Migrated Hierarchies using Offline Migration.
Operations Menu: Privilege-based access to the functionalities in all of the Batch Screens.
Refresh Rate of Monitoring Batch: Change the default setting to 10 seconds to refresh the Batch Monitor and Batch Execution windows.
Security Management System:
• Utility to generate a CSV file containing User, Groups, and User-Group Mapping in User Administrator.
• The following enhancements in User Maintenance:
  When you create a User, the Enable User check box is selected by default.
  When you create a User, Start Date-Time displays the current Date-Time by default and the End Date displays 31-December-2050 by default.
  Lock statuses such as Enable, Disable, and Delete, with Authorization required for status updates.
  Authorization is not required to view the Enable User window in the User Activity Report.
For more details, see the Oracle Financial Services Advanced Analytical Applications Infrastructure Release
8.1.2.0.0 Readme.
1.3 Audience
This guide is intended for:
• Business Analysts who are instrumental in solution designing and creation of statistical models
using historical data.
• System Administrators (SA) who are instrumental in maintaining and executing batches, making the
Infrastructure Application secure and operational, and configuring the users and security of
Infrastructure.
1.6 Prerequisites
• Successful installation of Infrastructure and related software.
• Good understanding of business needs and administration responsibilities.
• In-depth working knowledge of business statistics.
Conventions Description
AW Analytical Workspace
BA Business Analysts
BI Business Intelligence
BP Business Processor
CF Cash Flow
DQ Data Quality
ES External Scheduler
IP Internet Protocol
NE Non Editable
SA System Administrator
SID System ID
TBD To be Deleted
TP Transfer Pricing
2 OFSAAI - An Overview
Oracle Financial Services Analytical Applications Infrastructure (OFSAAI) is a general-purpose Analytics
Applications infrastructure that provides the tooling platform necessary to rapidly configure and develop
analytic applications for the financial services domain. It is built with Open-Systems Compliant
architecture providing interfaces to support business definitions at various levels of granularity.
Applications are built using OFSAAI by assembling business definitions or business metadata, starting from the data model to lower-grain objects such as Dimensions, Metrics, Security Maps, and User Profiles, up to higher-order objects such as Rules, Models, and Analytic Query Templates, which are assembled using the lower-grain ones. In addition to application definition tools, it provides the entire gamut of services required for Application Management, including Security Service, Workflow Service, Metadata Management, Operations, Life-cycle Management, and public APIs and Web Services that are exposed to extend and enrich the tooling capabilities within the applications.
Oracle Financial Services Analytical Applications Infrastructure is the complete end-to-end Business
Intelligence solution that is easily accessible via your desktop. A single interface lets you tap your
company’s vast store of operational data to track and respond to business trends. It also facilitates
analysis of the processed data. Using OFSAAI you can query and analyze data that is complete, correct,
and consistently stored in a single place. It also lets you filter the data that you are viewing and using for analysis.
It allows you to personalize information access for users based on their role within the organization. It also provides a complete view of your enterprise, along with the following benefits:
• Track enterprise performance across information data store.
• Use one interface to access all enterprise databases.
• Create consistent business dimensions and measures across business applications.
• Automate the creation of coordinated data marts.
• Use your own business language to get fast and accurate answers from all your databases.
• Deploy an open XML and web-based solution against all major relational or multi-dimensional
databases on Microsoft Windows and UNIX servers.
This chapter provides an overview of Infrastructure, its components, and explains how these components
are organized in the Splash window with the user login process.
All components are encapsulated within a common Security and Operational framework as shown in the
following figure:
Infrastructure also supports many business analytical solutions, such as Operational Risk, PFT, and Basel, which are licensed separately to the organization. This manual provides an overview of only the technological components.
For a detailed overview of OFSAAI modules, see Modules in OFSAAI section.
The SA will provide you with a link through which you can access Oracle Financial Services Analytical
Applications. You can access the login window through your web browser using the URL http(s)://<IP Address of the Web Server>:<servlet port>/<context name>/login.jsp.
You can also log in to the application with the host name instead of the IP address.
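For example, assuming a hypothetical host name ofsaa.example.com, servlet port 9080, and context name OFSAAI (substitute the values provided by your System Administrator), the login URL would look like:
https://ofsaa.example.com:9080/OFSAAI/login.jsp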
You can select the required language from the Language drop-down list. The language options displayed
in the drop-down list are based on the language packs installed for the OFSAA infrastructure. Based on
the selected Language, the appropriate language login window is displayed.
Enter the User ID and Password provided by the System Administrator and click Login. You will be
prompted to change your password on your first login. For details on how to change password, see the
Changing Password section.
In case the OFSAA setup has been configured for OFSAA native Security Management System (SMS)
Authentication, the password to be entered will be as per the password restrictions set in the OFSAA SMS
repository.
1. Enter your User ID and Password (as in LDAP store) in the respective fields.
2. Select the appropriate LDAP Server from the drop-down list, against which you want to get
authenticated. This is optional. If you do not select any server, you will be authenticated against the
appropriate LDAP server.
• If LDAP Authentication & SMS Authorization is configured as Authentication Type from the
Configuration window and the SMS Auth Only checkbox is selected for the user in the User
Maintenance window.
• If SSO Authentication & SMS Authorization is configured as Authentication Type from the
Configuration window and the SMS Auth Only checkbox is selected for the user in the User
Maintenance window.
In the Change Password window, enter a new password, confirm it, and click OK to view the OFSAA Login
window. Refer to the following guidelines for Password Creation:
• Passwords are displayed as asterisks (stars) while you enter. This is to ensure that the password is
not revealed to other users.
• Ensure that the entered password is at least six characters long.
• The password must be alphanumeric with a combination of numbers and characters.
• The password should not contain spaces.
• Passwords are case-sensitive; ensure that Caps Lock is not turned on.
• By default, the currently used password is checked for validity if password history is not set.
• The new password should be different from previously used passwords based on the password
history, which can be configured.
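For example, a hypothetical password such as Ofsaa2024pwd would satisfy these guidelines: it is longer than six characters, combines letters and numbers, and contains no spaces.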
For more information, see the Configuration section in System Configuration chapter.
If you encounter any of the following problems, contact the System Administrator:
• Your user ID and password are not recognized.
• Your user ID is locked after three consecutive unsuccessful attempts.
The OFSAA Landing Page shows the Applications that you have access to as tiles. Clicking the respective Application Tile launches that particular Application. You can change the Landing Page based on your preference.
For more information, see the Preferences section.
2.6.1 Header
Figure 6: OFSAA Header
Hamburger/Navigation Menu Icon - This icon is used to trigger the Application Navigation Drawer.
Application Icon - This icon is used to show the available Applications installed in your environment at any time.
Administration Icon - This icon is used to go to the Administration window. The Administration window displays modules such as System Configuration, Identity Management, Database Details, Manage OFSAA Product Licenses, Create New Application, Information Domain, Translation Tools, and Process Modelling Framework as Tiles.
Reports Icon - This icon is used to launch various User Reports such as User Status Report, User Attribute Report, User Admin Activity Report, User Access Report, and Audit Trail Report.
Language Menu - It displays the language you selected in the OFSAA Login Screen. The language options displayed in the Language Menu are based on the language packs installed in your OFSAA instance. Using this menu, you can change the language at any point in time.
User Menu - Clicking this icon displays the following menu:
Last Login Details - This displays the last login details as shown:
Here the navigation items appear as a list. The First Level menu shows the installed applications. Clicking
an application displays the second-level menu with the application name and Common tasks menu. The
arrangement of the menu depends on your installed application.
Clicking an item in the menu displays the next level sub menu and so on. For example, to display Data
Sources, click Financial Services Enterprise Modeling>Data Management>Data Management
Framework>Data Management Tools>Data Sources.
Click Hierarchical Menu to display the navigation path of the current sub menu as shown:
The RHS Content Area shows the Summary Page of Data Sources. Click anywhere in the Content Area to
hide the Navigation Drawer. To launch it back, click the Hamburger icon .
Click Home to display the OFSAA Landing Screen.
• Questionnaire module is an assessment tool, which presents a set of questions to users, and
collects the answers for analysis and conclusion. It can be interfaced or plugged into OFSAA
application packs.
• System Configuration & Identity Management module facilitates System Administrators to
provide security and operational framework required for Infrastructure. Administration window has
a Tiles menu with Tiles like System Configuration, Identity Management, Database Details, Manage
OFSAA Product Licenses, Create New Application, Information Domain, Translation Tools and
Process Modelling Framework.
• Object Administration facilitates System Administrators to define the security framework with the
capacity to restrict access to the data and metadata in the warehouse, based on a flexible, fine-
grained access control mechanism. These activities are mainly done at the initial stage and then on
a need basis. It includes sections like Object Security, Object Migration, and Utilities (consisting of
Metadata Difference, Metadata Authorization, Save Metadata, Write-Protected Batch, Component
Registration, Transfer Document Ownership, and Patch Information).
<RollingFile name="UMMAPPENDER"
    fileName="/scratch/ofsaaweb/weblogic/user_projects/domains/cdb/applications/cdb.ear/cdb.war/logs/UMMService.log"
    filePattern="/scratch/ofsaaweb/weblogic/user_projects/domains/cdb/applications/cdb.ear/cdb.war/logs/UMMService-%i.log">
  <PatternLayout>
    <Pattern>[%d{dd-MM-yy HH:mm:ss,SSS zzz aa}{GMT}] [%-5level] [WEB] %m%n</Pattern>
  </PatternLayout>
  <Policies>
    <SizeBasedTriggeringPolicy size="5000 KB" />
  </Policies>
  <DefaultRolloverStrategy max="5" /> <!-- number of backup files -->
</RollingFile>
3. To change the log file size, modify the value set for SizeBasedTriggeringPolicy size.
4. To change the number of backup files to be retained, modify the value set for
DefaultRolloverStrategy max.
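For instance, a minimal sketch of the same two settings changed to retain ten backup files of roughly 10 MB each (the values shown are illustrative only):
  <Policies>
    <SizeBasedTriggeringPolicy size="10000 KB" />
  </Policies>
  <DefaultRolloverStrategy max="10" /> <!-- retain up to 10 backup files -->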
Field Description
New: You can upload a new business model only when you are uploading a model for the first time for the selected Information Domain. This option is not available for subsequent model uploads.
The JSON / erwin and DB Catalog options are available for a New Model Upload.
Rebuild: You can re-build a model on the existing model in the database. The existing model is replaced with the current model details. This option is available with subsequent model uploads, and the current model uploaded is considered the latest model for the selected Information Domain.
Any incremental changes are considered a 'Rebuild' if DB Catalog is selected as the Model Upload option.
Sliced: You can quickly upload the Sliced model with only the incremental changes, without merging the tables or columns of an existing model. In a Sliced Model Upload, you can incrementally add new tables, add or update columns in the existing tables, and add or update primary or foreign keys in the existing model. You can also drop a column or a primary or foreign key. However, dropping a table is not supported. This option is available only with subsequent model uploads.
• Sliced Model Upload is faster compared to other upload types as it optimizes the system memory usage and reduces the file size of erwin.xml.
• Sliced is not supported if DB Catalog is selected for the Model Upload option.
In a sliced model upload, if the version of the Base model existing in the environment is higher than that of the Sliced model being uploaded, then the columns that are not present in the Sliced model are not dropped. For more information, see the Model Versioning section.
Sliced Model Upload performs a comparison against the existing entity JSON available in the aai_dmm metadata table. Based on the checksum values:
• If the checksum matches, the JSON is ignored.
• If the checksum values do not match, the model upload is carried out and the existing JSON is overwritten.
The Business Model Upload Summary window facilitates uploading the required Business Model and displays the summary of previously uploaded Business Models with their Name, Type (New/Incremental/Rebuild/Sliced), Enable NoValidate status (Y or N), Result of upload (Success/Failed/Running), Start Date, End Date, Log File path, and Status. You can click the View Log link in the Status column corresponding to the required model to view the Model Upload details in the View Log Details window.
You can also search for a specific model based on the Name or Type (New/Incremental/Rebuild/Sliced) existing within the system.
• OFSAAI supports Erwin 9.8, 2018 R1, 2019 R1, 2020 R1, 2020 R2, 2021 R1, and 12.1 generated XMLs in
Model Upload process.
• From time to time, Erwin withdraws support for lower versions. However, you can open prior-version data models using the latest versions of the Erwin modeler and save them as repository files in the OFSAA-supported versions.
• By default, OFSAAI supports Data Model up to 2 GB. To configure the permissible size specific to
requirements, see the Frequently Asked Questions section in OFS AAAI Installation Guide.
• Ensure that the XML file to be uploaded is saved in “All Fusion Repository Format”.
• Datatypes of TIMESTAMP WITH TIME ZONE and TIMESTAMP WITH LOCAL TIME ZONE are
supported for Model Upload. However, the processing of these datatypes is not supported in
OFSAAI.
To upload a Business Model:
1. From the Business Model Upload Summary window, click Add. The Business Model Upload
window is displayed.
2. (Mandatory) Enter a Name for the model being uploaded. Ensure that the specified name does not exceed 30 characters in length and does not contain special characters such as #, %, &, ‘, and “.
3. Select the required Upload Option. The options are JSON / erwin XML, DB Catalog, and Data
Model Descriptor. For more information on each option, see the corresponding sections:
Model Upload Using JSON / erwin XML
Model Upload Using DB Catalog
Model Upload Using OFSAA Data Model Descriptor
NOTE For subsequent model uploads, you must select the same
Upload Option as used in the first model upload. That is, if you
selected erwin as the Upload Option for the first-time model
upload, then the subsequent model uploads must be done
using the erwin option only.
4. Click Upload Model. The model upload execution is triggered and you are re-directed to the Model
Upload Summary window with the upload details in the summary grid. The “Status” of current
upload is indicated as Running and after the process is completed, the status is updated as either
Success or Failed depending on the execution.
NOTE Index creation is not supported from Apache Hive version 3.0 and higher. To skip the Index creation, you can update APACHE_HIVE_VERSION in the Configuration table and restart FICServer.
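A minimal sketch of such an update, assuming the standard OFSAA Configuration table layout with PARAMNAME and PARAMVALUE columns and an illustrative version value (verify the table and column names and the value applicable to your installation before running it):
UPDATE CONFIGURATION SET PARAMVALUE = '3.1' WHERE PARAMNAME = 'APACHE_HIVE_VERSION';
COMMIT;
Restart FICServer after the update so that the new value takes effect.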
NOTE To display the current upload status, you must have a connection pool
established to access data from the database. For more information on
connection pooling, see OFS AAAI Installation Guide.
You can click View Log to view the model upload details and also Download Log File to a location for
reference.
NOTE Even if the object registration fails, the Model Upload process will be
successful. In such cases, you must manually do the object registration by
running the Command line utility for Object Registration, since object
registration is mandatory for subsequent model upload to be successful.
NOTE The Model Upload process is stopped if any errors are encountered; it does not proceed to completion to capture all the errors.
<jsonfile>TBL_ACC~80000.json</jsonfile>
<jsonfile>TBL_ACC_CLASS~80000.json</jsonfile>
<jsonfile>TBL_LOAN_APP~80000.json</jsonfile>
<jsonfile>TBL_CUST~80000.json</jsonfile>
<jsonfile>TBL_LOAN~80000.json</jsonfile>
</jsonfiles>
</jsonupload>
You can upload a JSON or XML file (erwin or Database) by hosting it on the server and customize the update process while uploading a Business Model.
Figure 13: Business Model Upload window for JSON / erwin XML
To perform Model Upload using the JSON / erwin option, follow these steps:
1. In the Business Model Upload window, select Upload Options as JSON / erwin XML.
2. Select the Upload Mode from the drop-down list. You can select New only for the first model
upload. For subsequent uploads, you can select Incremental, Rebuild, or Sliced upload mode. For
more information, see Model Upload modes. For the Sliced Model Upload, you can use SQL Data
Modeler. For more information, see OFSAA Data Model Extensions through the SQL Data Modeler.
3. Select the Object Registration Mode from the drop-down list as Full Object Registration or Incremental Object Registration. You can select Incremental Object Registration when the Upload Mode is Incremental or Sliced. It is recommended to select Incremental Object Registration only if the changes are minimal.
4. In the Upload File Details pane, select the upload file type from the following:
JSON: Select the ODM File for Upload from the drop-down list.
XML: Select the erwin XML or Database XML file for upload from the drop-down list.
5. The list displays the ODM, erwin, or Database files that reside in the default server path (that is, ftpshare (Application layer)/<infodom>/erwin/erwinXML).
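For example, for a hypothetical Information Domain named MYINFODOM, an erwin XML placed at the following location would appear in the drop-down list (the Infodom and file names are illustrative):
<ftpshare path>/MYINFODOM/erwin/erwinXML/fsi_base_model.xml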
NOTE The erwin XML file name should have only alphanumeric
characters and underscore.
NOTE Only the table scripts are created and they must be updated
manually. If you choose this option for the first time and later
perform an Incremental / Sliced / Complete Model Re-build,
you must manually synchronize the schema with the Database
Schema.
b. Select Yes for the Generate DDL Execution Logs option if you want execution audit
information such as execution start time, end time, and status of each SQL statement Run as
part of the Model Upload process. The execution log file is available under the
ftpshare/<INFODOM>/executionlogs folder.
c. Select Yes for the Refresh Session Parameters option to use Database session parameters
during the model upload process.
For more information, see Configuring Session Parameters section.
d. Select Yes to directly update the Alter constraints in NOVALIDATE State. During the
Incremental or Sliced Model Upload, alterations to the constraints consume a lot of time as the
constraints have to be validated.
If you select Yes, an option to alter the constraints in the NOVALIDATE state is enabled
and it will not check the existing data for the integrity constraint violation. It is useful when
the existing data is clean. Therefore, NOVALIDATE potentially reduces the additional
overhead of the constraint validation and enhances the performance.
By default, the option selected is No and the option to alter the constraints is not enabled.
It checks the existing data for the integrity constraint violation.
NOTE erwin is the primary and boot-strap mode to register the Data-
Model with the OFSAA ecosystem. The DB Catalog option does
not take care of the logical artifacts. Hence, do not consider DB
Catalog as a replacement for erwin.
2. Select the Upload Mode from the drop-down list. You can select New only for the first upload. For
subsequent uploads, you can select Rebuild.
For more information, see the Model Upload modes section.
If the table details are specified in the
$OFSAA_HOME/conf/dmm/Input_DBCatalog_Objects.properties file, then the application selects the specified tables for DB Catalog. The Entity Filters are not available for selection if the table details are specified in the properties file.
If the tables are not specified, then the application uploads all the tables from the database.
3. Specify the Entity Filters by entering details in the Starts With, Contains, and Ends With fields.
The Filters are patterns for entity names in the Database and can restrict the Database Model
generation to a specific set of entities. The Database Model is generated even if one of the specified
filter conditions matches.
4. You can also specify multiple conditions for a single filter type using comma-separated values. For
example, tables starting with TB and TM can be specified as “TB, TM”.
3.1.3 Model Upload Using OFSAA Data Model Descriptor (JSON) File
This feature allows you to resume the Data Model upload from the logical Data Model, in the form of
OFSAA Data Model Descriptor File (JSON) that is generated in the base environment. This helps in
speeding up the Model Upload process by skipping the XSL transformation in the primary environment.
This feature can be used if the same model in the development environment should be uploaded to
multiple OFSAA instances in the production environment. In such scenarios, you can copy the model
definition (JSON) files and scripts to the target environment and run the command line utility
CopyUpload.sh, to integrate those files in the target environment. You can choose to resume the model
upload process from script generation or script execution.
Following are the steps involved in the model upload using OFSAA Data Model Descriptor file:
1. Copy the required files from source to target environment based on the start point from where you
want to resume the model upload process.
2. Execute the CopyUpload utility.
3. Perform Model Upload.
/ftpshare/<INFODOM>/json/fipjson_-1/*.json
Script Execution DB Scripts - /ftpshare/<INFODOM>/json/scripts and
/ftpshare/<INFODOM>/scripts folders
Figure 15: Business Model Upload window for Data Model Descriptor
2. Select the Object Registration Mode from the drop-down list as Full Object Registration or
Incremental Object Registration. It is recommended to select incremental only if the changes are
minimal.
You have a third party tool or ETL tool to manage the schema updates.
Database consistency and schema updates are maintained manually by the Database
Administrator.
NOTE Only the table scripts are created and they must be updated
manually. If you choose this option for the first time and later
perform an Incremental / Sliced / Complete Model Re-build,
you must manually synchronize the schema with the database
schema.
b. Select Yes for the Generate DDL Execution Logs option if you want execution audit information such as execution start time, end time, and status of each SQL statement run as part of the Model Upload process. The execution log file is available under the ftpshare/<INFODOM>/Erwin/executionlogs folder.
c. Select Yes for the Refresh Session Parameters option to use Database session parameters
during the model upload process. For more information, see the Configuring Session
Parameters section.
d. Select Yes to directly update the Alter constraints in NOVALIDATE State. During the
Incremental or Sliced Model Upload, alterations to the constraints consume a lot of time as the
constraints have to be validated.
If you select Yes, an option to alter the constraints in the NOVALIDATE state is enabled
and it will not check the existing data for the integrity constraint violation. It is useful when
the existing data is clean. So, NOVALIDATE potentially reduces the additional overhead of
the constraint validation and enhances the performance.
By default, the option selected is No and the Option to alter the constraints is not enabled.
It checks the existing data for the integrity constraint violation.
3.1.3.4 Rollback
Rollback of the Model Upload happens to the state just before the CopyUpload.sh process. The migrated
files are preserved under the ftpshare/<INFODOM>/archive path.
1. Automatic Rollback occurs in the following cases:
a. When your start point is script generation:
Creation of script failed
Execution of script failed
b. When your start point is script execution:
The execution of scripts failed.
2. In case of failure, for troubleshooting, check the following log files:
$FIC_HOME/ficapp/common/FICServer/bin/nohup.out
$FIC_HOME/ficapp/common/FICServer/logs/ETLService.log
$FIC_HOME/ficapp/common/FICServer/logs/SMSService.log
$FIC_HOME/ficapp/common/FICServer/logs/UMMService.log
ftpshare/logs/
ftpshare/executelogs
Contact Oracle Support services for further information.
3. You can trigger the Model Upload again, if required, using the files available in the path:
ftpshare/archive/<INFODOM>. It is not required to execute the CopyUpload utility again.
• Support is extended for column length changes and the addition of new columns. Ensure that the existing columns, when represented in SQL Modeler, are intact with the base model definition for information such as UDPs, domains, and other logical information. Otherwise, it can create inconsistencies in the populated information of the OFSAA metadata repository.
NOTE Oracle recommends that you import only the altered columns into the
SQL Modeler. If you import all the columns (altered and unaltered), the
changes from the previous upload will be overwritten.
However, if you choose to import all the columns and avoid overwriting
the existing changes, select the blank value (do not select BYTE or CHAR)
from the Units drop-down list in the Column Properties tab in the SQL
Modeler.
• As model-level UDPs are not supported by SQL Modeler, the Model UDP - VERSION is expected to be added at the table level. Ensure that the version for an existing table undergoing customization is equal to or higher than that of the previous model. If it is missing for any table, the default value of 80000 is used; as a result, customizations may be ignored.
3.2.1.2.1 Limitations
• During the upgrade, if the out of the box model comes with a Primary Key (PK) change that is
referenced by a custom table, the custom table is expected to be modified accordingly to hold the
Foreign Key (FK) change prior to the OOB upload.
For instance, if the parent table PK is modified to have an additional column, the following steps
must be performed to achieve the latest changes in the out-of-the-box model.
c. The child table (added as an extension) is expected to be altered to have the additional column
via the SQL Modeler mode of upload.
d. Proceed with the upgrade of the OOB Model upload.
From the Business Model Upload window, perform the following steps:
1. Enter a Name for the model being uploaded.
2. Select Sliced from the Upload Mode drop-down list.
3. Select SQL Modeler as the Upload Options.
This option is displayed only if you select Sliced as Upload Mode.
4. Select the XML file for upload from the File Name drop-down list.
The XML file is the one you created as explained in Steps for Creating XML File: section.
5. Click Upload Model.
NOTE • The Model Upload command line utility does not support SQL Modeler as of now.
• You can choose only XML as the upload type for SQL Modeler Upload; JSON (ODM) files are not supported.
part of the OFSAAI Model Upload process in the exact sequence, in order to make the Infodom Schema consistent with the JSON files persisted in the database.
The sequence is explained in the following table.
Rollback scripts must be executed in reverse order in case of failures. That is, if the 4th step has caused a rollback, then rollback scripts from 4 to 1 must be executed in sequence. Rollback scripts are available in the same path, with the file name prefixed with r_.
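As an illustration (the script name below is hypothetical), a generated script and its companion rollback script in the same folder would be named as follows:
4_alter_dim_account.sql
r_4_alter_dim_account.sql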
for a particular partition to be considered for any executions. Data in this column can be populated manually or with the help of any OFSAAI table data load option.
Hive supports static and dynamic partitions. Values for static partition are known in the query whereas
dynamic partition values are known at the execution time. If V_PARTITION_VALUE is null in
REV_TAB_PARTITIONS, the table is considered as dynamic partitioned. AAI executions run on static and
dynamic partitions.
The Data Management Tools module is equipped with a set of automated tools and tested data integration methodologies that allow you to position the advanced N-tier web-based architecture and integrate the enterprise data sources from the mainframe to the desktop.
In Data Management Tools, you can standardize and integrate the various source system data into a
single standard format for data analysis. You can also populate the warehouse in a defined period using
the ETL process for data extraction, transformation, and loading.
Following are the prerequisites while working with Data Management Tools:
• You can transform data using the options Before Load, While Load, or After Load.
• For source system information, files can be either fixed length or delimited.
• The source types that can be loaded into the system are RDBMS and Flat Files. For an RDBMS source type, ensure that the appropriate drivers are installed.
• Ensure that you are aware of the process flow before you start with the extraction, transformation,
and loading process.
As part of the 8.0.6.0.0 release, the Data Management Tools User Interface is re-organized and the OJET/ALTA theme is adopted for better usability. All metadata in DMT is now persisted in the Database instead of XML files.
NOTE HDFS and WebLog based options are displayed only if the Big
Data Processing license is enabled.
DMT Metadata is stored in Database Tables, instead of the earlier approach of storing it in XML files, and it is Infodom-specific.
Since the source model generation is done for Flat file based Data Sources while defining a Data Source,
there is no separate Data File Mapping window for creating mapping definition. In other words, F2T and
F2H can be defined from the Data Mapping window itself.
If the Data Source is an OFSAA Infodom and model upload has already been done for the Infodom, there
is no need to create another Data Source pointing to this Infodom. The Infodom can directly be used in the
Data Mapping definition as a source. In addition, Dataset filters can be applied to this Infodom to get a
further subset of Entities.
The roles mapped to Data Sources are as follows:
• SRCACCESS
• SRCREAD
• SRCWRITE
• SRCPHANTOM
• SRCAUTH
• SRCADV
The Data Sources Summary window displays the list of pre-defined Data Sources with details such as
Code, Name, Source Type, Upload Type, Created By, Creation Date, Version, and Active. You can add,
view, modify, copy, authorize, delete, or purge Data Source definitions. You can make any version of a
Data Source definition as the latest. For more information, see Versioning and Make Latest Feature.
To sort the fields, mouse over the end of the column heading and click the sort icon to sort in ascending or descending order.
You can search for a Data Source based on Code, Name, Source Type, and Record Status (Active, Inactive, or Deleted). In the Search and Filter pane, enter the details of the Data Source you want to search in the respective fields and then click Search.
The ID will be automatically generated once you create a data source. The Folder field is not
enabled.
2. Enter a distinct Code to identify the Data Source. Ensure that the code is alphanumeric with a
maximum of 50 characters in length and there are no special characters except underscore “_”.
Field Description
If Type is selected as Local: Specify the Source Date Format to be used as the default date format for source data extraction and mapping.
If Type is selected as Remote:
Server Name: Enter the Server Name or IP address where the Data Source exists.
Server Port: Enter the active server port number that contains the flat files.
User ID: Enter the FTP User ID required to connect to the server.
Password: Enter the FTP user password required to connect to the server.
FTP Share: Enter the ASCII files location for loading if it is located in a staging area other than the default staging area of the Infrastructure Database Server.
FTP Drive: Enter the FTP server path. In case of Unix servers, the home directory path is taken as the default.
Source Date Format: Enter the Source Date Format that will be used as the default date format for source data extraction and mapping. The date format you enter is validated against the supported date formats of the database to which the Config Schema points.
5. From the Generate Model pane, click Select if the File Type is Delimited or Fixed. This allows you
to select the table whose structure is similar to the structure of your source. Using this option, you
can generate a model based on the selected table. The Source Entities window is displayed.
Select the required Entity and click to move it to the Selected Values pane.
You can search for an entity by giving its name in the text field and click . Click to
reset the search field.
c. Click OK. All the columns in the selected Entity will be displayed in the Generate Model pane.
The available columns are Source Table, Table Logical Name, Source Column, Column Logical
Name, Data Type, Field Order, Start Position, Length, and Logical Data Type.
Table 8: Fields in the Data Source for WebLogs and their Description
Field Description
If Type is selected as Local: Specify the Source Date Format to be used as the default date format for source data extraction and mapping.
If Type is selected as Remote:
Server Name: Enter the Server Name or IP address where the Data Source exists.
Server Port: Enter the active server port number that contains the flat files.
User ID: Enter the FTP User ID required to connect to the server.
Password: Enter the FTP user password required to connect to the server.
FTP Share: Enter the ASCII files location for loading, if it is located in a staging area other than the default staging area of the Infrastructure Database Server.
FTP Drive: Enter the FTP server path. In case of Unix servers, the home directory path is taken as the default.
Source Date Format: Enter the Source Date Format that will be used as the default date format for source data extraction and mapping. The date format you enter is validated against the supported date formats of the database to which the Config Schema points.
Source Model Generation (SMG) for Weblog files is done by reverse-generation of the Data Model from
WebLog files. That is, you can choose a sample file from the source base folder and the SMG process tries
to fit the data file to a known log type or to a custom log model. It validates the data model against a few
records from the file and publishes them to you. If you find the model satisfactory, you can save the
model. Otherwise, you can edit the model and re-validate it.
When the source is saved from the UI, SMG logs are available in the <web local path>/<infodom>/dmt/source/<source code>/log folder. When the source is saved from utilities (any non-J2EE container), logs are written to the <app ftpshare>/<infodom>/dmt/source/<source code>/log folder.
To generate Source Model for WebLog:
1. From the Generate Model pane in the Data Sources window, click Derive.
The Source Model Generation window is displayed.
All the files/folders from the base folder of the WebLog source are listed in the File Browser pane. You can search for a particular file by entering the filename in the Search field. All special characters except +, \, #, ~, %, &, *, ?, (, ), [, ], \\ and , are supported. The selected file is used to generate the Data Model for the whole WebLog source.
2. Select the file from the File Browser pane.
The File Format field displays the selected File format from the Generate pane.
3. Enter the number of records (n) to be fetched from the selected file for the preview. By default, 5 is
displayed. These records will be finally used to validate the Data Model.
4. Click Preview.
You can view the “n” number of records displayed in the Preview pane.
5. Select a record from the Sample Data based on which you want to generate a Data Model. By
default, the last record is selected.
6. Select the appropriate Logger Type from the drop-down list. The available options are:
APACHE - Sample - Select this if you know the log format of your data is in Apache log format.
MICROSOFT-IIS - Sample - Select this if you know the log format of your data is in Microsoft
log format.
Custom- Select this option if you are not sure about the log format. It will intelligently try to fit
data to a standard log format or generate a custom log model. Select the Delimited checkbox if
the data is separated by a delimiter and enter it in the Field Delimiter field.
NOTE Standard logger types and their details are seeded in the
AAI_DMT_WEBLOG_TYPES table. By default, details for Apache
and Microsoft-IIS logs are pre-populated. You can add other
logger methods to the table to make them visible in the UI. For
more information, see the Logger Type Seeded Table section in
OFSAAI Administration Guide.
7. Click Generate Data Model. If the model generation is successful, you can view the Data Model
Preview pane. Model is generated based on the selected record in the Preview pane.
If you have selected standard Logger Type, standard column names are displayed. If Custom is
selected, column names are set as fld_0, fld_1, fld_2, and so on.
The supported Data Types are String and Int.
If Custom is selected as Logger Type and the Delimited checkbox is selected, the Regex field
will be non-editable and the Input Regex field will not be displayed.
The Data Model is based on the generated Input Regex value. For the standard logger types,
this value is hard-coded. The regex is fuzzy-logically computed in the case of Custom Logger
Type.
For more information on tweaking the Data Model, see the Model Customization section.
8. Click Validate to validate the “n” number of records against the model.
If there are any records that do not conform to the model, an alert with the number of invalid
records is displayed. You can scroll the grid to check the erroneous data marked in red or optionally
click the Invalid Data button in the Data Validation grid.
In case of invalid records, you can tweak the Input Regex (Regular Expression) and re-validate the
model. For more details, see the Model Customization section.
9. Click Save when you are satisfied with the model.
Even if there are erroneous records, you can still save the model. Then, during the final load, those records will result in erroneous data being loaded into the final table. In such cases, you can separately apply data correction rules to weed out those records.
Clubbing Columns
Consider a scenario in which you want to club columns appearing in the Data Model Preview pane. You can do this by deleting any one of the columns and then updating the column name and the Input Regex of the retained column appropriately.
Suppose you want to combine Status and Size columns, as shown in the following figure.
• Click the refresh icon to refresh or reset the Input Regex based on the modifications you made.
• Click Validate to generate the model again.
Adding New Columns
Consider a scenario where you want to split a single column appearing in the Data Model Preview pane to
appear as multiple columns. This can be done by clicking Add and tweaking Input Regex, appropriately.
For example, if you want to split the Time column into Date and Time columns as shown in the following
figure.
• Click Add to add a new column. A new record is added at the end.
• Enter the Regex appropriately for both columns.
• If you want to add a column in between, change the Input Regex field appropriately. That is, the Regex of the newly added column should be added after the Regex of the column where you want to insert the new column. Even though the change does not get reflected in the Data Model Preview pane, it is displayed properly in the Data Validation pane.
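As a hedged illustration (the sample value and patterns are hypothetical and not taken from a seeded logger type), splitting a combined date-time value such as 2021-11-07 10:15:22 from one column into separate Date and Time columns amounts to replacing one capturing group with two in the Input Regex:
Single column (Time): (\S+ \S+)
Two columns (Date, Time): (\S+) (\S+)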
URI and Referer Parsing
URI and Referer fields are considered complex attributes since apart from the hierarchical part
(scheme://example.com:123/path/data), there is a query part to it (?key1=value1&key2=value2). The query
part by convention is mostly a sequence of attribute-value pairs. SMG process identifies these keys as
potential attributes of interest and therefore, an option to keep them in the Data Model is provided.
In both Standard and Custom logger methods, the URI and Referer fields show an icon only if the selected record's URI or Referer field has a query part. You can choose a different record with a query part instead.
• Click .
The Attribute Browser window is displayed.
• Enter the number of records you want to look up beyond the previously selected n records for
attributes and click .
The Available Attributes column will get refreshed.
• Select the required attributes that you want to add as columns in your Data Model and click OK.
• Click Add to add an attribute that is not part of the data file.
• Click Save.
Specify the Filter criteria by entering details in the Starts with, Contains, and Ends with fields.
Filters are patterns for entity names in the Database and can restrict the source model
generation to a specific set of entities. The Source Model is generated even if one of the
specified filter conditions matches. You can also specify multiple conditions for a single filter
type using comma-separated values. For example, tables starting with TB and TM can be
specified as “TB, TM”.
b. If erwin is selected:
Select the required erwin File from the drop-down list. The files that are placed inside the
ftpshare/<Infodom name>/dmt/erwin folder are displayed in the drop-down list.
Or
Click Attach and select the erwin file from your local system. Click Upload. You can see the
progress of the file upload in percentage. After being uploaded, select that file from the drop-
down list.
6. Click Save. The Data Source definition will be saved as version 1.
5. The Source Date Format field is not editable. The supported source date format is YYYY-MM-DD.
You can click the information button to view the related information pertaining to a field in a pop-up dialog.
2. Enter the details as tabulated:
Field Description
File Sort
This section is applicable for File Type selected as Delimited or Fixed.
Sort Basis Select the basis on which the data file should be sorted, from the drop-
down list. The options are:
• Entire Record- By default, this option is selected.
• Primary Key- Select this option if the destination table has
primary keys.
• List of Fields- Select this option if you want to sort based on
some particular field.
Sort Order Select whether you want to sort the data file based on Binary or
Linguistic, from the drop-down list.
Sort File Select whether you want to sort it in Ascending or Descending order,
from the drop-down list.
Sort Fields This field is applicable only if you have selected Sort Basis as List of
Fields.
Specify the field based on which you want to sort the data file.
Miscellaneous
Record Delimiter Specify the record separator used in the data file.
By default, \n is selected as record delimiter. Modify if required.
Note: This is the only field applicable in case of WebLogs.
File Date Format Select the Regional Settings from the drop-down list if the Data File is
created with the date format of the Regional Settings of the Database
server.
By default, Database Settings is selected.
Oracle
This section is applicable only if File Type is selected as Delimited.
Optionally Enclosed By Specify any optional Field Identifier used in the Data File, apart from
the Field Delimiter. It can be Fields enclosed by "Field".
Rules
This section is applicable only if File Type is selected as Delimited or Fixed.
Check Rules Select Header, Trailer, Header and Trailer or No from the drop-down
list depending on where the Validity rules are specified in the Data File.
If you select No, all other fields will be disabled.
Header Identifier This field is enabled only if you select Header or Header and Trailer
options for Check Rules.
Specify the first character or string that identifies the header record.
Data File Name Select Yes if the name of the Data File is part of the Header/Trailer.
Information Date Select Yes if Information Date (MIS Date) in the Data File is provided
as part of Header/Trailer.
Number of Records Select Yes if the number of records in the Data File is provided as part
of the Header/Trailer.
Field Description
Check Sum Select Yes if Check Sum value in the Data File is provided as part of
Header/Trailer.
NOTE:
For the checksum to be computed in F2T, a column mapping that
identifies the current load is mandatory. The supported mappings are
as follows:
1. Constant mapped to #MISDATE
2. Constant mapped to #FILENAME
Basis of Check Sum Specify the Source Column name on which the Check Sum is
computed. Ensure that the source column is a numeric column.
Trailer Identifier This field is enabled only if you select Trailer or Header and Trailer
options for Check Rules.
Specify the first Character or String that identifies the Trailer Record.
Header Field Order This field is enabled only if you select Header or Header and Trailer
options for Check Rules.
Specify the header field order as comma separated values: 1-Header
Identifier,2-Data File Name, 3-Information Date, 4-Number of records,
5-Value of Checksum, 6-Basis of Checksum.
For example, if you specify 1, 3, 2, 4, 5, 6; the header fields will be
Header Identifier, Information Date, Data File Name, Number of
records, Value of Checksum, Basis of Checksum.
Trailer Field Order This field is enabled only if you select Trailer or Header and Trailer
options for Check Rules.
Specify the Trailer field order as comma separated values: 1-Trailer
Identifier,2-Data File Name, 3-Information Date, 4-Number of Records,
5-Value of Checksum, 6-Basis of Checksum.
3. Click Ok.
1. From the Data Sources window, turn OFF the Active toggle button and click Search.
All inactive definitions are displayed.
1. From the Data Sources window, select the data source that you want to edit and click Edit. The
Data Source window is displayed.
2. Modify the required details. You cannot modify Code and Name.
For more information, see Creating a Data Source section.
3. Click Save.
The definition will be saved as the highest version +1. That is, if you are modifying a definition with
version number 3 and the highest version available is 5, the definition will be saved as version 6.
1. From the Data Sources window, select the data source that you want to view and click View. The
Data Source window is displayed.
The Data Source window displays the details of the selected Data Source definition. The Audit Panel
section displays the creation and modification information of the Data Source definition. The
Comments section displays additional information or notes added for the definition if any.
1. From the Data Sources window, select the data source that you want to copy and click Copy. The
Data Source window is displayed.
2. Enter Code and Name for the definition. Modify the required fields.
For more information, see Creating a Data Source section.
1. From the Data Sources window, select the data source that you want to delete and click Delete.
You can select multiple Data Sources for deletion. A confirmation message is displayed.
2. Click Yes to confirm the deletion or No to cancel the deletion.
NOTE File present in the HDFS system cannot be loaded into RDBMS
target Infodom.
F2T and F2H can be defined from the Data Mapping window.
There is no separate Data File Mapping window.
Data movement between Hive and RDBMS can be enhanced using third-party tools like SQOOP and
Oracle Loader for Hadoop (OLH). You must set parameters from the DMT Configurations window. For
details, see the DMT Configurations section. For details on the configurations for SQOOP and OLH, see
OFSAAI Administration Guide available in the OHC Documentation Library.
For the configurations required to support WebLog ingestion (L2H), see the Data Movement of WebLog
Source to HDFS target section in the OFSAAI Administration Guide available in the OHC Documentation
Library.
The Data Mappings window displays the list of pre-defined Data Mapping definitions with Record Status
as Executable with details such as Code, Name, Source, Type, Created By, Creation Date, Version, and
Active. You can add, view, modify, delete, or purge Data Mapping definitions. You can make any version of
a Data Mapping definition as the latest. For more information, see Versioning and Make Latest Feature of
Data Mapping.
To sort the fields, hover over the end of the Column heading and click to sort in the ascending
order or click to sort the fields in the descending order.
You can search for a Data Mapping definition based on Code, Name, Type (F2T, T2F, and T2T), Source,
and Record status. The options for Record Status are Executable, Active, Inactive, and Deleted.
• Executable- Displays all active versions of Data Mapping definitions and inactive versions of the
same Data Mapping definitions with distinct sources.
• Active- Displays only the active version of all Data Mapping definitions.
• Inactive- Displays all the inactive versions of Data Mapping definitions.
• Deleted- Displays all the deleted Data Mapping definitions.
NOTE If DB2 is selected as the source database, map data from Table
to File (T2F) and then File to Table (F2T).
Processing on Datatypes TIMESTAMP WITH TIME ZONE and
TIMESTAMP WITH LOCAL TIME ZONE is not supported, even
though source model generation is supported for those
datatypes.
The ID will be automatically generated after you create a data mapping definition. The Folder field
is not enabled.
2. Enter a distinct Code to identify the Data Mapping definition. Ensure that the code is alphanumeric
with a maximum of 50 characters in length and there are no special characters except underscore
“_”.
3. Enter the Name of the Data Mapping definition.
4. Enter a Description for the Data Mapping definition.
4.4.1.3 Defining Data Mapping to Table (T2T, F2T, H2T, T2H, H2H, F2H, L2H)
In case of F2T or F2H, the source data file should be located at
/ftpshare/<INFODOM>/dmt/source/<SOURCE_NAME>/data/<MIS_DATE>. In case of multi-tier
setup, if the dmt/source/<SOURCE_NAME>/data/<MIS_DATE>/ folder structure is not present in
/ftpshare/<INFODOM> location, manually create the folder structure.
For local L2H executions, create the execution file path explicitly in the app layer. Since the source folders
are created in the web local path, the execution searches for the data file in the
ftpshare/<infodom>/dmt/<sourcename>/data/<datefolder>/ folder in the app layer.
NOTE Data source based on a file present in the HDFS system cannot be
loaded into an RDBMS target Infodom.
3. Select the required table from the Source Entities drop-down list.
The list displays all the tables that are part of the source model.
The selected source entity attributes are displayed in the Source Entities pane.
4. Select the target table from the Target Entities drop-down list.
The selected entities are displayed in the Target Entities pane of the Target Table Map Panel.
If the Target column is a partitioned column, it is indicated using a superscript P. If it has a static
partition value, hover over the column to display the value.
To view the Entity details, select an entity and click . To remove an Entity from the Definition
pane or Target Entities pane, select the entity and click . You cannot remove an entity if any of its
attributes are mapped. The mapped attribute is indicated using a superscript m.
Click to automatically map between source attribute and target attribute. Auto mapping
happens if both source and target attributes have the same name.
To remove a mapping, select the target column and click . To remove all mappings in the
Target Entities pane, click .
To remove all mappings from a Target Entity, select the target table from the Target Entities
pane and click .
To define an expression to transform a source column and map it to a target column:
Select EXPRESSION from the Source Entities pane, select an attribute from the Target Entities
pane and click Transform Map. From the Expression Builder window, define an
expression to transform the column.
To modify an expression, expand EXPRESSION from the Source Entities pane, select the
expression you want to modify and click Transform Map. Modify the expression from the
Expression Builder window. This will modify the value for all target columns mapped to this
expression irrespective of the target column selected while defining the expression.
A confirmation pop-up message is displayed.
To map an existing expression to a new target column, expand EXPRESSION from the Source
Entities pane, select the expression you want to map and click .
NOTE For a single DI Mapping, you can use different target tables.
That is, after mapping a source column to a column in a Target
Entity, you can select another Target Entity and start mapping
source columns to the columns of that target table. Also, the same
source column can be mapped to different target columns of
different target entities.
6. For F2T definition, you can map Row Level Transformation (RLT) functions, that is, SysDate() and
Constant values to a target column:
Select SysDate() under Entity Details in the Source Entities pane and the required target column
in the Target Entities pane and click . The target column should be a Date column.
Select Constant Value under Entity Details in the Source Entities pane and the required target
column in the Target Entities pane and click . Select the required constant value type from
the drop-down list. The supported constant values are #DEFINITIONNAME, #SOURCENAME,
#MISDATE, and #FILENAME. Ensure the Data Type of the target column matches with the
constant value Data Type.
The options for Constants are:
#DEFINITIONNAME- The name of the Data Mapping (F2T) definition will be transformed
at Row level and loaded into a mapped target column.
#SOURCENAME- The name of the Source on which the Data Mapping (F2T) definition is
defined will be transformed at Row level and loaded into a mapped target column.
#MISDATE- Execution date of the Data Mapping (F2T) definition will be transformed at
Row Level and loaded into the mapped target column.
#FILENAME- The name of the file used for loading will be transformed at Row Level and
loaded into the mapped target column.
Others- Enter a user-defined constant value in the textbox provided. To map a constant
date to a target column, the date must be given in the NLS format of the database. That is,
if the NLS format is DD-MON-RR, the value in the text box should be, for example, 25-OCT-19.
If you are mapping from multiple Source Tables, define an expression to join the column data
corresponding to each table. You can pass Runtime Parameters through Expressions, Joins, and
Filter conditions. For more information, see Passing Runtime Parameters in Data Mapping.
7. Specify the ANSI Join or Join to join the source tables and enter the Filter criteria and Group By to
include during extraction. For example, “$MISDATE” can be a filter for Run-time substitution of the
MIS Date.
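As a minimal sketch of how a join, a filter, and the $MISDATE substitution fit together (the table and column names here are hypothetical and are not part of any shipped data model):

-- Illustrative sketch only; STG_ACCOUNT and DIM_CUSTOMER are hypothetical names.
-- ANSI Join entered in the definition:
--   STG_ACCOUNT INNER JOIN DIM_CUSTOMER
--     ON STG_ACCOUNT.N_CUST_SKEY = DIM_CUSTOMER.N_CUST_SKEY
-- Filter entered in the definition:
--   STG_ACCOUNT.FIC_MIS_DATE = '$MISDATE'
-- At run time, $MISDATE is substituted with the execution MIS Date, so the
-- extraction behaves conceptually like:
SELECT STG_ACCOUNT.V_ACCOUNT_NUMBER,
       DIM_CUSTOMER.V_CUSTOMER_NAME
FROM   STG_ACCOUNT
       INNER JOIN DIM_CUSTOMER
               ON STG_ACCOUNT.N_CUST_SKEY = DIM_CUSTOMER.N_CUST_SKEY
WHERE  STG_ACCOUNT.FIC_MIS_DATE = '20240430';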
8. Specify any Source Prescript or Target Prescript if you want to use it. Prescripts are supported for
all HIVE based target Infodoms, that is, for H2H and T2H definitions. In case of H2T, the prescripts
are fired on the source.
For more information, see Prescripts.
9. Specify Source Hint and Target Hint (if any) for faster loading. Oracle hints follow the /*+ HINT */
format. The mapping level hint is applicable for T2T, H2T, and H2H definitions only.
For example, /*+ PARALLEL */.
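As a sketch of where such hints land (the table names and hint arguments below are hypothetical, not a recommendation), the source hint is applied to the extraction SELECT and the target hint to the INSERT into the target:

-- Illustrative only.
-- Source Hint: /*+ PARALLEL(4) */      Target Hint: /*+ APPEND */
INSERT /*+ APPEND */ INTO DIM_ACCOUNT (V_ACCOUNT_NUMBER, V_ACCOUNT_STATUS)
SELECT /*+ PARALLEL(4) */ V_ACCOUNT_NUMBER, V_ACCOUNT_STATUS
FROM   STG_ACCOUNT;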
The Target Table Map Details pane displays the mapping details.
NOTE The View SQL and Validate buttons will be enabled only if
your user group is mapped to the User Role DMADV.
10. Click View SQL to view the complete query in the SQL/Plan pane.
11. Click Validate to validate the query by converting to the selected data source. If Validation is
successful, the Explain Plan for the SQL query is displayed in the SQL/Plan pane. Otherwise, the
SQL exception is displayed.
12. To modify an expression, select the expression name and click Edit Expression. Modify the
expression in the Expression Builder window.
For T2T definitions, it is recommended to use source-level expressions because the source and
target expressions are similar in T2T. Target expression for T2T is mainly provided to edit the target
level expression of the migrated Data Mapping definitions.
13. Click OK in the DI Mapping window.
14. Click Properties to specify the properties.
See Specifying Properties for Load To Table Option.
15. Click Save to save the mapping details. The Data Mapping definition will be saved as version 1.
• T2T
• T2H
• H2H
• F2H
• H2T
• F2T
The following table describes the Property Name and Value in the Properties window.
Constraints
Delete Duplicate Select Yes if you want to delete the duplicate records after insertion when
Primary Keys are disabled.
Disable Primary Key Select Yes to disable Primary Key while loading the data.
In case of Batch and Bulk modes, if any of the foreign keys are in Disabled
state before loading the data using T2T or the property Disable Primary
Key is set to Yes, then all the Primary Keys and corresponding Foreign Keys
are disabled before loading and are enabled back after loading. Hence the
initial status of foreign and primary keys can be changed from Disabled to
Enabled.
In case of Direct mode, if the Disable Primary Key property is not set
(selected as No), then the Delete Duplicate property is set to Yes
automatically, which in turn reports all the duplicate records in the error log
table.
File
Frequency Select the frequency of loading the data file into Data Warehouse. This
property can be used to schedule Batch operations.
The options are Daily, Weekly, Monthly, Quarterly, Yearly, and One Time
Load.
Load Empty If this is set to Yes, the task will be successful even if there are no records to
load or if all the records are discarded or rejected.
MIS Date Field Specify the MIS Date field in the source data file. If MIS Date is not part of
the download, then you can use the MISDate() function in the Data
Mapping window to add MIS Date to the table automatically.
Loading
Load Previous Set to Yes if you want to load the data of the previous period when the
current period data is not available.
Loading Type Select the loading type from the drop-down list. The options are:
• Insert- The records will be overwritten.
• Append- The records will be appended to the target table.
Read Priority Choose the priority of reading the data from either Memory Store or
Persistent Store, from the drop-down list.
Write Priority Choose the priority of writing the data into either Memory Store or
Persistent Store, from the drop-down list.
Loading Mode
Record Load Limit If the number of records in the source table exceeds the Record Load Limit
value, the data loading will not happen. If the value is set as 0 or not
specified, the record count check is skipped.
Direct or Batch or Bulk Specify the Loading Mode as Direct, Batch, or Bulk.
In Bulk Mode of loading, note that:
Loading is possible only when the target database and the data source
created for the definition are in the same database.
If the schema used for source and target is different but the database is the
same, then the target schema should be granted “Select” access for the
source table.
You cannot specify the Batch Size and commit happens at the end of batch
load.
Batch loading is faster for a smaller number of records; for a larger number
of records, it can sometimes lead to loss of data while loading.
Batch Size Specify the Batch Size if you want to load the records in batches. The ideal
values for batch sizes are 1024, 2048, 10000, or 20000. Huge batch sizes
may result in failure if the required system resources are not available.
If it is not specified, commit is done on the entire set.
Source Fetch Size Specify the Source Fetch Size for fetching data from the source system.
For T2T definitions, Source Fetch size is applicable to both Batch and Direct
loading methods.
For example, the default Source Fetch Size for Oracle JDBC is 10.
Rejection
Rejection Threshold Enter the maximum number of errors, as an absolute value, that a Data File
can have for the Data Load to still be marked successful.
After the erroneous record count exceeds the Rejection Threshold value,
the data loading task will fail and the inserted values will be rolled back for
that table. Inserts for the previous tables will not be reverted. Rejection
Threshold will be applied to each target table individually in a batch.
By default, the value is set as UNLIMITED.
Note the behavior of Rejection Threshold and Rejection Threshold %:
• Rejection Threshold is checked before Rejection Threshold %. If you set
a value for Rejection Threshold, it will be considered as the rejection
limit and any value given to Rejection Threshold % is not considered.
• If you set the Rejection Threshold as UNLIMITED or blank, it checks for
Rejection Threshold % and the value set for Rejection Threshold % will
be taken as rejection limit.
• If you set both Rejection Threshold and Rejection Threshold % as
UNLIMITED or blank, the whole Data file will be loaded irrespective of
the number of errors.
Rejection Threshold % Set the Rejection Threshold as a percentage of the number of rows in the Data
file.
Enter the maximum number of errors, as a percentage of the number of rows
in the data file, that a Data File can have for the Data Load to still be marked
successful.
By default, the value is set as UNLIMITED.
Rejection Threshold % is considered only if Rejection Threshold is set to
UNLIMITED or blank.
The following table describes the Property Name and Value in the Properties window.
Loading
Loading Type Select the loading type from the drop-down list. The options are:
Insert- The records will be overwritten.
Append- The records will be appended to the target table.
Read Priority Choose the priority of reading the data from either Memory Store or
Persistent Store, from the drop-down list.
Write Priority Choose the priority of writing the data into either Memory Store or Persistent
Store, from the drop-down list.
Loading Mode
Record Load Limit If the number of records in the source table exceeds the Record Load Limit
value, the data loading will not happen. If the value is set as 0 or not specified,
the record count check is skipped.
Source Fetch Size Specify the Source Fetch Size for fetching data from the source system.
For example, the default Source Fetch Size for Oracle JDBC is 10.
Sqoop
Split By Column This is applicable only if you are using Sqoop for loading data into Hive tables.
Specify the Split By Column in the format “TableName.ColumnName”. It
should not be an expression. Additionally, the column should not be of data
type “Date” and it should not have Null data.
This is a mandatory field for T2H executions using Sqoop.
If you have not provided any value for this field, the T2H Sqoop engine defaults
the value to the last mapped source column.
Ideally, you should set the Split By column to a numeric primary key column. If the Split By
column is String-based, the Generic Options property needs to be set to -
Dorg.apache.sqoop.splitter.allow_text_splitter=true.
Generic Options This field is applicable only in Sqoop SSH mode.
Specify the generic arguments that will be appended before all the tool-specific
arguments. For example, -Doraoop.nologging=true
The following table describes the Property Name and Value in the Properties window.
Loading
Loading Type Select the loading type from the drop-down list. The options are:
Insert- The records will be overwritten.
Append- The records will be appended to the target table.
Read Priority Choose the priority of reading the data from either Memory Store or
Persistent Store, from the drop-down list.
Write Priority Choose the priority of writing the data into either Memory Store or
Persistent Store, from the drop-down list.
Loading Mode
Record Load Limit If the number of records in the source table exceeds the Record Load
Limit value, the data loading will not happen. If the value is set as 0 or
not specified, the record count check is skipped.
The following table describes the Property Name and Value in the Properties window.
File
Data File Enter the name of the Data File that needs to be extracted. You can specify
multiple files separated by ‘/’.
This property is useful to create metadata definitions for multiple Flat-Files
of the same structure by copying the Definition File.
Is File Local To Hive Server Select Yes if the file is on the server where HiveServer is running, else
select No from the drop-down list. This is applicable only for remote file
source.
Loading
Loading Type Select the loading type from the drop-down list. The options are:
Insert- The records will be overwritten.
Append- The records will be appended to the target table.
Read Priority Choose the priority of reading the data from either Memory Store or
Persistent Store, from the drop-down list.
Write Priority Choose the priority of writing the data into either Memory Store or
Persistent Store, from the drop-down list.
The following table describes the Property Name and Value in the Properties window.
Loading
Loading Type Select the loading type from the drop-down list. The options are:
Insert- The records will be overwritten.
NOTE:
Limitation: In the Insert Mode for H2T SQOOP Execution, the
Target Tables are truncated. If a Task fails, the changes cannot be
rolled back.
Append- The records will be appended to the target table.
Read Priority Choose the priority of reading the data from either Memory Store or
Persistent Store, from the drop-down list.
Write Priority Choose the priority of writing the data into either Memory Store or
Persistent Store, from the drop-down list.
Loading Mode
Record Load Limit If the number of records in the source table exceeds the Record Load
Limit value, the data loading will not happen. If the value is set as 0 or
not specified, the record count check is skipped.
Batch Size Specify the Batch Size if you want to load the records in batches. The
ideal values for batch sizes are 1024, 2048, 10000, or 20000. Huge batch
sizes may result in failure if the required system resources are not
available.
If it is not specified, commit is done on the entire set.
Rejection
Rejection Threshold Enter the maximum number of errors, as an absolute value, that a Data File
can have for the Data Load to still be marked successful.
Once the erroneous record count exceeds the Rejection Threshold
value, the data loading task will fail and the inserted values will be rolled
back for that table. Inserts for the previous tables will not be reverted.
Rejection Threshold will be applied to each of the target tables
individually in a batch.
By default, the value is set as UNLIMITED.
Sqoop
NOTE:
To parse the date column values, set this property as follows:
• In Sqoop cluster:
--connection-param-file <path to the
ora.properties file on the sqoop node>
• In Sqoop client mode:
--connection-param-file
$FIC_DB_HOME/bin/ora.properties
Update the ora.properties file with the following parameter:
oracle.jdbc.mapDateToTimestamp=false
Use Staging Select Yes to use a staging table during Sqoop export.
The following table describes the Property Name and Value in the Properties window.
File
Frequency Select the frequency of loading the data file into Data Warehouse. This
property can be used to schedule Batch operations.
The options are Daily, Weekly, Monthly, Quarterly, Yearly, and One Time
Load.
MIS Date Field Specify the MIS Date field in the source data file. If MIS Date is not part of the
download, then use the MISDate() function in the Data Mapping window to
add MIS Date to the table automatically.
Data File Enter the data file name if it is different from the Definition name. This
property is useful to create metadata definitions for multiple Flat-Files of the
same structure by copying the Definition File.
Note: For F2T CPP execution, you should not enter “/” in the Data File name.
Load Empty If this is set to Yes, the task will be successful, even if there are no records to
load or if all the records are discarded or rejected.
Prefix Enter the string that is prefixed with the data file name separated by an
underscore (_).
Constraints
Disable Primary Key Select Yes to disable Primary Key while loading the data.
In case of Batch and Bulk modes if any of the foreign keys are in Disabled
state before loading the data using T2T or the property Disable Primary Key
is set to Yes, then all the Primary Keys and corresponding Foreign Keys are
disabled before loading and are enabled back after loading. Hence the initial
status of foreign and primary keys can be changed from Disabled to Enabled.
In case of Direct mode, if the Disable Primary Key property is not set
(selected as No), then the Delete Duplicate property is set to Yes
automatically, which in turn reports all the duplicate records in the error log
table.
Disable Check Constraints Select Yes if you want to disable the Check Constraints on columns of the
table or select No to load with the constraints enabled.
Loading Mode
Record Load Limit If the number of records in the source file exceeds the Record Load Limit
value, the data loading will not happen. If the value is set as 0 or not
specified, the record count check is skipped.
Loading
Load Previous Set to Yes if you want to load the data of the previous period when the
current period data is not available.
Loading Type Select the loading type from the drop-down list. The options are:
• Insert- The records will be overwritten.
• Append- The records will be appended to the target table.
Duplicate Row
Duplicate Row Checks Select Yes to check for Duplicate Rows and to remove them from the Data
File.
Duplicate Row This field determines which of the Duplicate Record(s) to be removed if
found. The options are Keep Last Occurrence and Keep First Occurrence.
Misc
Abort-Failure Condition Select Stop to stop the loading on reaching the Rejection Threshold. Select
Continue to ensure the reading of the entire Data File.
Query Enter the Query that needs to be executed before file loading.
Discard Max Enter the maximum errors allowed for SQL*Loader Discards while loading.
Edit and Reload Select Yes to have the option of editing the error file and re-loading it.
Oracle
Continue If Enter a condition which when satisfied will continue the file load.
Direct Load • Select Yes to do Fast Load into the Oracle Database only if you have not
defined any target expressions.
• Select Force to do Fast Load into the Oracle Database if target
expressions have only constant values.
• Select No if you do not want to enable Fast Load.
Load When Enter a condition which when satisfied will start the file load.
Parallel Load Select Yes to load the data in parallel into the Database table for faster
loading, else select No.
Preserve Blanks Select Yes to retain blank values in the Data without trimming.
BINDSIZE For conventional path loads, BINDSIZE specifies the maximum size (bytes) of
the bind array. The size of the bind array given by BINDSIZE overrides the
default size (which is system dependent) and any size determined by ROWS.
Number of ROWS For conventional path loads, ROWS specifies the number of rows in the bind
array.
For direct path loads, ROWS identifies the number of rows you want to read
from the data file before a data save. The default is to read all rows and save
data once at the end of the load.
Trailing Null Columns Select Yes to retain Trailing Null Columns in the Data File.
Growth
Incremental Growth Enter the Incremental Growth of Data in absolute values over the previous
period.
Incremental Growth % Enter the Incremental Growth of Data in percentage over the previous period.
Rejection
Rejection Threshold Enter the maximum number of errors, as an absolute value, that a Data File
can have for the Data Load to still be marked successful.
After the erroneous record count exceeds the Rejection Threshold value, the
data loading task will fail and the inserted values will be rolled back for that
table. Inserts for the previous tables will not be reverted. Rejection Threshold
will be applied to each of the target tables individually in a batch.
By default, the value is set as UNLIMITED.
Rejection Threshold is considered only if Rejection Threshold % is set to
UNLIMITED or blank.
If you set both Rejection Threshold % and Rejection Threshold as UNLIMITED
or blank, the whole Data file will be loaded irrespective of the number of
errors.
Rejection Threshold % Set the Rejection Threshold as a percentage of the number of rows in the Data
file.
Enter the maximum number of errors, as a percentage of the number of rows
in the data file, that a Data File can have for the Data Load to still be marked
successful.
By default, the value is set as UNLIMITED.
Note the behavior of Rejection Threshold % and Rejection Threshold:
• Rejection Threshold % is checked before Rejection Threshold. If you set a
value for Rejection Threshold %, it will be considered as the rejection
limit and it will not check Rejection Threshold.
• If you set Rejection Threshold % as UNLIMITED or blank, it checks for
Rejection Threshold and the value set for Rejection Threshold will be
taken as rejection limit.
• If you set both Rejection Threshold and Rejection Threshold % as
UNLIMITED or blank, the whole Data file will be loaded irrespective of the
number of errors.
The Select Entity grid displays all entities in the selected Source or Infodom. Expand the Entity name
to view the attributes in each entity.
3. Select the required entities or attributes you want to extract to file:
Select an entity and click if you want to extract all attributes in an entity.
For extracting only selected attributes in an entity, expand the required entity, select the
attribute and click .
To remove an attribute from the Selected Values, select the attribute and click .
NOTE Whenever you make any changes in the Select Entity grid, click
Select to refresh the Source Entity Details grid to reflect the
changes.
5. If you are mapping from multiple Source Tables, define an expression to join the column data
corresponding to each table. Specify the ANSI Join or Join to join the source tables and enter the
Filter criteria and Group By to include during extraction. For example, “$MISDATE” can be a filter
for Run-time substitution of the MIS Date.
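As a sketch (assuming hypothetical table and column names), a joined extract with a run-time filter and a Group By would conceptually produce an extraction query of this shape, with the aggregated column defined as an expression in the definition:

-- Illustrative only; STG_GL_DATA and DIM_GL_ACCOUNT are hypothetical names.
-- Join     : STG_GL_DATA INNER JOIN DIM_GL_ACCOUNT
--              ON STG_GL_DATA.N_GL_SKEY = DIM_GL_ACCOUNT.N_GL_SKEY
-- Filter   : STG_GL_DATA.FIC_MIS_DATE = '$MISDATE'
-- Group By : DIM_GL_ACCOUNT.V_GL_CODE
SELECT DIM_GL_ACCOUNT.V_GL_CODE,
       SUM(STG_GL_DATA.N_AMOUNT)              -- expression-defined column
FROM   STG_GL_DATA
       INNER JOIN DIM_GL_ACCOUNT
               ON STG_GL_DATA.N_GL_SKEY = DIM_GL_ACCOUNT.N_GL_SKEY
WHERE  STG_GL_DATA.FIC_MIS_DATE = '20240430'  -- $MISDATE substituted at run time
GROUP  BY DIM_GL_ACCOUNT.V_GL_CODE;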
Click Add to add a new custom column by defining it from the Expression Builder window.
Click Edit to edit the Expression Value defined using the Expression Builder window. You
can also edit the expression value by double-clicking the Expression Value column and
manually typing the proper expression.
Double-click the Field Order number and update the value to change the order in which
columns should appear in the target file.
Double-click the Logical Data Type and select the required option from the drop-down list to
change the Data Type of the target column. The available Data types are Number, String, Date
Time, Integer, and Timestamp.
Double-click the Date Format and modify the date format, if required, for the target column.
Select an attribute and click Delete if you do not want that attribute in the target file.
NOTE The View SQL and Validate buttons will be enabled only if your
user group is mapped to the User Role DMADV.
9. Click View SQL to view the complete query in the SQL Plan pane.
10. Click Validate to validate the query by converting to the selected data source.
If validation is successful, the Explain Plan for the SQL query is displayed in the SQL Plan pane.
Otherwise, the SQL exception is displayed.
11. Click Ok to save the changes in the Entity Selection window.
12. Click Properties to specify the properties.
See Specifying Properties for Extract To File Option section.
13. Click Save to save the mapping details.
The Data Mapping definition will be saved as version 1.
The following table describes the fields in the Modal Dialog window.
File
Suffix • Select No if you do not want to suffix the data file name.
• Select Information Date if you want to suffix the data file name with
Information Date or MIS Date in YYYYMMDD format separated by an
underscore (_).
Prefix Enter the string that you want to prefix with the data file name separated by
an underscore (_).
Misc
Field Delimiter Enter the field separator used in the Data File. By default, comma (,) is
selected.
Rules
Check Rules Select Header, Trailer, Header and Trailer or No from the drop-down list
depending on where the Validity rules are specified in the Data File.
Header Identifier This field is enabled only if you select Header or Header and Trailer options
for Check Rules.
Specify the first Character or String that identifies the Header Record.
Header Field Order This field is enabled only if you select Header or Header and Trailer options
for Check Rules.
Specify the header field order as comma separated values: 1-Header
Identifier,2-Data File Name, 3-Information Date, 4-Number of records, 5-
Value of Checksum, 6-Basis of Checksum.
For example, if you specify 1,3,2,4,5,6; the header fields will be Header
Identifier, Information Date, Data File Name, Number of records, Value of
Checksum, Basis of Checksum.
Trailer Identifier This field is enabled only if you select Trailer or Header and Trailer options
for Check Rules.
Specify the first Character or String that identifies the Trailer Record.
Trailer Field Order This field is enabled only if you select Trailer or Header and Trailer options
for Check Rules.
Specify the Trailer field order as comma separated values: 1-Trailer
Identifier,2-Data File Name, 3-Information Date, 4-Number of Records, 5-
Value of Checksum, 6-Basis of Checksum.
Data File Name Select Yes if the name of the data file should be provided as part of the
Header/Trailer.
Information Date Select Yes if the Information (MIS) Date in the Data File should be provided as
part of the Header/Trailer.
Number of Records Select Yes if the number of records in the Data File should be provided as
part of the Header/Trailer.
Checksum Select Yes if a Check Sum Value should be provided as part of the
Header/Trailer.
Basis of Checksum Specify the Source Column Name on which the Check Sum is computed. It
has to be a Numeric column.
Source Fetch Size Specify the Source Fetch Size for fetching data from the source system.
This property is applicable only for T2F.
For example, the default Source Fetch Size for Oracle JDBC is 10.
3. Spark Session Management- In a batch execution, a new Spark session is created when the first
H2H-Spark task is encountered and the same Spark session is reused for the rest of the H2H-Spark
tasks in the same Run.
For the spark session to close at the end of the run, set the CLOSE_SPARK_SESSION to YES in the
last H2H-spark task in the batch.
Execution through Operations module- Pass [CLOSE_SPARK_SESSION]=YES while defining the
last H2H-Spark task from the Task Definition window.
For more information, see Component: LOAD DATA section.
Execution through RRF module- Pass the following as a parameter while defining the last H2H-
spark job from the Component Selector window:
“CLOSE_SPARK_SESSION”,”YES”
4.4.1.9 Prescripts
Prescripts are fired on a Hive connection, before firing a select from or insert into a Hive table. While
defining a Prescript, note the following:
• A Prescript must begin with the keyword "SET".
• Multiple Prescripts must be separated by semicolons.
• Prescripts are validated for SQL Injection. The following key words are blacklisted:
"DROP","TRUNCATE","ALTER","DELETE","INSERT","UPDATE","CREATE", "SELECT"
All validations applicable in the UI are also checked during execution. If a Prescript fails any of the
validations or if there is an error in firing the Prescript, the load operation is exited.
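For example, a Prescript for a Hive-based target could enable dynamic partitioning before the load. The property names below are standard Hive settings used purely as an illustration of a valid Prescript field value:

SET hive.exec.dynamic.partition=true;SET hive.exec.dynamic.partition.mode=nonstrict;SET hive.exec.parallel=true

Each statement begins with SET, the statements are semicolon separated, and none of the blacklisted keywords appear.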
Static partition value can also be set with placeholders. The placeholders supported in Data Mapping are
$RUNID, $PHID, $EXEID, $RUNSK, $SYSDATE, $TASKID, and $MISDATE. Additionally, partition value can
be provided as a parameter within square brackets. For example, [PARAM1]. Passing the parameter
values at runtime from the RRF/Operations module is the same as for the other run-time parameters in
the Data Management Framework. Values for the placeholders/additional parameters will be substituted
as the static partition values at run time. For more information, see Passing Runtime Parameters in
Data Mapping.
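As a purely conceptual sketch (the table and partition column names are hypothetical, and this is not the SQL generated by the engine), a static partition value given as $MISDATE behaves at run time as if the load targeted a resolved partition:

-- Illustrative only; FCT_ACCOUNT_SUMMARY and FIC_MIS_DATE are hypothetical names.
-- Static partition value in the definition: $MISDATE  (or a parameter such as [PARAM1])
INSERT INTO TABLE FCT_ACCOUNT_SUMMARY PARTITION (FIC_MIS_DATE = '20240430')
SELECT V_ACCOUNT_NUMBER, N_BALANCE
FROM   STG_ACCOUNT;
-- '20240430' is the value substituted for $MISDATE (or for [PARAM1]) at run time.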
3. Click Ok.
1. From the Data Mapping window, select INACTIVE from the Record Status drop-down list and click
Search.
All inactive definitions are displayed.
The Post Load Changes Summary window displays the list of pre-defined Post Load Changes definitions
with details such as Code, Name, Type, Created By, Creation Date, Version, and Active status. You can add,
view, modify, authorize, delete or purge Post Load Changes definitions. Note that copy functionality is not
yet available. You can make any version of a Post Load Changes definition as the latest. For more information,
see Versioning and Make Latest Feature.
For sorting the fields, hover over the Column heading and click to sort in the ascending order or click
to sort the fields in the descending order.
You can search for a Post Load Changes definition based on Code, Name, Type, and Record Status
(Active, Inactive or Deleted). In the Search and Filter pane, enter the details of the Post Load Changes
definition you want to search in the respective fields and then click Search.
The ID is automatically generated once you create a data mapping definition. The Folder field is not
enabled.
2. Enter a distinct Code to identify the transformation definition. Ensure that the code is alphanumeric
with a maximum of 50 characters in length and there are no special characters except underscore
“_”.
3. Enter the Name of the transformation definition.
4. Enter a Description for the transformation definition.
5. Select the PLC Type from the drop-down list. The options are:
• Insert Transformation
• Update Transformation
• Stored Procedure
• External Library
The following table describes the fields in the Source Shuttle pane.
Field Description
Source Click Source Entity Selection. The Source Entities window is displayed.
Join/Filter Click to define the join or filter condition for the source entities. The
Expression Builder window is displayed.
For more information, see Expression Builder.
3. From the Transformation Logic pane, perform the following tasks to add the transformation logic:
f. Click Generate Logic to generate the transformation logic and view the SQL query in the Query
Generated grid.
NOTE The Generate Logic button is enabled only if your user group
is mapped to the User Role DTADV.
4. Click Check Syntax (adjacent to the Save button) to check the syntax of the query generated.
5. Click Save to save the definition.
The Post Load Changes definition is added to the Post Load Changes Summary window.
3. In the Stored Procedure Editor field, enter the CALL function to invoke the function stored in the
Atomic Schema. You can also enter the SQL block of the stored procedure/function. Ensure that all
the parameters used in your stored procedure are added from the Parameter Definition grid. Every
function you create should contain BatchID (VARCHAR2) and MisDate (VARCHAR2) as the first two
parameters.
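A minimal sketch of such a function, assuming an Oracle atomic schema; the function name, the table it updates, and the date format are hypothetical and only illustrate the mandatory BatchID and MisDate leading parameters:

CREATE OR REPLACE FUNCTION FN_UPDATE_ACCT_STATUS (
    p_batch_id IN VARCHAR2,   -- BatchID: mandatory first parameter
    p_mis_date IN VARCHAR2    -- MisDate: mandatory second parameter
) RETURN INTEGER
IS
BEGIN
    UPDATE STG_ACCOUNT                                        -- hypothetical table
       SET V_ACCOUNT_STATUS = 'CLOSED'
     WHERE FIC_MIS_DATE = TO_DATE(p_mis_date, 'YYYYMMDD')     -- date format assumed for this sketch
       AND N_BALANCE     = 0;
    COMMIT;
    RETURN 1;                                                 -- success indicator
EXCEPTION
    WHEN OTHERS THEN
        ROLLBACK;
        RETURN 0;                                             -- failure indicator
END FN_UPDATE_ACCT_STATUS;
/

Any additional parameters beyond these two must also be declared in the Parameter Definition grid so that they are passed at execution time.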
4. (Optional ) Click Check Syntax (adjacent to the Save button) to check the syntax of the stored
procedure.
5. Click Save to save the Stored Procedure Transformation definition.
3. In the External Library detail grid, enter the name of the executable library file (.sh file) located in the
default ficdb/bin path in the External library field. You can also specify the full path up to the file
name.
4. Click Save to save the External Library Transformation definition.
1. From the Post Load Changes Summary window, turn OFF the Active toggle button and click
Search. All inactive definitions are displayed.
3. Click Save. The definition will be saved as the highest version +1. That is, if you are modifying a
definition with version number 3 and the highest version available is 5, the definition will be saved
as version 6.
1. From the Post Load Changes Summary window, select the definition you want to delete and click
Delete.
You can select multiple definitions for deletion.
2. Click OK in the information dialog to confirm deletion.
The User Defined Functions Summary window displays the available UDFs with details such as Function
Name, Function Description, Origin, Type, and Category. You can add new UDFs, modify, view, and purge
existing UDFs.
4.6.1.1 Prerequisites
1. The UDF JAR must be present in the Hive Auxiliary JARs path.
To create an Auxiliary JAR path, see Cloudera Documentation on Creating Temporary Functions.
2. If you want to use Permanent functions, the following are the additional prerequisites:
a. Create permanent functions as shown in the following example:
Execute the following command from Hive CLI/Hue/Hive browser:
CREATE FUNCTION toChar AS 'com.ofs.aai.service.dmt.udf.custom.TO_CHAR'
USING JAR 'hdfs:///path/to/jar';
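For the TEMPORARY type, a session-scoped registration (shown here only as an illustration, reusing the same hypothetical class and JAR path) and a quick verification could look like:

-- Needed only if the JAR is not already on the Hive Auxiliary JARs path:
ADD JAR hdfs:///path/to/jar;
CREATE TEMPORARY FUNCTION toChar AS 'com.ofs.aai.service.dmt.udf.custom.TO_CHAR';
-- Verify that Hive resolves the function:
DESCRIBE FUNCTION toChar;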
Field Description
Origin Select the Origin from the drop-down list. Only HIVE is supported now.
Type Select the function type from the drop-down list. The options are
TEMPORARY and PERMANENT.
Note: Permanent Functions must be saved individually from Hive
CLI/Hue/Hive browser before registering in OFSAAI using the UI.
Category Select the category of the function from the drop-down list.
For HIVE, the categories available are UDF, UDAF, and UDTF.
3. Click Save.
1. From the UDF Summary window, select the UDF and click View from the toolbar.
The UDF Registration window is displayed.
2. You can view the details of the selected UDF definition.
3. Click Close.
1. From the User Defined Functions Summary window, select the UDF and click Edit from the
toolbar.
The User Defined Functions Registration window is displayed.
2. Modify the required details.
For more information, see Creating User Defined Functions (UDFs).
The following table describes the fields in the DMT Configurations window.
Table 19: Fields in the DMT Configurations window and their Description
Generic
T2T Mode Select the mode of T2T to be used for execution of Data Mapping definition, from
the list. The options are Default (for Java engine) and CPP (for CPP engine).
H2T Mode Select the mode of H2T to be used for execution of Data Mapping definition, from
the list. The options are Default, Sqoop, and OLH.
OLH (Oracle Loader for Hadoop) must have been installed and configured in your
system. For more information on how to use OLH for H2T, see Oracle® Loader for
Hadoop (OLH) Configuration section in OFS Analytical Applications Infrastructure
Administration Guide.
Sqoop should have been installed and configured in your system. For more
information, see the Sqoop Configuration section in OFS Analytical Applications
Infrastructure Administration Guide. Additionally, you should register the cluster
information of the source Information domain using the Register Cluster tab.
T2H Mode Select the mode of T2H to be used for execution of Data Mapping definition, from
the list. The options are Default and Sqoop.
For the Default option, additional configurations are required, which is explained in
the Data Movement from RDBMS Source to HDFS Target (T2H) section in OFS
Analytical Applications Infrastructure Administration Guide. Additionally, you should
register the cluster information of the target Information domain using the Register
Cluster tab.
For the Sqoop option, Sqoop should have been installed and configured in your
system. For more information, see the Sqoop Configuration section in OFS
Analytical Applications Infrastructure Administration Guide. Additionally, you should
register the cluster information of the source Information domain using the Register
Cluster tab.
PLC Mode Select the mode of execution to be used for Post Load Changes definition, from the
list. The options are Default (for Java engine) and CPP (for CPP engine).
SCD MODE This field is applicable only if SCD uses a merge approach.
• DEFAULT_V1- Select this option to perform SCD execution using JAVA engine
with a single Merge query for both Update and Insert. This is the default
execution mode.
• DEFAULT _V2- Select this option to perform SCD execution using JAVA engine
with a Merge query for updates and Insert query for inserts. Since Insert is a
separate query, the sequence used for SKEY will be incremented only for the
required records making the SKEY column value continuous.
• CPP_V1- Select this option to perform SCD execution using CPP engine with a
single Merge query for both Update and Insert. This is the default execution
mode.
• CPP_V2- Select this option to perform SCD execution using CPP engine with a
Merge query for updates and Insert query for inserts. Since Insert is a separate
query, the sequence used for SKEY will be incremented only for the required
records making the SKEY column value continuous.
• BACKDATED_V1-Backdated support for CPP_V1.
• BACKDATED_V2- Backdated support for CPP_V2.
Note: For Backdated Executions containing Type 2 column mappings, the following
column mappings are mandatory:
• Start date
• End date
Validate Definition Query on Save Select Yes to validate the SQL Query of the Data Mapping definition on save.
Generic Working Directory Specify the path of the HDFS working directory for generic operations. By default,
the path is set as /user/ofsaa/GenericPath.
Allow Pre806 Data File Path This field is applicable only in case of an upgrade from an earlier version to the
8.1.0.0.0 version. If yours is a fresh installation of the 8.1.0.0.0 version using the Full
installer, this field is not applicable.
For F2T, the path for Data File in versions before 8.0.6.0.0 is
/<ftpshare>/STAGE/<FileBasedSource>/<MISDate>/<dataFile.dat>. In 8.1.0.0.0, it
is changed to /ftpshare/<INFODOM>/dmt/source/<Data Source
Code>/data/<MISDATE>/<dataFile.dat>.
Select Yes to allow the old Data File path in 8.1.0.0.0 version.
SMG Mode By default, the Source Model Generation (SMG) mode is set as Dictionary.
When SMG Mode is selected as Dictionary, the time taken for generating Source
models of Views from the database is optimized.
Select Default for the earlier mode.
Allow Pre806 T2F File Path In the versions before 8.0.6.0.0, the T2F extract file path is
<ftpshare>/STAGE/<SOURCE_CODE>/<MISDATE>.
Select Yes if you want to set the preceding extract path.
If you select No, the extract file path is set to
<ftpshare>/<INFODOM>/dmt/def/<DEFINITION_CODE>/<BATCH_ID
>_<TASK_ID>/<MISDATE>.
Sqoop
(This section is applicable only if you select Sqoop for T2H Mode or H2T Mode.)
Sqoop Mode Select Client to execute Sqoop in client mode or select Cluster to execute Sqoop in
cluster mode, from the drop-down list.
If you select Cluster as Sqoop Mode, you should register the cluster from Register
Cluster tab. For more details, see Registering a Cluster.
Note: Copying of any Sqoop jars and Hadoop/Hive configuration XMLs to OFSAAI
is not required in cluster mode.
Sqoop Working Directory Specify the path of the HDFS working directory for Sqoop related operations.
WebLog
(This section is applicable only for L2H)
Weblog Temp File Ext Enter the extension of the Weblog temporary file.
Weblog Working Directory Enter the name of the temporary working directory in HDFS.
File Encryption
Encryption At rest Select Yes from the drop-down list, if encryption is required for T2F or H2F and
decryption is required for F2T or F2H.
Key File Name Enter the name of the Key File that you used for encrypting the Data File.
Key File Path Enter the absolute path of the Key File that you used for encrypting the Data File.
The following table describes the fields in the DMT Configurations window.
Table 20: Fields in the DMT Configuration window and their Description
Generic
T2T Mode Select the mode of T2T to be used for execution of Data Mapping definition, from the list.
The options are Default (for Java engine) and CPP (for CPP engine).
PLC Mode Select the mode of execution to be used for Post Load Changes definition, from
the list. The options are Default (for Java engine) and CPP (for CPP engine).
SCD MODE This field is applicable only if SCD uses a merge approach.
• CPP_V1- Select this option to perform execution using a single Merge query for both
Update and Insert. This is the default execution mode.
• CPP_V2- Select this option to perform execution using Merge query for updates and
using Insert query for inserts. Since Insert is a separate query, the sequence used for
SKEY will be incremented only for the required records making the SKEY column
value continuous.
• BACKDATED_V1-Backdated support for CPP_V1.
• BACKDATED_V2- Backdated support for CPP_V2.
Note: For Backdated Executions containing Type 2 column mappings, the following column
mappings are mandatory:
• Start date
• End date
Validate Definition Query on Save Select Yes to validate the SQL Query of the Data Mapping definition on save.
Allow Pre806 Data File Path This field is applicable only in case of an upgrade from an earlier version to version
8.0.6.0.0 and above. If yours is a fresh installation of the 8.1.0.0.0 version using the Full
installer, this field is not applicable.
For F2T, the path for Data File in versions before 8.0.6.0.0 is
/<ftpshare>/STAGE/<FileBasedSource>/<MISDate>/<dataFile.dat>.
In 8.0.6.0.0, it is changed to /ftpshare/<INFODOM>/dmt/source/<Data
Source Code>/data/<MISDATE>/<dataFile.dat>.
Select Yes to allow the old Data File path in 8.1.0.0.0 version.
SMG Mode By default, the Source Model Generation (SMG) mode is set as Dictionary.
When SMG Mode is selected as Dictionary, the time taken for generating Source models
of Views from the database is optimized.
Select Default for the earlier mode.
File Encryption
Encryption At rest Select Yes from the drop-down list, if encryption is required for T2F and decryption is
required for F2T.
Key File Name Enter the name of the Key File, which you used to encrypt the Data File.
Key File Path Enter the absolute path of the Key File, which you used to encrypt the Data File.
This window allows you to register a new cluster, modify, view, copy, or delete an existing cluster. You can
search for a cluster based on Name.
To sort the fields, hover over the end of the Column heading and click to sort in the ascending
order or click to sort the fields in the descending order.
To register a cluster:
1. From the Register Cluster tab in the DMT Configurations window, click Add. The Cluster
Configurations window is displayed.
Table 21: Fields in the Cluster Configurations window and their Descriptions
Generic
Details
(This section is not applicable for Sqoop Cluster mode.)
Configuration File Path Enter the path where Kerberos Configuration files such as core-
site.xml, hdfs-site.xml reside.
Keytab File Name Enter the name of the Key Tab file.
KRB5 Conf File Name Enter the name of the Kerberos Realm file.
Hive Configuration XML Enter the name of Hive configuration XML file.
SSH Details
(This section is applicable only for Sqoop in Cluster mode.)
SSH Server Name Enter the IP address of the node having Sqoop client installed.
SSH Port Enter the SSH port on the node, usually 22.
SSH Auth Alias Select the Auth Alias entered for SSH server from the drop-down
list.
3. Click Save.
The Optimizations tab displays all active Data Mapping definitions available in the setup. Additionally, an
entry for the OFSAA instance and Information Domain will also be present. It displays Data Mapping
definition details such as Code, Name, Source Prescript, Source Hint, Target Prescript, and Target Hint.
You can edit, view and delete performance parameters.
• For T2T- Source Hint, Source Prescript, Target Hint, and Target Prescript are applicable.
• For T2F - Source Hint and Source Prescript are applicable.
• For F2T - Nothing is supported.
To configure Performance Parameters:
1. From the Optimizations tab in the DMT Configurations window, select the required Data Mapping
definition for which you want to configure performance parameters and click Edit. The
Performance Parameters window is displayed.
2. Specify Source Prescript or Target Prescript if you want to use it. Prescripts are supported for all
HIVE based target Infodoms, that is, H2H and T2H. In case of H2T, the prescripts are fired on the
source.
For more information, see Prescripts.
3. Specify Source Hint and Target Hint (if any) for faster loading. Oracle hints follow the /*+ HINT */
format.
The mapping level hint is applicable for T2T, H2T, and H2H only.
For example, /*+ PARALLEL */.
4. Click Save.
The Slowly Changing Dimension Summary window displays the available SCDs with details such as Map
Reference Number, Table Name, Stage Table Name, and Source Priority. You can add new SCDs, modify,
view, and purge existing SCDs.
You can search for an SCD based on Stage Table Name, Dimension Table Name, and Map Reference
Number.
Figure 70: Fields in the Slowly Changing Dimension window and their Description
Define SCD
Map Reference Number Enter a Mapping Reference Number for this unique mapping of a
Source to a Dimension Table. The supported numbers are from 0
to 999.
If it is given as -1, SCD will execute for all Map Reference Numbers.
Source Priority Enter the priority of the source when multiple sources are mapped
to the same target.
Table Name Enter the dimension table name, whose record needs to be
updated.
SCD Details
Source Type Enter the type of the Source for a Dimension, that is, Transaction or
Master Source.
Data Offset Enter the offset for calculating the Start Date based on the File
Received Date.
Source Process Sequence Enter the sequence in which the various sources for the
DIMENSION will be taken up for processing.
3. Click from the Column Mapping tab. A new row gets added.
4. Double-click each cell to edit it. Enter the following details for each record.
The following table describes the fields in the Slowly Changing Dimension window.
Table 22: Fields in the Slowly Changing Dimension window and their Description
Column Type Enter the type of the column. For information on the possible
values, see Column Types.
You must enter information about at least the following column
types:
PK- Primary key, SK -Surrogate Key, SD- Start Date, LRI - Latest
Record Indicator, ED - End Date, DA - Dimensional attribute and
MD - MIS Date.
Priority Lookup Required Specify whether Lookup is required for Priority of Source against
the Source Key Column or not. The possible values are Y and N.
5. Click the Optimizations tab to add optimizer hints for merge execution mode.
6. Enter statement-level optimizer hints for the merge statement in the Source Hint field.
7. Enter statement-level optimizer hint for the select statement in merge in the Merge Hint field.
8. Enter alter statements to enable session level execution before merge statement in the Session
Enable Statement field.
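To visualize where these hints sit, the following is a purely illustrative sketch of a merge-based Type 2 SCD flow; it is not the SQL generated by the engine, and all table, column, sequence names, and hint arguments are hypothetical:

-- Source Hint position: statement-level hint on the MERGE.
-- Merge Hint position : hint on the SELECT inside the MERGE.
MERGE /*+ PARALLEL(4) */ INTO DIM_ACCOUNT D
USING (SELECT /*+ FULL(S) */
              S.V_ACCOUNT_NUMBER, S.V_ACCOUNT_STATUS, S.FIC_MIS_DATE
       FROM   STG_ACCOUNT S) SRC
ON (D.V_ACCOUNT_NUMBER = SRC.V_ACCOUNT_NUMBER
    AND D.F_LATEST_RECORD_INDICATOR = 'Y')
WHEN MATCHED THEN
  UPDATE SET D.D_RECORD_END_DATE         = SRC.FIC_MIS_DATE,   -- ED column
             D.F_LATEST_RECORD_INDICATOR = 'N'                 -- LRI column
  WHERE  D.V_ACCOUNT_STATUS <> SRC.V_ACCOUNT_STATUS;           -- Type 2 change

-- In the *_V2 modes, a separate INSERT (conceptually like the one below) then
-- creates the new latest version, drawing the surrogate key (SK) from a sequence:
INSERT INTO DIM_ACCOUNT
       (N_ACCOUNT_SKEY, V_ACCOUNT_NUMBER, V_ACCOUNT_STATUS,
        D_RECORD_START_DATE, D_RECORD_END_DATE, F_LATEST_RECORD_INDICATOR)
SELECT SEQ_DIM_ACCOUNT.NEXTVAL, S.V_ACCOUNT_NUMBER, S.V_ACCOUNT_STATUS,
       S.FIC_MIS_DATE, TO_DATE('31-12-9999','DD-MM-YYYY'), 'Y'
FROM   STG_ACCOUNT S
WHERE  NOT EXISTS (SELECT 1
                   FROM   DIM_ACCOUNT D
                   WHERE  D.V_ACCOUNT_NUMBER = S.V_ACCOUNT_NUMBER
                   AND    D.F_LATEST_RECORD_INDICATOR = 'Y');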
2. Click the button adjacent to the component name. The Parameters window is displayed.
Add the CPP_DIRECT_EXECUTION variable to the .profile file, and set the following execution flags:
• When CPP_DIRECT_EXECUTION flag is set to “true”:
The DMT configuration properties - T2T_MODE and PLC_MODE will be overridden. When a
T2T/F2T/DT task is triggered by the ICC Batch Execution, the corresponding CPP Engine is invoked
in an optimized manner. The Java task logs will not be generated.
NOTE Restart the services after adding the system variable.
• DQ Auto Authorizer
• DQ Phantom
• DQ Read Only
• DQ Write
• DQ View Query
See Appendix A for the functions and roles required to access the framework.
The Data Quality Rule Summary window displays the list of pre-defined Data Quality Rules with other
details such as Name, Table, Access Type, Check Type, Folder, Creation Date, Created By, Last
Modification Date, Status, Is Grouped, Is Executed, Version, and Active. A defined rule is displayed in
Saved status until it is Approved/Rejected by the approver. The approved rules can be grouped further for
execution and the rejected rules are sent back to the user with the Approver comments.
You can add, view, modify, copy, approve/reject, resave, or delete Data Quality Rules within the Data
Quality Rule Summary window. You can make any version of a Data Quality Rule as the latest. For more
information, see Versioning and Make Latest Feature section. You can also search for a Data Quality Rule
based on Name, On Source, Source, Folder, Check Type, Table, or Record Status (Active, Inactive and All).
option enables all users to view, modify any fields (including Access Type), and delete the DQ
rule.
3. Select the Check Type from the drop-down list. The options are Specific Check, Generic Check,
and Control Total Check.
This check is used to define conditions based on individual checks on a single column.
4. Click and define the Filter condition using the Specify Expression window.
For more information, see Specify Expression.
NOTE While defining the filter condition, you can also include the
Runtime Parameter name, which you will be specifying in
Additional Parameters condition while executing the DQ Rule.
5. Define the required Validation Checks by selecting the appropriate grid and specify the details. You
can define nine specific validation checks based on Range, Data Length, Column Reference/Specific
Value, List of Value/Code, Null Value, Blank Value, Referential Integrity, Duplicity, and Custom
Check/Business.
Ensure that you select the Enable checkbox for every check to be applied as a part of rule.
While defining any of the validation checks, you must specify the Severity (Error, Warning, or
Information). You can add an Assignment only when the Severity is selected as Warning or
Information. Assignments are added when you want to correct or update record(s) in the base
column data / selected column data. However, selecting the severity as Error indicates that there are
no corrections; it only facilitates reporting the number of bad records.
Table 23: Fields in the Validation Checks window and their Descriptions
Range Check Range Check identifies if the base column data falls outside a specified range of Minimum
and Maximum value.
Example: If the Base Table is STG_CASA, Base Column is N_MIN_BALANCE_YTD,
Minimum value is 9, and Maximum value is 99, then the check with the Inclusive
checkbox enabled (by default) is defined as ‘STG_CASA.N_MIN_BALANCE_YTD < 9 or
STG_CASA.N_MIN_BALANCE_YTD > 99’. Here the base column data less than 9 or
greater than 99 is identified as invalid.
If the Inclusive checkbox is not selected for Minimum and Maximum, then the check is
defined as ‘STG_CASA.N_MIN_BALANCE_YTD <= 9 or
STG_CASA.N_MIN_BALANCE_YTD >= 99’. Here the boundary values 9 and 99 are also
included in the validation and considered as invalid, so only data greater than 9 and less
than 99 is valid.
1. Select Enabled checkbox. This option is available only if the selected Base Column is
either of Date or Number data type.
Select the Severity as Error, Warning, or Information.
If the selected Base Column is of “Date” type, select Minimum and Maximum date range
using the Calendar. If the selected base column is of “Number” type, enter the Range
value. You can specify numeric, decimal, and negative values for number Data type. The
Inclusive checkbox is selected by default and you can deselect the same to include the
specified date/value during the validation check.
Click and specify an expression for Additional Condition using the Specify
Expression window. For more information, see Define Expression.
(Optional) If the Severity is set to Warning/Information:
2. Select the Assignment checkbox.
3. Select the Assignment Type from the drop-down list. For more information, see
Populating Assignment Type Details in the References section.
4. Specify the Assignment Value.
5. Select the Message Severity as 1 or 2 from the drop-down list.
6. Select the Message to be displayed from the drop-down list.
Data Length Check Data Length Check checks the length of the base column data against a minimum and
maximum value and identifies if it falls outside the specified range.
Example: If the Base Table is STG_CASA, Base Column is N_MIN_BALANCE_YTD, the
Minimum value is 9, and the Maximum value is 12, then the check is defined as ‘If the
length of STG_CASA.N_MIN_BALANCE_YTD < 9 or > 12’. Here the base column data with
fewer than 9 or more than 12 characters is identified as invalid.
Select Enabled checkbox.
Select the Severity as Error, Warning, or Information.
Specify the Minimum data length characters.
Specify the Maximum data length characters.
Click and specify an expression for Additional Condition using Specify Expression
window. For more information, see Define Expression.
Column Reference / Specific Value Check Column Reference / Specific Value Check compares the base column data with another
column of the base table or with a specified direct value using the list of pre-defined
operators.
Example: If the Base Table is STG_CASA, Base Column is N_MIN_BALANCE_YTD, and if
Column Reference check is defined against a specific value ‘100’ with the operator ‘>=’
then the check is defined as, ‘If STG_CASA.N_MIN_BALANCE_YTD < 100’. Here the base
column data with value less than 100 are considered as invalid.
Or, if Column Reference check is defined against another column N_MIN_BALANCE_MTD
with the operator ‘=’ then the check is defined as, ‘If STG_CASA.N_MIN_BALANCE_YTD <>
STG_CASA.N_MIN_BALANCE_MTD’. Here the reference column data not equal to the
base column data is considered as invalid.
Select Enabled checkbox. This option is available only if the selected Base Column is
either of Date or Number data type.
Select the Severity as Error, Warning, or Information.
Select the Mathematical Operator from the drop-down list.
Select the Filter Type as one of the following:
Select Specific Value and specify the Value. You can specify numeric, decimal, and
negative values for number Data type.
Select Another Column and select the Column Name from the drop-down list.
Click and specify an expression for Additional Condition using Specify Expression
window. For more information, see Define Expression.
(Optional) If the Severity is set to Warning/Information:
Select the Assignment checkbox.
Select the Assignment Type from the drop-down list. For more information, see
Populating Assignment Type Details in Reference section.
Specify the Assignment Value.
Select the Message Severity from the drop-down list.
Select the Message from the drop-down list.
List of Value / Code Check List of Value / Code Check can be used to verify values where a dimension / master table
is not present. This check identifies if the base column data does not match any value or
code specified in a list of values.
Example: If the Base Table is STG_CASA, Base Column is N_MIN_BALANCE_YTD, and the
list of values specified is “100, 101, 102, 103, 104”, then the check is defined as ‘If
STG_CASA.N_MIN_BALANCE_YTD is NOT IN (100, 101, 102, 103, 104)’. Here the base
column data apart from the values specified (that is, 100, 101, 102, 103, 104) is considered
invalid.
Or, for Code Check,
If the Base Table is CURRENCY_MASTER, Base Column is COUNTRY_CODE, and the list of
values specified is ‘IN’, ‘US’, ‘JP’, then the check is defined as ‘If
CURRENCY_MASTER.COUNTRY_CODE is NOT IN (‘IN’, ‘US’, ‘JP’)’. Here the base column
data apart from the values specified (that is, ‘IN’, ‘US’, ‘JP’) is considered invalid.
Select Enabled checkbox.
Select the Severity as Error, Warning, or Information.
Select the Filter Type as one of the following:
Select Input Values and specify the List of Values. You can specify numeric, decimal,
string (Varchar /char), and negative values.
Select Code and click in the List of Values column. The Code Selection window is
displayed. Select the required code and click . You can also click to select all the
available codes. Click OK.
Click and specify an expression for Additional Condition using Specify Expression
window. For more information, see Define Expression.
(Optional) If the Severity is set to Warning or Information:
Select the Assignment checkbox.
Select the Assignment Type from the drop-down list. For more information, see
Populating Assignment Type Details in the References section.
Specify the Assignment Value.
Select the Message Severity from the drop-down list.
Select the Message from the drop-down list.
Null Value Check Null Value Check identifies if “NULL” is specified in the base column.
Example: If the Base Table is STG_CASA and the Base Column is N_MIN_BALANCE_YTD,
then the check is defined as, ‘If STG_CASA.N_MIN_BALANCE_YTD is NULL’. Here the base
column data, which is null, are considered as invalid.
Select Enabled checkbox.
Select the Severity as Error, Warning, or Information.
Click and specify an expression for Additional Condition using Specify Expression
window. For more information, see Define Expression.
(Optional) If the Severity is set to Warning or Information:
Select the Assignment checkbox.
Select the Assignment Type from the drop-down list. For more information, see
Populating Assignment Type Details in the References section.
Specify the Assignment Value.
Select the Message Severity from the drop-down list.
Select the Message from the drop-down list.
Note: The Null Value Check supports the TIMESTAMP datatype.
Blank Value Check Blank Value Check identifies if the base column is blank, ignoring any blank spaces in the
data.
Example: If the Base Table is STG_CASA and Base Column is N_MIN_BALANCE_YTD, then
the check is defined as, ‘If Length of data of STG_CASA.N_MIN_BALANCE_YTD after trim
is null’. Here the base column data that is blank/empty are considered as invalid.
Select Enabled checkbox.
Select the Severity as Error, Warning, or Information.
Click and specify an expression for Additional Condition using Specify Expression
window. For more information, see Define Expression.
(Optional) If the Severity is set to Warning or Information:
Select the Assignment checkbox.
Select the Assignment Type from the drop-down list. For more information, see
Populating Assignment Type Details in the References section.
Specify the Assignment Value.
Select the Message Severity from the drop-down list.
Select the Message from the drop-down list.
Note: The Blank Value Check supports the TIMESTAMP datatype.
Referential Integrity Check Referential Integrity Check identifies all base column data which has not been referenced
by the selected column of the referenced table. Here, the reference table and columns are
user specified.
Example: If the Base Table is STG_CASA, Base Column is N_MIN_BALANCE_YTD,
Reference table is STG_CASA_TXNS, and reference column is N_TXN_AMOUNT_NCY,
then the check is defined as ‘(not exists (select STG_CASA_TXNS.N_TXN_AMOUNT_NCY
from STG_CASA_TXNS where
STG_CASA_TXNS.N_TXN_AMOUNT_NCY = STG_CASA.N_MIN_BALANCE_YTD))’.
Here, if the STG_CASA.N_MIN_BALANCE_YTD column value does not match
STG_CASA_TXNS.N_TXN_AMOUNT_NCY, then those base table records are
considered as invalid.
This check can be used to validate attributes like Geography dimension, currency
dimension, and so on.
Select Enabled checkbox.
Select the Severity as Error, Warning, or Information.
Select the Table (Referential Integrity Check dimension table) from the drop-down list.
The base table selected under the Select grid is excluded from the drop-down list.
Select the Column from the drop-down list.
The list displays those columns that have the same Data Type as that of the Base Column
selected under Select grid.
Select the Is Composite Key checkbox if the base column is part of a Composite Key.
Enter the Additional Reference Condition for the Composite Key. For example,
baseTable.column2=refTable.column2 and baseTable.column3=refTable.column3 where
column1, column2, column3 are part of the Composite Keys, baseTable.column1 is the
base column and refTable.column1 is the reference column.
Click and specify an expression for Additional Condition using Specify Expression
window. For more information, see Define Expression.
Note: SELECT privilege should be granted to METADOM (atomic schema) user on Base
Table and Reference Table for all DQ rules which are defined on “Data Management
Sources”.
Duplicate Check Duplicate Check can be used when a combination of columns is unique and identifies all
the duplicate data of the base table in terms of the columns selected for the duplicate
check.
Example: If the Base Table is STG_CASA, base column is N_MIN_BALANCE_YTD, and the
duplicity columns selected are N_MIN_BALANCE_MTD and N_MIN_BALANCE_ITD, then
records having duplicate values for the combination of the columns
STG_CASA.N_MIN_BALANCE_YTD, STG_CASA.N_MIN_BALANCE_MTD, and
STG_CASA.N_MIN_BALANCE_ITD are considered as invalid.
Select Enabled checkbox.
Select the Severity as Error, Warning, or Information.
Click and specify an expression for Additional Condition using Specify Expression
window. For more information, see Define Expression.
Custom Check / Business Check Custom Check/Business Check is a valid SQL query used to identify the data with the
query specified as the Custom/Business SQL. You can define the SQL, but the Select
clause of the query has to follow the order specified in the template of the Custom Check
panel.
Example: When you want all the bad records based on a two column selection from the
same table, such as - identify all the error records from the Investments table where the
account number is not null and the account group code is null:
• select PK_NAMES, PK_1, PK_2, PK_3, PK_4, PK_5, PK_6, PK_7, PK_8, ERROR_COLUMN
from (SELECT NULL PK_NAMES, NULL PK_1, NULL PK_2, NULL PK_3, NULL
PK_4, NULL PK_5, NULL PK_6, ACCOUNT_NUMBER PK_7, ACCOUNT_GROUP_CD
PK_8, 1 ERROR_COLUMN FROM FSI_D_INVESTMENTS WHERE
ACCOUNT_GROUP_CD IS NULL AND ACCOUNT_NUMBER IS NOT NULL)
• Select Enabled checkbox.
• Select the Severity as Error, Warning, or Information.
• Enter the Custom/Business Check parameters within the brackets. Ensure that each
parameter is separated by a comma.
Note: The threshold check is performed only when the parameter
DQ_ENABLE_CUSTOM_THRESHOLD is set to Y. By default, the value is N.
NOTE For all checks except Referential Integrity Check, the additional
condition is expected to be defined on the base table; whereas
for RI check, it can be done on the base table as well as the
reference table.
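For illustration, the Specific Checks above resolve into simple failed-record queries against the base table at execution time. The following is a minimal sketch for the Range Check and Referential Integrity Check examples; the actual queries generated by the DQ engine may differ in structure and project additional result columns:
-- Range Check on STG_CASA.N_MIN_BALANCE_YTD (Minimum 9, Maximum 99, Inclusive):
SELECT *
  FROM STG_CASA
 WHERE STG_CASA.N_MIN_BALANCE_YTD < 9
    OR STG_CASA.N_MIN_BALANCE_YTD > 99;
-- Referential Integrity Check against STG_CASA_TXNS.N_TXN_AMOUNT_NCY:
SELECT *
  FROM STG_CASA
 WHERE NOT EXISTS (SELECT STG_CASA_TXNS.N_TXN_AMOUNT_NCY
                     FROM STG_CASA_TXNS
                    WHERE STG_CASA_TXNS.N_TXN_AMOUNT_NCY = STG_CASA.N_MIN_BALANCE_YTD);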
Generic Check is used to define conditions based on multiple columns of a single base table. These checks
are not pre-defined and can be specified (user-defined) as required.
If Generic Check is selected, do the following:
2. Click and define the Filter condition using the Specify Expression window.
For more information, see Define Expression.
NOTE While defining the filter condition, you can also include the
Runtime Parameter name which you would be specifying in
Additional Parameters condition while executing the DQ Rule.
The Expression is displayed with the “IF” and “Else” conditions along with the Severity status as
Error or Warning or Information.
You can change the Severity by selecting the checkbox corresponding to the condition and
selecting the Severity as Warning or Information from the drop-down list.
NOTE You can add an Assignment only when the Severity is selected
as Warning or Information. Assignments are added when you
want to correct or update record(s) in base column data /
selected column data. There can be one or more assignments
tagged to a single condition. However, selecting severity as
Error indicates there are no corrections and only facilitates in
reporting the quantity of bad records.
4. Select the checkbox adjacent to the required Condition expression and click Add in the
Assignment grid.
The assignment details are populated.
Table 24: Fields in the Generic Value pane and their Descriptions
Field Description
Column Name Select the Column Name from the drop-down list.
Assignment Type Select the Assignment Type from the drop-down list. For more
information, see Populating Assignment Type Details in the References
section.
Assignment Value Select the Assignment Value from the drop-down list according to the
Assignment Type selected.
Message Severity Select the Message Severity as either 1 or 2 from the drop-down list.
Message Select the required Message for the Severity from the drop-down list.
You can also add multiple assignments by clicking Add in Assignment grid.
6. Click Save.
The defined Data Quality Rule definition is displayed in the Data Quality Rule Summary window with
Status as “Saved” and Active as "N". After it is approved, it becomes active.
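As an illustration of a Generic Check, a condition built in the Specify Expression window can span multiple columns of the same base table. The sketch below is hypothetical, only shows the shape of such a condition and its assignment, and is not generated by the product in this exact form:
-- Condition: flag records where the year-to-date balance is lower than the month-to-date balance
STG_CASA.N_MIN_BALANCE_YTD < STG_CASA.N_MIN_BALANCE_MTD
-- With the Severity set to Warning, an Assignment could correct the flagged records, for example
-- by copying the value from the other column:
-- Column Name: N_MIN_BALANCE_YTD, Assignment Value: N_MIN_BALANCE_MTD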
Using Control Total check, you can compare a constant reference value or reference entity against single
or multiple values obtained by applying aggregate functions on the columns of a master/main table, with
supporting dimensional filters, such as time, currency, or geography.
There is no data correction configurable for the Control Total check. This check provides summary level
information on the entity used, attributes used, aggregate function applied, dimension-filters, group-by
columns/predicates selected, number of records subject to the check and so on.
Example of Control Total check based on Constant/Direct Value
Consider an example where you want to check the sum of loan amount for currency code ‘INR’ is greater
than or equal to a Constant Value. In the LHS, select Table as “stg_loan_transactions”, Dimensional Filter
as “dim_currency.n_currency_code=‘INR’“ and Group By as “dim_legal_entities.le_code, lob.lob_code,
dim_branch.branch_code, dim_product.product_id”. In this case, the query for LHS Criteria will be
Select sum(end_of_period_balance)
from stg_loan_transactions SLT, dim_currency DC
where SLT.n_currency_skey=DC.n_currency_skey and DC.n_currency_code = ‘INR’ and
fic_mis_date = ‘12/12/2015’
group by dim_legal_entities.le_code, lob.lob_code, dim_branch.branch_code,
dim_product.product_id
If the result of the aggregate function is greater than or equal to the specified constant value, it will be
marked as Success, else Failure. After execution, the results can be viewed in DQ reports.
Example of Control Total check based on Reference Entity
Consider an example where you want to compare the sum of loan amount for currency code ‘INR’ with the
sum of transaction amount for currency code ‘INR’ for a period with MIS DATE as 12/12/2015. In the LHS,
select Table as “stg_loan_transactions”, Dimensional Filter as “dim_currency.n_currency_code=‘INR’“ and
Group By as “dim_legal_entities.le_code, lob.lob_code, dim_branch.branch_code,
dim_product.product_id”. In the RHS, select Table as “gl_master”, Dimensional Filters as
“dim_currency.n_currency_code=‘INR’“ and fic_mis_date = 12/12/2015, and Group By as
“dim_legal_entities.le_code, lob.lob_code, dim_branch.branch_code, dim_product.product_id”. In this case,
the query for LHS criteria will be same as given in the previous example and the query for RHS criteria will
be:
select sum(end_of_period_balance)
from gl_master GM, dim_currency DC, dim_time_date DTD
where GM.n_currency_skey = DC.n_currency_skey and GM.gl_code = ‘LES_001’ and
DTD.fic_mis_date = ‘12/12/2015’ and DC.n_currency_code = ‘INR’
group by dim_legal_entities.le_code, dim_lob.lob_code, dim_branch.branch_code,
dim_product.product_id
Consider you have selected the Operator as “>=”. Then, if the result of the aggregate function in the LHS is
greater than or equal to the result of the aggregate function in the RHS, it will be marked as Success, else
Failure. After execution, the results can be viewed in DQ reports.
If Control Total Check is selected, do the following:
2. Click and select the Identifier Columns from the Column Selection window.
The list displays all PK columns of the selected base table.
This feature allows you to view the DQ results report based on the selected identifier columns apart
from the PK columns. You can select up to 8 Identifier columns including the PK columns. It is
mandatory to select the PK Columns.
3. Click and define the Filter condition using the Specify Expression window.
For more information, see Define Expression.
NOTE While defining the filter condition, you can also include the
Runtime Parameter name which you would be specifying in
Additional Parameters condition while executing the DQ Rule.
Field Description
Aggregate Expression
Click and define the Aggregate Expression using the Specify
Expression window. For more information, see Define Expression.
Additional Entities
Click and add additional entities if required from the Additional
Entities Selection window. This is optional.
ANSI Join Condition Specify ANSI Join condition if you have added Additional Entities.
For DQ rules defined on source, prefix the table names with “$SOURCE$”
if you are directly entering the ANSI Join Condition in the Expression
editor.
Join Condition Specify Join condition if you have added Additional Entities.
Group By
Specify the group by predicates/ columns by clicking and selecting
Table and Column from the respective drop-down lists.
Note: The group-by columns need not match the filter criteria columns in
the where clause of LHS. If Group By columns are not selected on LHS
and RHS, a single row on LHS will be compared with a single row on RHS.
Group By Join Condition Specify the Group By Join condition in the form LHS.GRPBY_COL1 =
RHS.GRPBY_COL1 AND LHS.GRPBY_COL2 = RHS.GRPBY_COL2 and so
on. LHS and RHS will be joined based on this.
If the number of Group By columns on LHS does not match with the
number of Group By columns on RHS, it is mandatory to enter Group By
Join Condition.
If Group By Join Condition is not specified and the number of Group By
columns on LHS and RHS are equal, Group By Join Condition will be
automatically generated in the form “LHS.GRPBY_COL1 =
RHS.GRPBY_COL1 AND LHS.GRPBY_COL2 = RHS.GRPBY_COL2”.
If Group By columns are present only on LHS, every row on LHS will be
compared against the single row on RHS. Group By Join Condition will be
generated in the form “RHS.R_ID=1”.
If Group By columns are present only on RHS, the single row in LHS will
be compared against every row on RHS. Group By Join Condition will be
generated in the form “LHS.L_ID=1”.
6. Select the appropriate Operator from the drop-down list. The available operators are >, <, =, <>, <=,
and >=. Evaluation is done based on the selected numeric operator.
7. Select the Reference Type as:
Direct Value- Enter the reference value in the Value field.
Another Entity- This is used when you want to compare LHS with a different entity with its set
of attributes. Enter the details as follows:
Reference Base Table- Select the reference table from the drop-down list.
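Putting the two sides together, the comparison performed for the Reference Entity example above can be sketched as a single query of the following form. This is a simplified, hypothetical illustration: the Group By is reduced to the legal entity code, and the LE_SKEY join columns are assumptions that are not part of the example tables above:
SELECT LHS.LE_CODE,
       CASE WHEN LHS.AGG_VAL >= RHS.AGG_VAL THEN 'SUCCESS' ELSE 'FAILURE' END AS ROW_STATUS
  FROM (SELECT DLE.LE_CODE, SUM(SLT.END_OF_PERIOD_BALANCE) AS AGG_VAL
          FROM STG_LOAN_TRANSACTIONS SLT, DIM_CURRENCY DC, DIM_LEGAL_ENTITIES DLE
         WHERE SLT.N_CURRENCY_SKEY = DC.N_CURRENCY_SKEY
           AND DC.N_CURRENCY_CODE = 'INR'
           AND SLT.LE_SKEY = DLE.LE_SKEY
         GROUP BY DLE.LE_CODE) LHS,
       (SELECT DLE.LE_CODE, SUM(GM.END_OF_PERIOD_BALANCE) AS AGG_VAL
          FROM GL_MASTER GM, DIM_CURRENCY DC, DIM_LEGAL_ENTITIES DLE
         WHERE GM.N_CURRENCY_SKEY = DC.N_CURRENCY_SKEY
           AND DC.N_CURRENCY_CODE = 'INR'
           AND GM.LE_SKEY = DLE.LE_SKEY
         GROUP BY DLE.LE_CODE) RHS
 WHERE LHS.LE_CODE = RHS.LE_CODE;   -- corresponds to the auto-generated Group By Join Condition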
Data correction on a partitioned table is accomplished by overwriting the particular partition specified. At
run time, the DQ engine looks for partition information in the OFSAAI object registration table
REV_TAB_PARTITIONS. If the base table is partitioned, the REV_TAB_PARTITIONS table will have the
partition column, value, and sequence registered in it.
1. From the Data Quality Rules window, select the Record Status as Inactive and click Search. All
inactive definitions are displayed.
2. Select the required definition and click Make Latest. The selected definition becomes active and
the current active definition becomes inactive.
You can update all the definition details except for the Definition Name, Check Type, Table, and the Base
Column selected. To update the required Data Quality Rule definition details in the Data Quality Rule
Summary window:
1. Select the checkbox adjacent to the required DQ Name.
NOTE You can only edit those rules which have the status Saved or
Rejected, or which are Approved (but not mapped to any
group). If you want to edit an Executed rule, you need to
unmap the rule from the group.
2. Click Edit from the Data Quality Rules tool bar. The Edit button is disabled if you have selected
multiple DQ Names.
The Data Quality Definition (Edit Mode) window is displayed.
3. Update the details as required. For more information, see Create Data Quality Rule.
4. Click Save and update the changes. The Status is changed to Saved and it will be inactive. The rule
should undergo authorization to become active. If you are mapped to the DQAUTOAUTHR role, the
definition is automatically authorized and it becomes active.
2. Click Copy from the tool bar. The Copy button is disabled if you have selected multiple
checkboxes.
The Data Quality Definition (Copy Mode) window is displayed.
3. Edit the DQ definition Name and other details as required.
For more information, see Create Data Quality Rule.
4. Click Save. The defined Data Quality Rule definition is displayed in the Data Quality Rule Summary
window with the status as “Saved”.
The Approved/Rejected status of the DQ definition is indicated in the Status column of the Data
Quality Rule Summary window. You can mouse-over to view the Approver comments in a pop-
up.
2. Click Delete button from the Data Quality Rules tool bar.
3. Click OK in the information dialog to confirm deletion.
The Data Quality Groups Summary window displays the list of pre-defined Data Quality Groups with the
other details such as Name, Folder, Creation Date, Created By, Last Modification Date, Last Modified By,
Last Run Date, and Last Run Status. You can create and execute DQ Group definitions and view, modify,
copy, refresh, or delete DQ Group definitions within the Data Quality Groups Summary window.
You can also search for a DQ Group definition based on Name, Description, Folder, Rule Name, On Source,
or Source.
1. From the Data Quality Groups Summary window, click Add button in the Data Quality Groups
tool bar. Add button is disabled if you have selected any checkbox in the grid.
The Data Quality Group Definition window is displayed.
Select the Folder (available for selected Information Domain) from the drop-down list.
3. In the Map DQ Rules section, do the following:
Select the required DQ Rule from the Available Rules list and click . You can also search to
select a specific DQ Rule by entering the required keyword and clicking button.
You can also deselect a DQ Rule by selecting from the Mapped Rules list and clicking or deselect
all the mapped rules by clicking . You can search to deselect a specific DQ Rule by entering the
keyword and clicking button.
4. Click Save. The defined DQ group is listed in the Data Quality Rule Summary window and can be
executed for processing.
For more information, see Executing Data Quality Group.
Note that the results of execution of Data Quality Rules are stored in the table
DQ_RESULT_DETL_MASTER of the respective METADOM schema. During the OFSAAI installation, ensure that the
Oracle database tablespace in which this table resides is configured to AUTOEXTEND ON. Otherwise, the
DQ Rule executions might result in error due to insufficient storage space available (ORA-01653 - Unable
to extend tablespace by 1024). To mitigate this error, ensure sufficient storage for the tablespace has been
allocated. For a single check (DQ) on a row of data, the table DQ_RESULT_DETL_MASTER stores the
results in 1 row. Thus, for 2 checks on a row, the table would store results in 2 rows and so on.
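As a hedged illustration, you can check and enable AUTOEXTEND for the data files of the tablespace that holds DQ_RESULT_DETL_MASTER; the tablespace name and data file path below are placeholders for your environment:
-- Check the current AUTOEXTEND setting of the data files (run as a DBA user):
SELECT FILE_NAME, AUTOEXTENSIBLE
  FROM DBA_DATA_FILES
 WHERE TABLESPACE_NAME = 'METADOM_DATA';
-- Enable AUTOEXTEND on a data file that is currently set to OFF:
ALTER DATABASE DATAFILE '/u01/oradata/metadom_data01.dbf' AUTOEXTEND ON NEXT 100M;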
A provision to run DQ Rules in a DQ Group in parallel is available. Two parameters,
DQ_ENABLE_PARALLEL_EXEC and DQ_MAX_NO_OF_EXEC_THREADS, are added in the CONFIGURATION
table. If the DQ_ENABLE_PARALLEL_EXEC parameter is set to 'Y', DQ rules within the group are executed in
parallel. DQ_MAX_NO_OF_EXEC_THREADS can be used to specify the number of rules that should be
run simultaneously.
If DQ_ENABLE_PARALLEL_EXEC parameter is set to 'N' or is not present, rules within the group are
executed sequentially.
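For illustration, the parameters can be maintained in the CONFIGURATION table of the config schema as shown below. The PARAMNAME and PARAMVALUE column names are assumptions and should be verified against your installation:
-- Enable parallel execution of DQ rules within a group:
UPDATE CONFIGURATION SET PARAMVALUE = 'Y' WHERE PARAMNAME = 'DQ_ENABLE_PARALLEL_EXEC';
-- Run at most four rules simultaneously:
UPDATE CONFIGURATION SET PARAMVALUE = '4' WHERE PARAMNAME = 'DQ_MAX_NO_OF_EXEC_THREADS';
COMMIT;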
2. Click Run button from the Data Quality Groups tool bar. The Run button is disabled if you have
selected multiple checkboxes.
The Group Execution window is displayed.
Specify the percentage of Threshold (%) limit in numeric value. This refers to the maximum
percentage of records that can be rejected in a job. If the percentage of failed records exceeds
the Rejection Threshold, the job will fail. If the field is left blank, the default value is set to 100%.
Specify the Additional Parameters as filtering criteria for execution in the pattern Key#Data
type#Value; Key#Data type#Value; and so on.
Here the Datatype of the value should be “V” for Varchar/Char, “D” for Date with
“MM/DD/YYYY” format, or “N” for numeric data. For example, if you want to filter some specific
region codes, you can specify the Additional Parameters value as
$REGION_CODE#V#US;$CREATION_DATE#D#07/06/1983;$ACCOUNT_BAL#N#10000.50;
NOTE In case the Additional Parameters are not specified, the default
value is taken as NULL. Except the standard place holders
$MISDATE and $RUNSKEY, all additional parameters for DQ
execution should be mentioned in single quotes. For example,
STG_EMPLOYEE.EMP_CODE = '$EMPCODE'.
Select Yes or No from the Fail if Threshold Breaches drop-down list. If Yes is selected,
execution of the task fails if the threshold value is breached. If No is selected, execution of the
task continues.
For executing DQ rules on Spark, specify ‘EXECUTION_VENUE=Spark’ in the Optional
Parameters field. Before execution, you should have registered a cluster from DMT
Configurations > Register Cluster window with the following details:
Name- Enter name of the Hive information domain.
Description- Enter a description for the cluster.
Livy Service URL- Enter the Livy Service URL used to connect to Spark from OFSAA.
4. Click Execute.
A confirmation message is displayed and the DQ Group is scheduled for execution.
After the DQ Group is executed, you can view the details of the execution along with the log
information in the View Log window.
For more information, see Viewing Data Quality Group Summary Log.
To view the existing DQ Group definition in the Data Quality Group Summary window:
1. From the Data Quality Groups Summary window, select the checkbox adjacent to the required DQ
Group Name.
The mapped DQ Rules are displayed in the Data Quality Rules grid.
2. Click View button from the Data Quality Groups tool bar.
The Data Quality Group Definition window displays the DQ Group definition details and the mapped
DQ rules.
2. Click Edit button from the Data Quality Groups tool bar.
The Edit - DQ Group - DQ Definition Mapping window is displayed.
3. Update the details as required.
For more information, see Creating Data Quality Group.
4. Click Save and update the changes.
2. Click Copy button from the toolbar. Copy button is disabled if you have selected multiple
checkboxes.
The Copy - DQ Group - DQ Definition Mapping window is displayed.
3. Edit the DQ Group Name and other details as required.
For more information, see Create Data Quality Group.
4. Click Save.
The new DQ Group definition is displayed in the Data Quality Groups Summary window.
2. Click the link in Last Run Status column corresponding to the required Data Quality Rule.
Or
Select the required Data Quality Rule and click View Log from the Data Quality Rules toolbar.
The View Log window is displayed with the latest execution data pertaining to Data Quality Rule
selected.
Select the Information Date from the drop-down list. Based on selection, you can select the
Group Run ID and Iteration ID from the corresponding drop-down lists.
Click View Log button from the Group Execution Details toolbar. The Data Quality Rule Log
grid displays the execution details of the selected Data Quality Rule. You can also click Reset
button in the Group Execution Details toolbar to reset the selection.
NOTE If you have opted to run T2T with data correction, then the data
quality checking is done in the source and the Data Quality
Report generated is only a preview report of the actual
execution. That is, though the execution may have failed, you
can view Data Quality report.
For Control Total Check type, the Data Quality Detailed Report displays Subject Reference Value, Operator,
Aggregate Reference Value, Group By columns, Aggregate Row Status and Rows Impacted.
2. Click Delete button from the Data Quality Groups tool bar.
3. Click OK in the information dialog to confirm deletion.
You can define any optimization statement inside the preScriptDQDC.conf file as stated below:
1. Statements starting with # are ignored, as they are considered comments.
2. Statements with keywords such as CREATE, TRUNCATE, DROP, SELECT, and UPDATE are ignored.
3. Different statements should be separated either by ; or by a new line.
4. Accepted/filtered statements are executed and can be seen in the log with the execution status as
SUCCESS/FAILURE.
5. If the optimization statements cannot be executed, or if the file is not present in the respective path, the
log shows a message, but DCDQ does not fail and continues with the execution.
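A minimal sketch of a preScriptDQDC.conf file that follows these rules is shown below; the session settings are illustrative and should be tuned for your environment:
# Session-level settings applied before DCDQ execution
ALTER SESSION ENABLE PARALLEL DML;
ALTER SESSION SET OPTIMIZER_INDEX_COST_ADJ = 30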
4.11 References
This section of the document consists of information related to intermediate actions that need to be
performed while completing a task. The procedures are common to all the sections and are referenced
wherever required. You can refer to the following sections based on your need.
4.11.2 RDBMS
RDBMS or relational database management system stores data in the form of tables along with the
relationships of each data component. The data can be accessed or reassembled in many different ways
without having to change the table forms.
RDBMS data source lets you define the RDBMS engine present locally or on a remote server using the FTP
access. RDBMS can be defined to connect to any of the RDBMS such as Oracle, Sybase, IBM DB2, MS SQL
Server, and any RDBMS through native connectivity drivers.
A separate license is required for third party jars and the client has to procure it.
4.11.3 RAC
Real Application Clusters (RAC) allows multiple computers to run RDBMS software simultaneously while
accessing a single database, thus providing a clustered database.
In an Oracle RAC environment, two or more computers (each with an instance) concurrently access a
single database. This allows an application or user to connect to any of the computers and have access to
a single coordinated set of data. RAC addresses areas such as fault tolerance, load balancing, and
scalability.
• Functions – This is divided into Database Functions and User Defined Functions. Database Functions
consists of functions that are specific to databases like Oracle and MS SQL Server. You can use
these functions along with Operators to specify the join condition.
The Functions categories are displayed based on the database types as tabulated.
Database Functions
Transact SQL Specific to MS SQL server which consists of Date and Time, Math, and
System functions.
SQL Specific to Oracle which consists of String, Aggregate, Date and Time,
and Mathematical functions.
Operator Types
Arithmetic +, -, %, * and /
Comparison '=', '!=', '<>', '>', '<', '>=', '<=', 'IN', 'NOT IN', 'ANY', 'BETWEEN', 'LIKE',
'IS NULL', and 'IS NOT NULL'.
Other The Other operators are 'PRIOR', '(+)', '(' and ')'.
NOTE The aforementioned parameters are not supported for T2T and
F2T.
The T2T/L2H/H2H/T2H/H2T/F2H/T2F Mappings also support the following Parameters in the v8.1.2.1.0
and later versions to get MISDATE as a number:
• $MISDT_SKEY - where the Data Type is Integer and can be mapped to NUMBER. The value is
MISDATE represented as a number.
Two additional parameters are also supported for L2H mappings:
• [INCREMENTALLOAD] – Specify the value as TRUE/FALSE. If set to TRUE, historically loaded data
files will not be loaded again (load history is checked against the definition name, source name,
target Infodom, target table name and the file name combination). If set to FALSE, the execution is
similar to a snapshot load, and everything from the source folder/file will be loaded irrespective of
load history.
• [FOLDERNAME] – Value provided will be used to pick up the data folder to be loaded.
For HDFS based Weblog source: Value will be suffixed to HDFS File Path specified during the
source creation.
For Local File System based Weblog source: By default, the system will look for execution date
folder (MISDATE: yyyymmdd) under STAGE/<source name>. If the user has specified the
FOLDERNAME for this source, system will ignore the MISDATE folder and look for the directory
provided as [FOLDERNAME].
Passing values to the Runtime Parameters from the RRF module
Values for $Parameters are implicitly passed through RRF
Values for dynamic parameters (given in Square Brackets) need to be passed explicitly as:
"PARAM1","param1Value", “PARAM2”, “param2Value"
Passing values to the Runtime Parameters from the Operations module
Value for $MISDATE is passed implicitly from ICC
Value for other $parameters and dynamic parameters (given in Square Brackets) is passed as:
[PARAM] = param1VALUE , $RUNSK = VALUE
• Code: If any code / leaf values exist for the selected base column, select the required Code as
Assigned Value from the drop-down list. If not, you are alerted with a message indicating that No
Code values exist for the selected base column.
• Expression: Click button in the Assignment Value column and specify an expression using
Specify Expression window. For more information, see Specify Expression.
5.1 Alias
Alias refers to an assumed name or pseudonym. The Alias section within the Infrastructure system enables
you to define an Alias for a table and specify the join condition between fact and dimension tables. Aliases
defined for a table help you to query data for varied analytical requirements.
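For example, an Alias lets the same dimension table participate in a join more than once. The sketch below is illustrative only; the fact table, alias, and column names are assumptions:
SELECT F.N_ACCT_SKEY,
       D_OPEN.D_CALENDAR_DATE AS OPEN_DATE,
       D_MAT.D_CALENDAR_DATE  AS MATURITY_DATE
  FROM FCT_ACCOUNT_SUMMARY F,
       DIM_DATES D_OPEN,
       DIM_DATES D_MAT                      -- second reference through the alias
 WHERE F.N_OPEN_DATE_SKEY     = D_OPEN.N_DATE_SKEY
   AND F.N_MATURITY_DATE_SKEY = D_MAT.N_DATE_SKEY;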
The roles mapped to Alias module are as follows:
• Alias Access
• Alias Advanced
• Alias Authorize
• Alias Phantom
• Alias Read Only
• Alias Write
For all the roles and descriptions, see Appendix A.
The Alias Summary window displays the Alias name of the selected Entity. You can also add a new Alias,
view the Alias details and delete an existing Alias. Click the Column header names to sort the column
names in ascending or descending order. Click if you want to retain your user preferences so that
when you login next time, the column names will be sorted in the same way. To reset the user preferences,
click .
The Alias Details grid in the Add Alias window displays the entity name you have selected in a non-
editable field.
2. Enter the Alias name you wish to provide for the selected entity in the Alias Name field.
3. Click Save. The Alias name is listed under the Aliases grid for the selected entity.
The User Info section at the bottom of Add Alias window displays metadata information about the Alias
Name created. The User Comments section facilitates you to add or update additional information as
comments.
Select an Entity from the drop-down list whose Alias details you want to view and click View. The
View Details window is displayed.
The User Info grid at the bottom of the window displays the metadata information about the Alias
definition along with the option to add comments.
4. Select an Entity from the drop-down list, whose Alias you want to delete and click Delete from
the Aliases tool bar.
5. Click OK in the warning dialog to confirm deletion.
The selected Alias names are removed.
The Derived Entity Summary window displays the list of pre-defined Derived Entities with their Code, Short
Description, Long Description, Creation Date, Source Type, and Materialize View status. By clicking the
Column header names, you can sort the column names in ascending or descending order. Click if you
want to retain your user preferences so that when you login next time, the column names will be sorted in
the same way. To reset the user preferences, click .
You can add, view, edit, copy, and delete a Derived Entity. You can search for a specific Derived Entity
based on the Code, Short Description, Source Type, and Authorization status.
Based on the role that you are mapped to, you can access, read, modify or authorize Derived Entity. For all
the roles and descriptions, see Appendix A. The roles mapped to Derived Entity are as follows:
• Derived Entity Access
• Derived Entity Advanced
• Derived Entity Authorize
• Derived Entity Phantom
• Derived Entity Read Only
• Derived Entity Write
Union Based DE: UN001
Participating DEs: DE001
Metadata present in participating DEs: MSR001, MSR002, MSR003
Final physicalized materialized view for the union based DE: MSR001, MSR002, MSR003, MSR004, MSR005
In case of Union All based definition, the resultant materialized view in database may have repetition of
data based on data present in the participating Derived Entities.
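This is the standard SQL behaviour of UNION versus UNION ALL; a small hedged illustration, using hypothetical materialized views DE001_MVIEW and DE002_MVIEW for the participating Derived Entities:
-- UNION removes duplicate rows contributed by the participating Derived Entities:
SELECT ACCOUNT_NUMBER FROM DE001_MVIEW
UNION
SELECT ACCOUNT_NUMBER FROM DE002_MVIEW;
-- UNION ALL keeps every row, so a record present in both views appears twice:
SELECT ACCOUNT_NUMBER FROM DE001_MVIEW
UNION ALL
SELECT ACCOUNT_NUMBER FROM DE002_MVIEW;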
You can approve a Derived Entity created by other users if you have the authorizer rights. You need to be
mapped to the role Derived Entity Write to add or create a Derived Entity.
Partitioning is supported for Dataset based Derived Entities which have partitions enabled on the FACT
table.
To create a Derived Entity:
1. Click Add from the Derived Entity toolbar. The Derived Entity Details window is displayed.
Field Description
Code Enter a distinct code to identify the Derived Entity. Ensure that the code is
alphanumeric with a maximum of 8 characters in length and there are no
special characters except underscore “_”.
Note the following:
The code can be indicative of the type of Derived Entity being created.
A pre-defined Code and Short Description cannot be changed.
The following reserved words cannot be used as the Code or Short Description in an Essbase installation:
“$$$UNIVERSE$$$”, “#MISSING”, “#MI”, “CALC”, “DIM”, “ALL”, “FIX”,
“ENDFIX”, “HISTORY”, “YEAR”, “SEASON”, “PERIOD”, “QUARTER”,
“MONTH”, “WEEK”, “DAY”.
Short Description Enter a Short Description based on the defined code. Ensure that the
description is of a maximum of 80 characters in length and does not contain
any special characters except “_, ( ), -, $”.
Long Description Enter the Long Description if you are creating subject-oriented Derived
Entity to help users for whom the Derived Entity is being created or other
details about the type/subject. Ensure that the description is of a maximum
of 100 characters in length.
Source Type Select the source type from the drop-down list. The options are Dataset,
Entity, Union and Union All. The Union and Union All options are used to
create a Derived Entity by combining 2 or more existing Derived Entities.
Materialize View Turn ON the Materialize View toggle button if you are using Oracle
database to create a Materialized View with the Derived Entity Name and
short description.
Note: You cannot enable the Materialize View option if you are using IBM
DB2 database.
Dataset Name This field is enabled only if the Source Type is selected as Dataset.
Select the Dataset Name from the drop-down list. The Short Description for
the Datasets is available in the drop-down list to select.
Source Name This field is enabled only if the Source Type is selected as Entity.
Select the Source Name from the drop-down list.
Refresh Interval This field is enabled only if the Materialize View toggle button is turned ON.
Select the appropriate refresh interval from the drop-down list. The options
are:
None- Only materialized view will be created. If you select None for Refresh
Interval, it is mandatory to select None for Refresh Method.
Demand- The refresh of the Materialized View is initiated by a manual
request or a scheduled task.
Commit- The refresh is triggered by a committed data change in one of the
dependent tables.
Refresh Method This field is enabled only if the Materialize View toggle button is turned ON.
Select the appropriate refresh method from the drop-down list. The options
are:
None- Only the materialized view will be created. If you have selected None
for Refresh Interval, it is mandatory to select None for Refresh Method.
Complete- This recreates the materialized view, replacing the existing data.
This can be a very time-consuming process, especially if there are huge
amounts of data to be read and processed.
Fast- Applies the incremental changes to refresh the materialized view. If
materialized view logs are not present against the source tables in advance,
the creation fails (a sketch of creating such a log follows this table).
Force- A fast refresh is attempted. If it is not possible, it applies Complete
refresh.
Note: Refresh Methods Fast and Commit do not work if the query has some
ANSI Join conditions.
Enable Query Rewrite This toggle button is enabled only if the Materialize View toggle button is
turned ON.
Turn ON the toggle button if you want to create materialized view with the
query rewrite option.
Parallelism
Hint Specify Hints (if any), for optimized execution of query. The specified hints
are appended to the underlying query of the derived entity.
Oracle hints follow (/*+ HINT */) format.
For example, /*+ PARALLEL */.
Prebuilt Table This toggle button is enabled only if the Materialize View toggle button is
turned ON and Source Type is selected as Dataset.
Turn ON the toggle button to enable partition for the Derived Entity.
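As noted against the Fast refresh method above, materialized view logs must already exist on the source tables before a fast-refresh Derived Entity is created. A hedged example for a hypothetical fact table is:
-- Create a materialized view log so that FAST refresh can apply incremental changes:
CREATE MATERIALIZED VIEW LOG ON FCT_ACCOUNT_SUMMARY WITH ROWID;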
On selecting the Dataset Name or Source Application Name, the respective fields are displayed in
the Metadata for Source Type list.
3. Double-click Metadata for Source Type.
For Source Type selected as Dataset, the Metadata for Source Type displays all Hierarchies
and Measures defined on the Entities that are part of the selected Dataset, and Business
processors defined on the selected Datasets.
For Source Type selected as Entity, it displays all Entities in the selected DI Source.
For Source Type selected as Union or Union All, it displays all Derived Entities created with
Source Type as Dataset. You can select maximum of 15 Derived Entities.
4. Click to expand the folders. Select the required metadata and click . Click to select all
metadata. You can select a metadata and click to remove that metadata or click to remove
all selected metadata.
5. Select the hierarchy for which you want to add partition from the Partition drop-down list. This field
is enabled only if the Materialize View toggle button is turned ON and Source Type is selected as
Dataset. This drop-down lists the Hierarchies you selected as Metadata for Source Type.
6. Click Save.
A confirmation dialog is displayed.
The details are displayed in the Derived Entity Summary window.
1. From the Derived Entity Summary window, select the Derived Entity for which you want to add
partition values and click Partitions. The Partition Details window is displayed.
1. From the Derived Entity Summary window, select the derived entity you want to copy and click
Copy. The Derived Entity Details window is displayed.
2. Enter the required details.
For more information, see Creating Derived Entity section.
3. Click Save.
1. From the Derived Entity Summary window, select the derived entity you want to view and click
View. The Derived Entity Details window is displayed.
The View Derived Entity Details window displays the details of the selected Derived Entity definition.
The User Info grid at the bottom of the window displays the metadata information about the
Derived Entity definition created along with the option to add comments.
2. Click Close.
• A Derived Entity definition marked for deletion is not accessible for other users.
• Every delete action has to be Authorized/Rejected by the authorizer.
On Authorization, the Derived Entity details are removed.
On Rejection, the Derived Entity details are reverted back to authorized state.
• You cannot update Derived Entity details before authorizing/rejecting the deletion.
• An unauthorized Derived Entity definition can be deleted.
To delete a Derived Entity in the Derived Entity window:
1. From the Derived Entity Summary window, select the derived entity you want to delete and click
Delete.
2. Click OK in the confirmation dialog.
5.3 Datasets
Dataset refers to a group of tables whose inter-relationship is defined by specifying a join condition
between the various tables. It is a basic building block to create a query and execute on a data warehouse
for a large number of functions and to generate reports.
The Dataset function within the Infrastructure system facilitates you to create Datasets and specify rules that
fine-tune the information for querying, reporting, and analysis. Datasets improve query performance by pre-
defining the names of the tables required for an operation (such as aggregation), and also provide the ability
to optimize the execution of multiple queries on the same table set. For more information, see the Scenario to
Understand the Dataset Functionality section.
The Datasets window displays the list of pre-defined Datasets with their Code, Short Description and Long
Description. You can add, view, edit, copy, and delete the required Dataset. You can also search for a
specific dataset based on the Code, Short Description, and Authorization status or view the list of existing
datasets within the system.
By clicking the Column header names, you can sort the column names in ascending or descending order.
Click if you want to retain your user preferences so that when you login next time, the column names
will be sorted in the same way. To reset the user preferences, click .
Based on the role that you are mapped to, you can access, read, modify, or authorize Datasets. For all the
roles and descriptions, see Appendix A. The roles mapped to Datasets are as follows:
• Dataset Access
• Dataset Advanced
• Dataset Authorize
• Dataset Phantom
• Dataset Read Only
• Dataset Write
1. From the Dataset Summary window, click Add from the Datasets tool bar.
The Dataset Details window is displayed.
Table 30: Fields in the Dataset Summary window and their Description
Field Description
Code Enter a distinct code to identify the Dataset. Ensure that the code is
alphanumeric with a maximum of 8 characters in length and there are no
special characters except underscore “_”.
Note the following:
The code can be indicative of the type of Dataset being created.
Short Description Enter a Short Description based on the defined code. Ensure that the
description is of a maximum of 8 characters in length and does not contain
any special characters except underscore “_”.
To remove an entity, select the entity from the Selected Values grid and click .
The following table describes the fields in the Dataset Definition pane.
Table 31: Fields in the Dataset Definition pane and their Descriptions
Field Description
ANSI Join The ANSI Join condition defines which set of data have been joined along
with the type of join condition. It also describes the exact operations to be
performed while joining the Datasets. In ANSI join, the join logic is clearly
separated from the filtering criteria (an illustrative example follows this
procedure).
Date Filter The Date Filter condition enables you to cascade the cubes that are using
the Dataset with the defined Date Filter.
Order By The Order By condition enables you to sort the dimension data in order.
The order of the Dimension nodes will be maintained only for Business
Intelligence enabled hierarchies. The Order By condition is specific to the
Essbase database.
5. Enter the required expression or click to define an expression using the Expression Builder
window.
For more information, see Expression Builder.
6. Click Preview.
The Data of Dataset <<dataset name>> window is displayed.
This window displays an error message if the query execution fails. Up to 400 records of data are
displayed in the Summary Grid pane.
7. Click Show Query to view the query.
8. Enter the values for MIS DATE (YYYYMMDD) and RUN SKEY parameters.
9. Click Save and save the Dataset Definition details.
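For illustration, an ANSI Join condition for a Dataset joining a hypothetical fact table to a dimension could be entered as follows (all table and column names are assumptions); a classic Join Condition expresses the same relationship directly as an equality:
-- ANSI Join (join logic kept separate from the filtering criteria):
FCT_ACCOUNT_SUMMARY INNER JOIN DIM_CURRENCY
  ON FCT_ACCOUNT_SUMMARY.N_CURRENCY_SKEY = DIM_CURRENCY.N_CURRENCY_SKEY
-- Equivalent classic Join Condition:
FCT_ACCOUNT_SUMMARY.N_CURRENCY_SKEY = DIM_CURRENCY.N_CURRENCY_SKEY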
• Dimension Write
Object Security
• This is implemented for Hierarchy, Filter, and Expressions objects.
• There are some seeded user groups and seeded user roles mapped to those user groups. If you are
using the seeded user groups, the restriction on accessing objects based on user groups is
explained in the OFSAA Seeded Security section.
• For creating/editing/copying/removing an object in Dimension Management module, your user
group should have been mapped to the folder in case of public or shared folder, or you should have
been the owner of the folder in case of private folder. Additionally, the WRITE role should be
mapped to your user group. For more information, see Object Security in OFSAAI section.
• To access the link and the Summary window, your user group should have ACCESS role mapped.
You can view all objects created in Public folders, Shared folders to which you are mapped and
Private folders for which you are the owner. For more information, see the Object Security in
OFSAAI section.
• The Folder selector window behavior and consumption of higher objects are explained in User
Scope section.
5.4.2 Attributes
Attributes refer to the distinguishing properties or qualifiers that describe a dimension member.
Attributes may or may not exist for a simple dimension. The Attributes section is available within the
Dimension Management section of Financial Services Applications module.
The Attributes window displays the list of pre-defined Dimension Attributes with the other details such as
the Numeric Code, Name, Data Type, Required, and Seeded. You can search for a specific Attribute based
on Numeric Code, Name, or Data Type and view the list of existing definitions within the system.
2. In the Dimension section, select the required dimension from the drop-down list.
NOTE Name: The characters ' " & ( ) % , ! / - are restricted in the name
field.
Description: The characters ~ & + ' " @ are restricted in the
description field.
Field Description
Data Type Select the Data Type as DATE, DIMENSION, NUMBER, or STRING from the
drop-down list.
If NUMBER is selected as the Data Type:
The Scale field is enabled with “0” as the default value.
Enter a Scale value >= 0. If it is left as 0, values for this attribute will be
limited to Integers. If you wish to enable decimal entries for this attribute,
the maximum Scale value must be > 0 and <= the scale defined for
NUMBER_ASSIGN_VALUE in the dimension's underlying attribute table. See
the Data Model Utilities Guide for further details on the attribute table.
Required Attribute Select Yes or No. If this is set to No, an attribute value is optional for the
associated dimension members.
Note: This field is disabled in Add and Edit modes if any members already
exist for the Dimension upon which this attribute is defined.
Default Value If DATE is selected as the Data Type, click the Calendar button to select a
valid date as the Default Value.
If STRING is selected as the Data Type:
Enter an alphanumeric value in the Default Value field.
The maximum number of characters allowed in the Default Value field for
the String Data Type is 1000.
6. Click Save.
The entries are validated and the defined Attribute is captured.
2. Click Edit button in the Dimension Attribute tool bar. Edit button is disabled if you have selected
multiple Attributes.
The Edit - Attributes window is displayed.
3. Edit the Attribute details such as Name, Description, or Default value.
For more information, see Add Attribute Definition.
4. Click Save to save the changes.
2. Click Copy button in the Dimension Attributes toolbar to copy a selected Attribute definition.
Copy button is disabled if you have selected multiple Attributes.
3. In the Copy – Attributes window you can:
Create new attribute definition with existing variables. Specify new Numeric Code and
Attribute Name. Click Save.
Create new attribute definition by updating the required variables. Specify new Numeric Code
and Attribute Name. Update the required details. For more information, see Add Attribute
Definition. Click Save.
The new attribute definition details are displayed in the Attributes window.
5.4.3 Members
Dimension Members refer to the individual items that constitute a dimension when data is categorized
into a single object. For example, Product, Organization, Time, and so on. Members are available within the
Dimension Management section of the Infrastructure system.
For more information on how to set up alphanumeric and numeric codes, see Configurations to use
Alphanumeric and Numeric Codes for Dimension Members section in OFSAAI Administration Guide.
The Members window displays the list of pre-defined Dimension Members with the other details such as
the Alphanumeric Code, Numeric Code, Name, and Is Leaf. You can also search for a specific Member
based on Alphanumeric / Numeric Code (irrespective of whether dimension is configured to be numeric or
alphanumeric), Name, Description, Enabled status, Is Leaf status, Attribute Name, or Attribute Value and
view the list of existing definitions within the system.
2. In the Dimensions section, select the required Dimension from the drop-down list.
3. Enter the Member Details as tabulated:
The following table describes the fields in the Member Add window.
Table 33: Fields in the Member Add window and their Descriptions
Field Description
Enabled This field is set to Yes by default and is editable only in the Edit window.
Note: You can change the option to No only when the particular
member is not used in any hierarchy. The disabled members will not be
displayed in Hierarchy rules, or UIs which are based on Hierarchies,
such as Hierarchy Filters and hierarchical assumption browsers used in
applications.
Click button in the Search grid to search for a specific Member based on Alphanumeric
Code, Numeric Code, Name, Description, Enabled status, Is Leaf status, Attribute Name, or
Attribute Value. You can also click button to find a member present in the Dimension
Members grid using key words.
Click OK.
The selected Member is displayed in the Copy Attribute Assignment From field in New –
Member Details window and the details of selected Attribute are displayed in the Member
Attributes section. You can edit the Attribute details as indicated:
Edit Attribute based on date by clicking the (Calendar) icon.
Edit Attribute based on Dimension Value by selecting from the drop-down list.
Edit Attribute based on Number Value by entering the valid numerical value.
Edit Attribute based on String Value by specifying alphanumerical value.
5. Click Save and the defined Member Definition is captured after validating the entries.
1. Select the checkbox adjacent to the Alphanumeric Code of the Member, whose dependency is to be
viewed.
You can view the Number of Customers (Measure) across Income Group (Dimension), which is further broken down by different age groups (Hierarchy). While the number of customers is a metric, it provides better quality of information when viewed against a categorization such as the customer income profile, for example customers having an annual income of over USD 100,000.
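As a purely illustrative sketch of this example (the table and column names below are hypothetical and not part of the product), the same information could be expressed in SQL as a measure grouped by a dimension and a hierarchy level:
SELECT d.income_group,
       h.age_band,
       COUNT(f.customer_id) AS no_of_customers   -- the measure
FROM   fct_customer     f                        -- hypothetical fact table
JOIN   dim_income_group d ON d.income_group_id = f.income_group_id
JOIN   dim_age_band     h ON h.age_band_id     = f.age_band_id
GROUP  BY d.income_group, h.age_band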
The Business Hierarchy window displays the list of pre-defined Business Hierarchies with their Code, Short
Description, Long Description, Hierarchy Type, Hierarchy Sub Type, Entity, and Attribute. You can create
Business Hierarchies for measure(s), and view, edit, copy, or delete the required Business Hierarchies. For
more information on the Business Hierarchy Types and Sub-types, see Business Hierarchy Types.
You can also search for a specific Business Hierarchy based on the Code, Short Description, Hierarchy
Type, Hierarchy Sub Type, and Authorization status, or view the list of existing Business Hierarchies within
the system.
Field Description
Code Enter a distinct code to identify the Hierarchy. Ensure that the code is alphanumeric with a maximum of 8 characters in length and there are no special characters except underscore “_”.
Note the following:
The code can be indicative of the type of Hierarchy being created.
Short Description Enter a Short Description based on the defined code. Ensure that the description is of a maximum of 8 characters in length.
Note: The characters ' " & ( ) % , ! / are restricted.
3. In the Business Hierarchy Definition section, select the Hierarchy Type from the drop-down list.
You can select the following Hierarchy Type/Sub-Type. Click on the links to navigate to the
respective sections and define the required Hierarchy. For detailed information on all the Hierarchy
Types, see Business Hierarchy Types.
Regular In a Regular Hierarchy Type, you can define the following Hierarchy Sub Types:
Non Business Intelligence Enabled
In a non-Business Intelligence Enabled Hierarchy, you need to manually add the required levels. The levels defined form the Hierarchy.
Business Intelligence Enabled
You can enable a Business Intelligence Hierarchy when you are not sure of the Hierarchy structure leaf values, when the information is volatile, or when the Hierarchy structure can be directly selected from RDBMS columns. The system automatically detects the values based on the actual data.
In a BI enabled Hierarchy, you are prompted to specify whether a Total node is required (not mandatory) and the system auto-detects the values based on actual data. For example, you can define three levels in a BI Enabled Hierarchy, such as Region (1), State (2), and Place (3); the Hierarchy values at these levels are then auto-generated from the actual data.
Parent Child
This option can be selected to define a Parent Child Type hierarchy.
Measure A Measure Hierarchy consists of the defined measures as nodes and has only the Non Business Intelligence Enabled Hierarchy Sub Type.
NOTE When the defined Hierarchy consists of more than 100 leaf levels, the system treats it as a Large Hierarchy in order to provide efficient and optimized hierarchy handling. For more information on modifying the default value, see Large Hierarchy.
After you have populated the required details in the Business Hierarchy Definition and Hierarchy Details sections, save the definition.
4. Click Save in the Add Business Hierarchy window to save the details.
To update the required Business Hierarchy details in the Business Hierarchy window:
1. Select the checkbox adjacent to the required Business Hierarchy code.
The Hierarchies window displays the list of Hierarchies created in all public folders, shared folders to which
you are mapped and private folders for which you are the owner, along with other details such as the
Name, Display level, Created By, Creation Date, and Last Modification Date. For more information on how
object access is restricted, see Object Security in AMHM module section.
You can also search for a specific Hierarchy definition based on Folder, Hierarchy Name, Dimension
Member Alphanumeric Code, Dimension Member Numeric Code, or Dimension Member Name and view
the existing definitions within the system.
Field Description
Folder Select the folder where the hierarchy is to be stored from the drop-down list.
The Folder selector window behavior is explained in the User Scope section.
Click to create a new private folder. The Segment Maintenance window is displayed. For more information, see Segment Maintenance.
Note: You can select Segment/Folder Type as Private and the Owner Code as your user code only.
Automatic Inheritance Click Yes to inherit the hierarchy properties of the Parent to the Child. Click No if you want to define a new hierarchy.
Display Signage Click Yes to display the Signage to the right hand side of the member in the Show Hierarchy panel. Otherwise, click No.
Initial Display Level Select the Initial Display level from the drop-down list.
Orphan Branch Click Yes to display the Orphan Branch in the Show Hierarchy panel. Otherwise, click No.
c. Select the required Member and click . The Member is displayed in the Selected Members
panel. Click to select all Members which are shown in the Show Members pane. Click
to select all nodes/ members in the server.
You can click to deselect a Member or click to deselect all the Members.
You can click to search for the required member using Alphanumeric code, Numeric Code,
Name, Description, Attribute Name, or Attribute Value.
You can also click button to toggle the display of the Numeric Code to the left, to the right, or as the name only, and click button to do the same for the Alphanumeric Code.
d. Click OK.
The selected Member is displayed as Child under Show Hierarchy panel in the New – Hierarchy
Details window.
4. To add Sibling:
a. Right-click on the Child and select the option Add Sibling.
The Add Member window is displayed.
The Member is displayed in the Selected Members panel. You can click to select all
Members which are shown in the Show Members pane. Click to select all nodes/ members
in the server.
c. You can click to deselect a Member or click to deselect all the Members. You can also click to search for the required member.
d. Click Apply.
The selected Member is displayed as Sibling below the Parent under Show Hierarchy panel in
the New – Hierarchy Details window.
5. To add Leaf under a Parent, Child, or Sibling:
a. Right-click the Parent or Child and select Add Leaf.
The Add Member window is displayed.
The Member is displayed in the Selected Members panel. You can click to select all
Members which are shown in the Show Members pane. Click to select all nodes/ members
in the server.
You can click to deselect a Member or click to deselect all the Members. You can also click to search for the required member.
c. Click Apply.
The selected Member is displayed as Leaf below the Parent or Sibling under Show Hierarchy
panel in the New – Hierarchy Details window.
6. To define Level Properties:
a. Select Level Properties from the options under Parent, Child, Sibling or Leaf and the Level
Properties window is displayed.
b. Enter the valid Name and Description in the respective fields.
c. Click OK and the Levels defined are displayed in the drop-down in Initial Level Display field in
Hierarchy Properties grid in New – Hierarchy Details window.
7. To cut and paste Child or Sibling:
a. Right-click on any node and select Cut.
b. Right-click on any node and Paste as Child or Paste as Sibling.
8. To Delete and Undelete:
a. Right-click on the node to be deleted and select Delete Node.
The node deleted is stroked out.
b. Right-click and select UnDelete to cancel deletion of the node.
9. To add Child / Sibling / leaf:
a. Right-click on any node and select Create and add Child.
The New - Member Details window is displayed. For more information, see Add Member
Definition.
2. Click View button in the Hierarchies tool bar. The View button is disabled if you have selected
multiple Hierarchies.
The View – Hierarchy Details window is displayed with all the Hierarchy details.
In the View – Hierarchy Details window you can click button to search for a member using the
Alphanumeric Code, Numeric Code, or Member Name in the Search dialog.
NOTE The search functionality of this button will not return any
values if you search for a node in the Orphan Branch of the
hierarchy.
1. Select the checkbox adjacent to the Hierarchy Name whose details are to be updated.
2. Click Copy button in the Hierarchies toolbar to copy a selected Hierarchy definition.
Copy button is disabled if you have selected multiple Hierarchies. The Copy – Hierarchy Details
window is displayed.
In the Copy – Hierarchy Details window you can click button to search for a member using the
Alphanumeric Code, Numeric Code, or Member Name in the Search dialog.
3. In the Copy – Hierarchy Details window you can:
Create new hierarchy definition with existing variables. Specify a new Hierarchy Name. Click
Save.
Create new hierarchy definition by updating the required variables. Specify a new Hierarchy
Name and update the required details.
For more information, see Add Hierarchy Definition. Click Save.
The new Hierarchy definition details are displayed in the Hierarchies window.
2. Click button in the Hierarchies toolbar. The Check Dependencies button is disabled if you have selected multiple Hierarchy definitions. The Hierarchies Dependency Information window is displayed.
1. Select the checkbox adjacent to Hierarchy Name(s) whose details are to be removed.
5.5 Measure
Business Measure refers to a uniquely named data element of relevance which can be used to define
views within the data warehouse. It typically implies aggregated information as opposed to information at
a detailed granular level that is available before adequate transformations.
Based on the role that you are mapped to, you can access read, modify or authorize Measure. For all the
roles and descriptions, see Appendix A. The roles mapped to Measure are as follows:
Measure Access
Measure Advanced
Measure Authorize
Measure Phantom
Measure Read Only
Measure Write
Business Measure function within the Infrastructure system facilitates you to create measures based on
the area of analysis. While creating a measure, you can choose the aggregation type and apply business
exclusion rules based on your query/area of analysis. Business Measures can be stored as Base and
Computed Measures and can also be reused in defining other multi-dimensional stores and query data
using the various modules of Oracle Analytical Application Infrastructure.
The Business Measures window displays the list of pre-defined Business Measures with their Code, Short
Description, Long Description, Aggregation Function, Entity, and Attribute. You can add, view, edit, copy,
and delete the required Business Measures. You can also search for a specific Business Measure based on
the Code, Short Description, and Authorization status or view the list of existing Business Measures within
the system.
Table 36: Fields in the Business Measure Details and their Description
Field Description
Code Enter a distinct code to identify the Measure. Ensure that the code is alphanumeric with a maximum of 8 characters in length and there are no special characters except underscore “_”.
Note the following:
The code can be indicative of the type of Measure being created.
Short Description Enter a Short Description based on the defined code. Ensure that the description is of a maximum of 8 characters in length and does not contain any special characters except underscore “_”.
Aggregator Description
SUM Adds the actual value of the attribute or data element to get the measure value.
COUNT Counts the records for the data element to get the measure value, or counts the number of occurrences.
MAXIMUM This function acquires the maximum of the data element to get the measure value.
MINIMUM This function obtains the minimum of the data element to get the measure value.
Based on the selected Aggregation Function the Data Type is auto populated.
i. Select the Entity to load the data for the Measure from the drop-down list.
The list displays all the entities in the information domain, to which your application is
connected.
ii. Select the required Attribute from the drop-down list.
The list displays all the attributes in the selected entity.
iii. Define the Business Exclusions rules for the base Measure. You can enter the expression or click button to define it using the Expression Builder window.
iv. Define the Filter Expression to filter the aggregation process. You can enter the expression or click button to define it using the Expression Builder window. (A SQL sketch of how these settings combine is shown after this procedure.)
v. Turn on the Roll Up toggle button to calculate the measure values and display the nodes at the total level. By default, this option is enabled if the Aggregation Type is Maximum, Minimum, Count, or Sum. The Roll Up option, when selected with Percentage Measures, results in incorrect values at intermediate/total levels.
4. Click Save to save the Business Measure details or click Close to discard the changes.
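For illustration only, a base Measure defined with the SUM Aggregation Function, a Business Exclusion rule, and a Filter Expression could resolve to a query of roughly the following shape (the table and column names are assumptions, not product metadata):
SELECT SUM(t.end_of_period_balance) AS measure_value   -- SUM aggregation selected for the measure
FROM   fct_account_summary t                           -- hypothetical entity selected for the measure
WHERE  NOT (t.account_status = 'CLOSED')               -- Business Exclusion rule
AND    t.currency_code = 'USD'                         -- Filter Expression applied to the aggregation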
To delete a Business Measure, you need to be mapped to the role Measure Write. The Delete function permanently removes the Business Measure details from the database. Ensure that you have verified the details as indicated below:
• A Business Measure definition marked for deletion is not accessible for other users.
• Every delete action has to be Authorized/Rejected by the authorizer.
On Authorization, the Business Measure details are removed.
On Rejection, the Business Measure details are reverted to authorized state.
• You cannot update Business Measure details before authorizing/rejecting the deletion.
• An unauthorized Business Measure definition can be deleted.
To delete an existing Business Measure in the Business Measure window:
1. Select the checkbox adjacent to the required Business Measure code.
The Business Processor window displays the list of pre-defined Business Processors with their Code, Short
Description, Long Description, Dataset, and Measure. The Business Processor window allows you to
generate values that are functions of base measure values. Using the metadata abstraction of a business processor, power users have the ability to design rule-based transformations of the underlying data within the data warehouse / store. You can make use of the Search and Filter option to search for specific Business Processors based on Code, Short Description, or Authorized status. The Pagination option helps you to manage the view of existing Business Processors within the system.
Table 38: Fields in the Business Processor window and their Description
Field Description
Code While creating a new Business Processor, you need to define a distinct
identifier/Code. It is recommended that you define a code that is
descriptive or indicative of the type of Business Processor being created.
This will help in identifying it while creating rules.
Note the following:
It is mandatory to enter a Code.
The Code should be a minimum of eight characters in length; it can contain alphabetical, numerical (only 0-9), or alphanumerical characters.
The Code should start with an Alphabet.
The Code cannot contain special characters with the exception of the
underscore symbol (_).
The saved Code or Short Description cannot be changed.
Short Description Short description is useful in understanding the content of the Business
Processor you are creating. It would help to enter a description based on
the code.
Note the following:
It is mandatory to enter a Short Description.
The Short Description should be of minimum one character and
maximum of 80 characters in length.
Only Alphanumeric, non-English, and Special characters such as “<blank
space>”, “.”, “$”, “&”, “%”, “<”, “>”, “)”, “(“, “_”, and “-” are permitted to be
entered in the Short Description field.
Long Description The long description gives an in-depth understanding of the Business
process you are creating. It would help you to enter a Long Description
based on the code.
The Long Description should be of minimum one character and
maximum 100 characters in length.
Dataset Select the Dataset from the drop-down list. The list of available Datasets
for the selected Information Domain will appear in the drop-down.
The Short Description of the Datasets as entered in the Datasets window
under Business Metadata Management will be reflected in the drop-
down.
Measure Select the Measure from the drop-down list. All base measures that are
defined on any of the tables present in the selected Dataset will appear
in the drop-down.
If the underlying measure is deleted after the Business Processor
definition, then the corresponding Business Processor definition will
automatically be invalidated.
Expression
Click button. The Expression window is displayed.
For more details on creating an expression using entities, functions and
operators, see Create Expression section.
The placeholder option enables you to provide values for the constants in the expression. Through the placeholders defined while specifying the expression, you can supply values to the business processor expression at Run time rather than at definition time. The expression itself is specified in the Expression field.
Note the following:
The values for the placeholders can be alphanumeric.
The process of specifying place holders enables the user to execute the
same business processor definition with different values during the Run
time.
Expression has Aggregate Function The expression may require an aggregation function depending on the business logic. The aggregation functions have to be entered in the expression field using acceptable syntax. If an aggregation function is used in the expression, the Expression has Aggregate Function checkbox must be enabled. Leave the checkbox blank if your expression does not contain an aggregation function. (An illustrative expression follows this table.)
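As a hedged illustration, written in plain SQL terms rather than the exact Expression Builder syntax and using hypothetical table and column names, an expression such as the following uses aggregation functions and therefore requires the Expression has Aggregate Function checkbox to be enabled:
SELECT SUM(l.interest_expense) / SUM(l.avg_book_balance) AS cost_rate   -- aggregate expression
FROM   ledger_table l                                                   -- hypothetical table from the selected Dataset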
Click button in the Business Processor Definition grid to refresh the entries.
Click Parameters to specify default values for any of the placeholders defined.
The Parameters window is displayed.
i. Enter a default value for the place holders defined along with the expression in the Default
Value field.
ii. Click Save to save the default value for a placeholder.
The User Info grid at the bottom of the window displays the metadata information about the
Business Processor definition created along with the option to add comments.
3. Click Save. The Business Processor is saved and listed in the Business Processor window after
validating the entries.
The View Business Processor window displays the details of the selected Business Processor
definition. The User Info grid at the bottom of the window displays the metadata information about
the Business Processor definition along with the option to add comments.
2. Click Copy button from the Business Processor tool bar. Copy button is disabled if you have
selected multiple checkboxes.
The Copy Business Processor window is displayed.
3. Edit the Business Processor details as required. It is mandatory that you change the Code and Short
Description values.
For more information see Add Business Processor.
4. Click Save.
The defined Business Processor is displayed in the Business Processor window.
5.7 Expression
An Expression is a user-defined tool that supplements other IDs and enables you to manipulate data flexibly.
Expression has three different uses:
• To specify a calculated column that the Oracle Financial Services Analytical Application derives from other columns in the database.
• To calculate assignments in data correction.
• To create calculated conditions in data and relationship filters.
Example: Calculations such as average daily balances, current net book balance, average current net book balance, and weighted average current net rate can be created through Expressions.
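As a minimal sketch of one such calculation (the table and column names are assumptions used only for illustration), a weighted average current net rate can be derived from other columns as follows:
SELECT SUM(i.cur_net_rate * i.cur_net_book_balance)
       / SUM(i.cur_net_book_balance) AS wtd_avg_cur_net_rate   -- weighted average current net rate
FROM   instrument_table i                                      -- hypothetical instrument table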
Based on the role that you are mapped to, you can access read, modify or authorize Expression window.
For all the roles and descriptions, see Appendix A.
The roles mapped to Expression are as follows:
Expression Access
Expression Advanced
Expression Authorize
Expression Phantom
Expression Read Only
Expression Write
The Expression Summary window displays the list of pre-defined Expressions with other details such as
the Expression Name, Folder Name, Return Type, Created By, and Creation Date. For more information on
how object access is restricted, see Object Security in Dimension Management module section.
You can also search for a specific Expression definition based on Folder Name, Expression Name, or
Return Type and view the list of existing definitions within the system.
NOTE Expression Name: The characters &' " are restricted in the
name field.
Description: The characters ~&+' "@ are restricted in the
description field.
NOTE A user with Phantom and Write role can modify or delete the
expression even though the access type is selected as Read-
only.
Read/Write: Select this option to give all users the access to view, modify (including Access
Type) and delete the expression.
3. In the Entity Group Selection grid:
In the Variants section, click button. The Variant Selection window is displayed.
Select the Entity Type and Entity Name from the drop-down lists.
You can also click to deselect a Member or click to deselect all Members.
Click OK.
The selected Entity Name and Members are displayed in the Variants section in the New
Expression window.
In the Variants section, click “+” to expand the Entity Group and double-click to select the required Entity.
The selected Entity is displayed in the Expression grid.
In the Function section, click “+” to expand Functions and select a function such as
Mathematical, Date, String, or Others options.
The selected Function is displayed in the Expression grid. For more information see Function
Types and Functions.
In the Operators section, click “+” to expand Operators and select an operator such as
Arithmetic, Comparison, or Others.
The selected Operator is displayed in the Expression grid. For more information see Operator
Types.
You can click button from the Add Constant grid to specify a Constant Value. Enter
the numerical value and click .
In the Expression grid, you can right-click on the expression and do the following:
You can also click button in the Expression grid to clear the Expression.
4. Click Save to validate the entries and save the new Expression.
2. Click Edit button and the Edit – Expression window is displayed. Modify the required changes.
For more information, see Add Expression Definition.
3. Click Save and upload the changes.
2. Click Copy button in the Expressions tool bar. Copy button is disabled if you have selected
multiple checkboxes.
The Copy – Expression window is displayed.
3. In the Copy – Expression window you can:
Create new Expression with existing variables. Specify a new Expression Name and click Save.
Create new Expression by updating the required variables. Specify a new Expression Name and
update the required details.
For more information, see Add Expression Definition. Click Save.
The new Expression details are displayed in the Expression Summary window.
2. Click button in the Expressions tool bar. The Check Dependencies button is disabled if you
have selected multiple expressions.
The Dependent Objects window is displayed with Object id, Name, and id type of the dependent Objects.
5.8 Filter
Filters in the Infrastructure system allow you to filter metadata using the defined expressions.
The Filters Summary window displays the list of Filters created in all public folders, shared folders to which
you are mapped and private folders for which you are the owner, along with the other details such as the
Name, Type, Modification Date, and Modified By.
For more information on how object access is restricted, see Object Security in Dimension Management
module section.
You can also search for a specific Filter definition based on Folder Name, Filter Name, or Type and view
the list of existing definitions within the system. If you have selected Hierarchy from the Type drop-down
list, the Dimension drop-down list is also displayed.
Table 39: Fields in the Filter Definition window and their Description
Field Description
Filter Details
Folder Name Select the Folder Name where the Filter is to be stored from the drop-
down list.
The Folder selector window behavior is explained in User Scope section.
Click to create a new private folder. The Segment Maintenance window
is displayed. For more information, see Segment Maintenance.
Note: You can select Segment/Folder Type as Private and the Owner
Code as your user code only.
Filter Name Enter the filter name in the Filter Name field.
Note: The characters &’ ” are restricted.
3. From the Filter Type Selection pane, select the Filter Type from the drop-down list.
There are four different Filter Types available in the Filter Type Selection grid as tabulated. Click the
links to navigate to the appropriate sections.
The following table describes the fields in the Filter Type pane.
Table 40: Fields in the Filter Type pane and their Description
Filter Description
Data Element Data Element Filter is a stored rule that expresses a set of constraints.
Only columns that match the data type of your Data Element selection are
offered in the Data Element drop-down list box.
Examples: Balances between 10,000 and 20,000; Accounts opened in the current month; Loans with amortization terms greater than 20 years.
Data Element Filters can access most instrument columns and most
columns in the Management Ledger. Data Element Filters are used within
other OFSAA rule types
(e.g., Allocation rules, Transfer Pricing rules, Asset | Liability Management
rules, and others).
Hierarchy Hierarchy Filter allows you to utilize rollup nodes within a Hierarchy to
help you exclude (filter out) or include data within an OFSAA rule.
Example: You might want to process data for a specific set of divisions or
lines of business where you have a Hierarchy rule that expresses those
divisions or lines of business as rollup nodes. A Hierarchy Filter could be
constructed to "enable" the Commercial and Retail lines of business while
NOT enabling the Wealth Management line of business. Each of these lines
of business might include a handful or even thousands of cost centers.
When incorporated into an OFSAA processing rule, this Hierarchy Filter
would include every cost center in the Commercial and Retail lines of
business.
Group Group Filters can be used to combine multiple Data Element Filters with a
logical "AND".
Example: If Data Element Filter #1 filtered on mortgage balances greater than 100,000 and Data Element Filter #2 filtered on current mortgage interest rates greater than 6%, you could construct a Group Filter to utilize both Data Filters. In this case, the resulting Group Filter would constrain your data selection to mortgage balances greater than 100,000 AND current mortgage interest rates greater than 6%. (A SQL sketch of this example follows this table.)
Attribute Attribute Filters are created using defined Attributes. Attribute Filters facilitate filtering on one or more Dimension Type Attributes. For each attribute, you can select one or more values.
Example: Consider a filter that selects all records where the dimension
Common Chart of Account member represents an attribute value Expense
account, i.e., the attribute "Account Type" = Expense.
Now, using Attribute Filters, you can specify complex criteria as given
below:
Common Chart of Accounts where the Account Type attribute is Earning
Assets or Interest-bearing Liabilities, and the Accrual Basis attribute is
Actual/Actual
Also, you could further refine the filter by adding another condition for:
Organizational Unit where the Offset Org ID is a specific Org member
The Filter then saves these criteria rather than the member codes which
meet the criteria at the time the Filter is saved. During execution, the
engine dynamically selects all records from your processing table (e.g.
Mortgages, Ledger, etc.), which meet the specified member attribute
criteria.
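A minimal SQL sketch of the Group Filter example above (the table and column names are hypothetical) shows how the two Data Element Filters are combined with a logical AND:
SELECT *
FROM   fct_mortgages                    -- hypothetical processing table
WHERE  mortgage_balance      > 100000   -- Data Element Filter #1
AND    current_mortgage_rate > 6        -- Data Element Filter #2, joined by the Group Filter with AND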
After the required filter conditions are defined, save the Filter definition.
Select any of the following Filter Classification Type from the drop-down list:
Classified - This is the default selection and displays all the classified EPM specific
entities. If you are an EPM user, you need to select this option while defining Data Element
Filter to list all the related entities.
Unclassified - This option displays all the non-classified i.e. non EPM specific entities. If
you are a non EPM user, you need to select this option while defining Data Element Filter
to list all the related entities.
All - This option will select all the tables available in the selected Information Domain, irrespective of whether an entity's table is classified or not.
Select the required database table from the Entity Name drop-down list. The associated
members are displayed in the Show Members section.
Select the required member and click . The member is listed in the Selected Members panel.
Click to move all Members.
For each column you wish to include in your Data Filter definition, you must specify one of the following Filter Methods (a combined SQL sketch follows the table):
The following table describes the fields in the Data Filter Definition.
Table 41: Fields in the Data Filter Definition window and their Description
Filter Description
Specific Values Specific Values are used to match a selected database column to a
specific value or values that you provide. You may either include or
exclude Specific Values.
You can add additional values by clicking the Add button. Click
adjacent to Add button to add 3, 5, 10 rows by selecting the checkbox
adjacent to 3, 5, or 10 respectively. You can add custom number of rows by
specifying the number in the text box provided, as shown and click .
Ranges Ranges are used to match a selected database column to a range of values
or to ranges of values that you provide. You may either include or exclude
Range values.
Range Type is available for the OFSA Datatypes Term, Frequency, Leaf, Code, and Identity, and for the Column Datatypes Date, Numeric, and Varchar.
You can add additional values by clicking the Add button. Click
adjacent to Add button to add 3, 5, 10 rows by selecting the checkbox
adjacent to 3, 5, or 10 respectively. You can add custom number of rows by
specifying the number in the text box provided, as shown and click .
Another Data Element Another Data Element is used to match a selected database column to
another database column. When constructing an Another Data Element
Filter Method, you may only compare a column to other columns that you
have already selected (the Data Element drop-down list box will only
contain columns that you have already selected).
You may use any of the following operators when choosing the Another
Data Element Filter Method:
=, <> (meaning "not equal to"), <, >, <=, or >=.
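The following sketch (with hypothetical table and column names) shows the kind of condition each Filter Method contributes to the generated WHERE clause:
SELECT *
FROM   fct_accounts                                   -- hypothetical processing table
WHERE  org_unit_id IN (101, 102)                      -- Specific Values (include)
AND    origination_date BETWEEN DATE '2023-01-01'
                            AND DATE '2023-12-31'     -- Ranges (include)
AND    current_balance >= original_balance            -- Another Data Element (column-to-column comparison)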
Click Add to list the completed filter conditions in the Filter Conditions grid.
Click Update after modifying a filter condition to update in the Filter Conditions grid.
The Show Hierarchy tab displays the leaves in each node in ascending order of Members.
To sort the nodes alphabetically, set HIERARCHY_IN_FILTER_SORT-$INFODOM$-$DIMENSION_ID$=$VALUE$ to Y in the AMHMConfig.properties file present in the deployed location. You should add such an entry for every required Dimension ID for the sort functionality to work for those dimensions.
For example:
HIERARCHY_IN_FILTER_SORT-OFSAAINFO-4345=Y
Restart the servers after making any change to the AMHMConfig.properties file for the change to take effect.
NOTE Select the Pagination icon to view more options under the
available components. Click the More Options (three dots) icon
to enable the Pagination buttons.
Click button to search for a hierarchy member using Dimension Member Alphanumeric
Code, Dimension Member Numeric Code, Dimension Member Name, or Attribute and by keying
in Matching Values in the Search dialog. The search results are also displayed in the ascending
order of Member Names.
Click to collapse the members under a node.
You can also click button to find a member present in the nodes list using key words. For a large tree (more than 5,000 nodes), this search will not return any values if the tree is not expanded.
4. Click Save to validate the entries and save the filter details.
The Show Members tab displays all the selected nodes in a list view, which helps you visualize all
the selected nodes as a list rather than as a tree. Currently, this feature is available in the Edit and
View mode of the Hierarchy Filter.
click . The selected members are displayed in the Selected Filters section. Click to select all
the Members.
You can click to deselect a Member or click to deselect all the Members.
You can also click button to search for a member in the Data Element Filter Search dialog using
Folder Name and Filter Name.
2. Click Save to validate the entries and save the filter details.
Select Attribute Value(s) in the Attribute Values grid and click button to delete it.
You can use the Attribute Values present in the Attribute Values grid to generate conditions.
5. Click Add button in the Attribute Values grid. The Filter Conditions grid is populated with the filter
condition using all the Attribute values.
You cannot define two conditions using the same attributes, because conditions are joined with a logical ‘AND’ and this would make the query invalid.
In the Filter Conditions grid, you can select a condition to view the Attribute Values used to generate
it and can update the condition.
You can also click button to view the SQL statement in View SQL window. Click button to
view a long filter condition in View Condition dialog.
6. Click Save. The Attribute Filter definition is saved.
2. Click Edit button and the Edit – Filter Details window is displayed. Modify the required changes.
For more information, see Add Filter Definition.
3. Click Save to save the changes.
2. Click Copy button in the Filters tool bar. Copy button is disabled if you have selected multiple
checkboxes. The Copy – Filter Details window is displayed.
3. In the Copy – Filter Details window you can:
Create new filter definition with existing variables. Specify a new Filter Name and click Save.
Create new filter definition by updating the required variables. Specify a new Filter Name and
update the required details. For more information, see Add Filter Definition. Click Save.
The new filter definition details are displayed in the Filters Summary window.
2. Click button in the Filters tool bar. The Check Dependencies button is disabled if you have
selected multiple members.
The Dependent Objects window is displayed with Object ID, Name, and ID Type of the dependent
Objects.
2. Click View SQL button. The SQL equivalent of the selected filter is displayed in the View SQL
window.
1. Select the checkbox adjacent to the Filter Name whose details are to be removed.
NOTE The Definitions that you select within a page are considered for download. To select records across multiple pages, consider performing multiple downloads or increasing the page size to display more records.
You can also select all the filters that appear in the current page
by clicking the Select All checkbox in the header of the records.
2. Click Download.
A prompt appears to download the OFSAA_FILTER.xls file.
3. Download the XLS file to your local machine.
To delete the Filter Definitions in the XLS file, follow these steps:
1. Delete a row to remove the Filter Conditions from the definition.
To add or update Filter Definitions in the XLS file, you must populate certain mandatory columns in each of the sheets. The details are described in the following:
NOTE The Bulk Upload utility does not support incremental updates
of the definitions (Delta updates). The upload deletes the
existing Filter Conditions and replaces them with the Filter
Conditions present in the XLS file.
• SEQUENCE
• DIMENSION ID
• ATTRIBUTE ID
• ATTRIBUTE VALUE
Other columns and their values are for reference purposes and they will not affect the Filter Definitions
when uploaded back in the OFSAA System.
NOTE The Bulk Upload utility does not support incremental updates
of the definitions. The upload deletes the existing Filter
Conditions and replaces them with the Filter Conditions present
in the XLS file. In other words, the entries present in the XLS file
for a particular definition is the final metadata for that
definition in OFSAA.
The following sections describe the options to upload the Filter Definitions.
To upload the Filter Definitions through the Command Line, follow these steps:
1. Copy the modified XLS file to the $FIC_HOME/utility/FilterUpload directory in the OFSAA
Installation.
2. Run the Shell Script Utility from the $FIC_HOME/ficdb/bin directory as shown in the following:
./FilterUploadUtility.sh <infodom> <userid> <UNIQUE_IDENTIFIER>
For example,
./FilterUploadUtility.sh INFODOM USER UNIQUE_IDENTIFIER_1111
The parameter [UNIQUE_IDENTIFIER] is optional and it helps users trace issues registered in the
Database Table that is mapped to the UNIQUE_IDENTIFIER.
The XLS is uploaded and the Filter Definitions are saved post validation.
The respective errors and logs are available in the log files and the Config Schema. You can filter them by
UNIQUE_IDENTIFIER and view them.
The Table names with reference to the logs are: AAI_UTILS_AUDIT and AAI_UTILS_AUDIT_DETAILS.
5.8.9.4.2 Upload the Filter Definitions through the OFSAA Batch Window
To upload the Filter Definitions through the OFSAA Batch Window, follow these steps:
1. Go to the Batch Maintenance Window.
2. Select Add to add a new Batch or proceed to step 5 to update an existing Batch.
3. Provide the required details such as Batch Name, Batch Description, Batch ID, Sequential Batch, and
Duplicate Batch.
4. Click Save to save the Batch.
5. Select the Batch again from the list of all Batches to add a task.
6. Click Add to add a new task or proceed to step 13 to update an existing task.
7. Enter the Name and Description for the task.
8. Select Run Executable from the Components drop-down.
9. Select Datastore Type, Datastore Name, and Primary IP for Runtime Processes.
10. Enter details in the Executable field as shown in the following:
./FilterUploadUtility.sh, [infodom], [userId]
For example,
./FilterUploadUtility.sh, INFODOM_NAME, EXAMPLEUSER
11. Enter Wait Value, Batch Parameter, and Optional Parameters.
12. Click Save to save the task.
13. Go to the Batch Execution Window.
14. Select the required Batch.
15. Enter the date for the Batch to Run.
16. Select Execute Batch.
17. Click Ok in the confirmation popup.
The selected Batch runs to update the definitions in the OFSAA System.
You can set a mapper definition as the default Security mapper for an information domain. Based on the members mapped in a security mapper, the hierarchy browser window in the OFSAAI framework displays the members of the hierarchy along with its descendants.
For understanding the Hierarchy Security feature, see Scenario to Understand Hierarchy Security section.
To access the Map Maintenance window, you should be mapped to Access role. To create, modify, and
delete a mapper, you should be mapped to Write role.
Based on the role that you are mapped to, you can access, read, modify, or authorize Map Maintenance.
For all the roles and descriptions, see Appendix A. The roles mapped to Map Maintenance are as follows:
Mapper Access
Mapper Advanced
Mapper Authorize
Mapper Phantom
Mapper Read Only
Mapper Write
The Map Maintenance window displays the Name, Version, Description, Dynamic, Inherit Member, Map
Type, and Database View name for the available mapper definitions created in the selected Segment and
Infodom. Segments facilitate the classification of related metadata in a single segment. You have access to
only those metadata objects that are mapped to the same segment to which you are mapped.
1. Click Create new Map from the tool bar. The Mapper Definition – New window is displayed.
All hierarchies including the default user group hierarchy for the selected Infodom are listed under
the Members pane.
2. Enter the mapper definition details as tabulated:
The following table describes the fields in the Mapper Definition window.
Field Description
Dynamic By default, the checkbox is selected and you do not have the option to
deselect this. The dynamic attribute is associated with a mapper definition
which facilitates the accommodation of latest members of a slowly
changing dimension by leveraging the push down functionality.
Map Type This drop-down list is enabled only if the Dynamic checkbox is selected.
Otherwise, data filter is selected and this field is disabled.
Select the Map type. The available options are:
Data Filter: Select this option to define a data filter type mapping, which
does not require a user group hierarchy to be selected among the
participating hierarchies.
Security Filter: Select this option to define a security filter type mapping,
which can be used to restrict access to members of a hierarchy based on
user groups. For a security filter, the user group hierarchy should be
attached with the definition. You can add other hierarchies to this definition
and will not have the option of saving the mapper definition without using a
User Group hierarchy.
Pushdown Select the checkbox if you want implicit push down of the mappings
whenever mappings are modified and saved through the Mapper
Maintenance window.
Database Entity Name Enter the name for the table/entity to be created in the atomic schema that will be used to store the exploded mappings. The database entity name can be alphanumeric; however, it should not start with a numeric character.
Database View Name Enter the Database View name to be created for the selected database
entity. The View will be created in the atomic schema with Hierarchy code
as the column name.
3. Click the required hierarchies from the Members pane. The selected hierarchies are displayed under
the Selected Members pane.
You can add multiple mappings among the hierarchies. The mappings will be stored in the database entity/table you have created during the mapper definition for further processing, that is, the push down operation. After defining all mappings, you can push down the mappings to be effective in the system (the push down is implicit if that option was selected at mapper definition time). You need to be mapped to the role Mapper Access to access the Mapper Maintenance feature.
To define the mappings:
1. From the Map Maintenance window, select the mapper definition and click Mapper
Maintenance. The Map window is displayed.
Based on the hierarchies participating in the mapper definition, the search fields will be displayed.
The Search fields are enhanced with the autocomplete drop-down feature. You need to enter at
least 4 characters to display the drop-down options.
2. Click Add on the Member Combinations toolbar.
The hierarchies that were selected in the Mapper Definition window appear in the Map window,
along with their members.
You can select (pagination) icon to view more options under the selected member.
3. Select the required hierarchy members from each hierarchy and click View Mappings to view the
already available mapping combinations with the selected hierarchy members. The View Mappings
window is displayed.
4. Click Close.
5. To add a new mapping from the Add Mappings window, select the required hierarchy members
from each hierarchy and the corresponding user group to which you want to map in case of security
mapper and click Go. Each mapping definition gets listed in the below grid. You should select at
least one member from each hierarchy to obtain a complete mapping.
NOTE If a child is mapped and parent is not mapped, the parent will
be displayed as disabled in the hierarchy browser window.
Click to focus only on the selected branch. The Available Values pane shows the members of the selected branch only.
Click to display member's numeric codes on the right. The icon changes to .
Click to display member's numeric codes on the left. The icon changes to .
Click to show only member names. This is the default view. The icon changes to .
Click to display member's alphanumeric codes on the right. The icon changes to .
Click to display member's alphanumeric codes on the left. The icon changes to .
Click to display only member names. This is the default view. The icon changes to .
6. Enter the mapping details as tabulated:
The following table describes the fields in the Mapping Definition window.
Table 43: Fields in the Mapping Definition window and their Description
Field Description
Macro This drop-down list allows you to define conditions based on which the
members will be mapped. The options are:
Self Only: Select this option if you want only the selected member to be
mapped. If this option is selected, the hierarchy browser will display the
selected member in enabled mode. If it has any descendants, those will be
displayed in disabled mode.
Self & Desc: Select this option if you want the selected members along its
descendants to be mapped.
Exclude Select Yes if you want to exclude certain members from being mapped.
For example, if you want to map a hierarchy to all user groups except one
user group say UG1, then map the hierarchy to UG1 and select the Exclude
option as Yes. This will ensure that all users belonging to user groups
except UG1 can access all the members of the hierarchy.
7. Click Save. All the mappings will be listed in the Member Combinations grid.
8. You can use the copy functionality to copy an already created mapping and edit the required fields.
To copy a mapping,
a. Select the mapping you want to copy, from the Member Combinations grid and click Copy.
The Copy Mapping window is displayed with all Hierarchies participating in the mapping.
b. Select the Macro and Excluded information for the mapping and click Save. The copy of the
mapping will appear in the Member Combinations grid.
9. Click Pushdown to refresh the mapping of participating hierarchies available in the system. A service will push down the mappings based on the Config Schema data (used combinations having macros) into the atomic schema (exploded mappings). The pushed down mappings, that is, the exploded mappings, are displayed in the Mapped Members pane.
10. Select a mapping from the first panel and click Remove if you want to remove the mapping from
the mapper. You should click Pushdown to effect these changes in the system.
Click Default Security Map button on the toolbar to set a mapper as a default secure mapper. Once
selected, this information will be displayed in the Mapper Summary window. A delete icon will also be
available adjacent to it to remove the default security map from the system.
NOTE Only a Security Filter type mapper definition that has the user group hierarchy (seeded by OFSAAI) in its definition can be identified as a default security mapper, and this validation is performed by the application. When a mapper is set as the default security map in an information domain, it overrides the existing default security map, if present, in the Infodom.
2. Click Edit Map button from the tool bar. The Mapper Definition window is displayed.
3. Update the Comments field or the push down option as desired (The push down option will be
available for edit, only in case of dynamic mapper definitions and this option will be disabled in case
of non-dynamic mapper definitions).
4. Click Save and update the changes.
The parent node /default node of the new hierarchy will get mapped with existing hierarchy
member combinations
You need to select a hierarchy that has default data. Otherwise, an alert message is displayed
prompting you to select a hierarchy with default data.
• You cannot edit the fields Dynamic and Map Type.
• Pushdown will not happen automatically. You need to do the Pushdown operation of the new
Mapper definition explicitly.
To copy an existing Mapper Definition in the Map Maintenance window:
1. Select the checkbox adjacent to the Mapper Name which you want to copy.
2. Click Copy Map button in the tool bar. The Copy button is disabled if you have selected multiple
checkboxes. The Mapper Definition- Copy window is displayed.
3. Enter the required details in the Description, Database Entity Name, Database View Name and
Comments fields. For more information, see Creating a Mapper Definition.
4. Select the Pushdown checkbox if you want implicit push down of the mappings whenever
mappings are modified.
5. Select the required hierarchies from the Members pane. The selected hierarchies are displayed
under the Selected Members pane. Click Save.
The new Mapper definition details are displayed in the Map Maintenance window. Select the new
Mapper and click Mapper Maintenance button in the tool bar to add mappings to the newly
added hierarchies.
2. Click Delete Map button from the tool bar. A confirmation dialog is displayed. If a default
security map was selected for deletion, then the same will be indicated in the confirmation dialog.
The mapper code will be followed by ‘(D)’ to indicate that the default security map has also been
selected for deletion.
3. Click OK. The Mapper definition details are deleted.
5.10.1 Dimension
Business Dimension within the Infrastructure system facilitates you to create a logical connection with
measures. It gives you various options across which you can view measures. A Business Dimension is a
structure of one or more logical grouping (hierarchies) that classifies data. It is the categorization across
which measures are viewed. A dimension can have one or more hierarchies.
You can access Business Dimension window by expanding Unified Analytical Metadata and Analytics
Metadata within the tree structure of the LHS menu and selecting Dimension.
The dimension specific details are explained in the following table.
Field Description
Dimension Properties Displays the Dimension Type and Data Type of the selected dimension object.
Based on the role that you are mapped to, you can access read, modify or authorize Dimension. For all the
roles and descriptions, see Appendix A. The roles mapped to Business Dimension are as follows:
• Dimension Access
• Dimension Advanced
• Dimension Authorize
• Dimension Phantom
• Dimension Read Only
• Dimension Write
Based on the user requirements, you can define different dimensions as Regular, Time, or Measure. A Dimension combined with measures helps in business queries. Since dimension data is collected at the lowest level of detail and then aggregated into higher-level totals, it is useful for analysis.
The Business Dimension window displays the list of pre-defined Business Dimensions with their Code, Short Description, Long Description, and Dimension Type. When defining a Business Dimension for the first time, you are required to enter the Dimension code and a description, select the dimension type and data type, and map the available hierarchies to the dimension.
You can also search for a specific business dimension based on the Code, Short Description, and Authorization status, or view the list of existing business dimensions within the system.
Table 45: Fields in the Add Business Dimension Details window and their Description
Field Description
Code Enter a distinct code to identify the Dimension. Ensure that the code is alphanumeric with a maximum of eight characters in length and there are no special characters except underscore “_”.
Note the following:
The code can be indicative of the type of Dimension being created.
Short Description Enter a Short Description based on the defined code. Ensure that the description is of a maximum of eight characters in length and does not contain any special characters except underscore “_”.
Dimension Type Select the Dimension Type from the drop-down list. The available options are:
Regular: A regular dimension can have more than one hierarchy mapped to it. The option of mapping multiple hierarchies is available only for a non-SQLOLAP environment.
Time: In a time dimension, the hierarchy defined has leaves/nodes of high time granularity.
Measure: A measure dimension can have hierarchies of only type measure mapped to it. The Measure hierarchy type is specific to Essbase MOLAP.
3. Click button in the Hierarchies grid. The Hierarchy Browser window is displayed.
Based on the dimension type, the hierarchies are displayed in the Members pane. You can expand
and view the members under the Hierarchies by clicking “+” button.
Select the hierarchies from the Members pane and click . The selected hierarchies are
moved to the Selected Members pane.
2. Click Edit button from the Business Dimension tool bar. The Edit Business Dimension window is
displayed.
3. Update the required details. For more information, see Create Business Dimension.
4. Click Save and update the changes.
2. Click Delete button from the Business Dimension tool bar. A confirmation dialog is displayed.
3. Click OK. The Business Dimension details are marked for delete authorization.
5.10.2 Cubes
Cube represents a multi-dimensional view of data which is vital in business analytics. It gives you the
flexibility of defining rules that fine-tune the information required to reflect in the hierarchy. Cube
enhances query time and provides a decision support for Business Analysts.
A cube is a combination of measures and dimensions, that is, measures represented along multiple
dimensions and at different logical levels within each dimension. For example, in a cube, you can view
Number of Customers, Number of Accounts, and Number of Relationships by Product, Time, and
Organization.
The Essbase Cube Summary window displays the list of pre-defined Essbase Cubes with their Code, Short
Description, Long Description, and MDB Name. By clicking the Column header names, you can sort the
column names in ascending or descending order. Click if you want to retain your user preferences so
that when you login next time, the column names will be sorted in the same way. To reset the user
preferences, click .
You can add, view, edit, copy, and delete an Essbase Cube. You can search for a specific Essbase Cube
based on the Code, Short Description, and Authorization status.
When you are defining Essbase cube for the first time, you need to specify the Cube definition details and
the Cube-Building components such as Dimension, Variation, Intersecting details, DataSet, Formulae, and
Roll Off period details. Your User Group should be mapped with the User Role ‘Essbase Cube Write’ to
create or add an Essbase Cube.
Note the following:
Table 46: Fields in the Essbase Details window and their Description
Field Description
Code Enter a distinct code to identify the Cube. Ensure that the code is alphanumeric with a maximum of 8 characters in length and there are no special characters except underscore “_”.
Note the following:
The code can be indicative of the type of Cube being created.
Short Description Enter a Short Description based on the defined code. Ensure that the description is of a maximum of 8 characters in length and does not contain any special characters except underscore “_”.
MDB Name Enter the name by which you want to identify the cube while saving it in a multi-dimensional database.
Saving a cube to a multi-dimensional database is different from saving the Cube definition, wherein the definition (like all other metadata definitions) is stored in the repository. When saved, the cube details are updated with the cube name that you have attributed to it. Example: NoofProd (Number of Products).
Note: Ensure that the name is within 1 to 8 characters in length and can contain alphabetical, numerical (only 0-9), or alphanumerical characters without special characters and extra spaces.
Is Build Incremental Turn ON the toggle button if you wish to capture all incremental changes made to the database. The cube definitions with the Is Build Incremental toggle button turned ON can be executed with different MIS dates.
In the Dimension tab, the Available list consists of the pre-defined Dimensions.
Select the required Dimension for the cube and click button.
In the Variation tab, you can define the Variation by mapping the Dimension
against the defined Measure.
Variation
Note that the Intersection option is specific to Count Distinct Measures. The
Count Distinct Measures should be intersected only across those dimensions
on which a duplicate is expected for that measure.
For example, there can be no customer who has both gender as Male and
Female. Thus intersecting the Count distinct measures across a Gender
dimension will not make sense. Similarly, the Count Distinct measures will
have duplicates across Products or Regions. Thus, the intersecting can be
across those dimensions (Product/Region). For more information, see
“Selecting Aggregation Function” in Business Measures section.
Intersection
Figure 128: Intersection tab
Select the required Dimension from the drop-down list corresponding to the
Measure.
Note: Mapped Intersection should be a subset of mapped Variation.
In the Dataset tab, you can select the Dataset for the cube along with the
additional filters like the Date Filter and Business Exclusions.
Dataset
Select the required Dataset from the drop-down list. The selected From
Clause and Join Condition for the selected Dataset are displayed.
To define the Date Filter, click button. The Expression Builder window is
displayed. Define the required expression by selecting the appropriate Entities,
Functions, and Operator. Click OK.
To define the Business Exclusion, click button. The Expression Builder
window is displayed. Define the required expression by selecting the
appropriate Entities, Functions, and Operator. Click OK.
Note that the Formulae tab is specific to Essbase MOLAP. In the Formulae tab,
you can apply filters to a hierarchy node.
Formulae
When you select a Dimension from the Selected Dimensions drop-down list,
the mapped Hierarchies will be listed out in the Hierarchies drop-down list.
Click button adjacent to Node Formula. The Expression Builder window is
displayed. Define the required expression by selecting the appropriate Entities,
Functions, and Operator. Click OK.
In the Roll Off tab, you can define the start date of the cube to specify the
history of the data which is to be picked up during aggregation. The maximum
period of data history that can be specified is 24 months. The Roll Off option is
enabled only to BI enabled hierarchies.
Roll Off
Turn ON the Roll Off Required toggle button.
Click to specify the Roll Off Period value (in integer) for which the data
should be maintained in the system. The data will be automatically rolled off
with the addition of new nodes to the cube.
Select the Dimension for which you want to specify the roll off period from the
drop-down list.
Select the Level from the drop-down list. The list contains the hierarchy levels
of the selected Dimension.
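For illustration only, expressions built in the Expression Builder for the Date Filter or Business Exclusion
typically resolve to simple SQL-style conditions on columns of the selected Dataset. The entity and
column names below are hypothetical and are not part of the product:
DIM_DATES.D_CALENDAR_DATE >= TO_DATE('01-JAN-2023', 'DD-MON-YYYY')
FCT_ACCOUNT.F_ACCOUNT_CLOSED_FLAG = 'Y'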
4. Click Save and save the Essbase Cube Definition details. A confirmation dialog is displayed.
The Cube definitions are stored in repository and accessed for query. Once saved, the cube details
are displayed with non-editable Code and Short Description fields.
You can view the metadata of a selected Essbase Cube definition at any given point. You need to be
mapped to the User Role Essbase Read Only to view Essbase Cube definition.
To view the existing Essbase Cube definition details:
From the Essbase Cube Summary window, select the Essbase Cube definition and click View.
The Essbase Cube Details window is displayed.
The User Info tab displays the metadata properties such as Created By, Creation Date, Last
Modified By, Modified Date, Authorized By, and Authorized Date.
The User Comments tab has a text field to enter additional information as comments about the
created Cube definition.
Click Close.
The Copy function is similar to “Save As” functionality and helps you to copy the pre-defined Essbase
Cube details to quickly create another Essbase Cube. Your User Group should be mapped to ‘Essbase
Cube Write’ User Role to copy the Cube details.
To copy Essbase Cube definition:
1. From the Essbase Cube Summary window, select the Essbase Cube definition and click Copy.
The Essbase Cube Details window is displayed.
2. Enter the Code, Short Description, Long Description and MDB Name. For more information, see
Create Essbase Cube section. You can also modify the cube components as required.
3. Click Save and save the updated details. A confirmation dialog is displayed.
To edit an Essbase Cube definition:
1. From the Essbase Cube Summary window, select the Essbase Cube definition and click Edit. The
Essbase Cube Details window is displayed.
2. Modify the Essbase Cube definition with the cube components details as required. For more
information, see Create Essbase Cube section.
3. Click Save and save the updated details. A confirmation dialog is displayed.
You can remove Essbase Cube definitions that you have created and that are no longer required in the
system by deleting them from the Essbase Cube Summary window. You need to have the Essbase Cube
Write User Role mapped to delete an Essbase Cube. The Delete function permanently removes the Essbase
Cube details from the database. Ensure that you have verified the details as indicated below:
• An Essbase Cube definition marked for deletion is not accessible for other users.
5.11 References
Function Name: Ceiling
Notation: Ceiling (a)
Description: Rounds a value to the next highest integer.
Syntax: Ceiling(column or expression)
Example: 3.1 becomes 4.0, 3.0 stays the same.
Note : You cannot use the Maximum and Minimum functions as calculated columns or in Data Correction
Rules. The Maximum, Minimum, Sum, and Weighted Average functions are multi-row formulas. They use
multiple rows in calculating the results.
Database Functions Specific to MS SQL Server, which consists of Date & Time, Math, Transact SQL, and
System functions.
Operator Types
Arithmetic +, -, %, * and /
Comparison '=', '!=', '< >', '>', '<', 'IN', 'NOT IN', 'ANY', 'SOME', 'LIKE' and 'ALL'.
Others The Other operators are 'PRIOR', '(+)', '(' and ')'.
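For illustration only, a condition built in the Expression Builder from the functions and operators above
might look like the following; the table and column names are hypothetical:
Ceiling(FCT_ACCOUNT.N_BALANCE / 1000) > 5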
Regular In a Regular Hierarchy Type, you can define the following Hierarchy Sub Types:
• Non Business Intelligence Enabled – In a non Business Intelligence Enabled Hierarchy, you need to
manually add the required levels. The levels defined will form the Hierarchy.
• Business Intelligence Enabled – You can enable a Business Intelligence Hierarchy when you are not
sure of the Hierarchy structure leaf values or the information is volatile, and also when the Hierarchy
structure can be directly selected from RDBMS columns. The system will automatically detect the values
based on the actual data.
• Parent Child – This option can be selected to define a Parent Child Type hierarchy.
Measure A Measure Hierarchy consists of the defined measure as nodes and has only the Non Business
Intelligence Enabled as Hierarchy Sub Type.
You can select the required Business Hierarchy from the drop-down list and specify the Hierarchy Sub
Type details. The window options differ on selecting each particular Hierarchy type. Click on the following
links to view the section in detail.
• Regular Hierarchy
• Measure Hierarchy
• Time Hierarchy
When you have selected Regular - Non Business Intelligence Enabled Hierarchy option, do the following:
1. Click button in the Entity field. The Entity and Attribute window is displayed.
You can either search for a specific Entity using the Search and Filter pane or select the
checkbox adjacent to the required Entity in the Available Entities list. The list of defined
Attributes for the selected entity is displayed in the Available Attributes list.
You can either search for a specific Attribute using the Search and Filter pane or select the
checkbox adjacent to the required Attribute in the Available Attributes list.
Click Save. The selected Entity and Attribute is displayed in the Add Business Hierarchy window.
2. Click button from the Business Hierarchy tool bar. The Add Node Values window is displayed.
Table 53: Fields in the Add Node Values window and their Description
Field Description
Short Description Enter the required short description for the node.
Node Identifier Click button and define an expression in the Expression window for
the Node Identifier. For more information, see Create Expression.
From the Node Attributes grid, select Storage type from the drop-down list.
There are four Storage Types as tabulated.
The following table describes the fields in the Add Node Values window.
Table 54: Fields in the Add Node Values window and their Description
Field Description
Data Store This storage type allocates a data cell for the information to be stored in the database. The
consolidated value of the data is stored in this cell. The consolidation for the node occurs during the
normal process of rollup.
Dynamic Calc In this storage type, no cell is allocated and the consolidation is done when the data is
viewed. The consolidation for the node is ignored during the normal process of rollup. The consolidation
of the node occurs when you use the OLAP tool for viewing data.
Dynamic Calc & Store In this storage type, a cell is allocated but the data is stored only when the data is
consolidated when viewed, for the first time. The consolidation for the node is ignored during the normal
process of rollup. It occurs only when you first retrieve the data from the database.
Label In this storage type, a cell is not allocated nor is the data consolidated. It is only viewed.
Note: The Label storage type is specific to Essbase MOLAP. Storage type is applicable only for the Regular
hierarchy type and Measure. If the user wants to specify a dynamic calc option at level members in a
multi-level time hierarchy, the same is provided through the OLAP execution utility.
Click Save. The Node values are displayed in Add Business Hierarchy window.
3. Click Save in the Add Business Hierarchy window and save the details.
In the Business Hierarchy toolbar, you can also do the following:
Click button to Add subsequent node(s). For the second node or subsequent node, you can
define the Hierarchy Tree and Node Attributes details as explained below.
The following table describes the fields in the Hierarchy Browser pane.
Field Description
Add Hierarchy Node Click the button adjacent to the Child of field and select the required
Member in the Hierarchy Browser window. Click OK.
To edit the Node details, select the required Node level checkbox and click the button.
When you have selected Regular - Business Intelligence Enabled Hierarchy option, do the following:
1. Select Total Required checkbox, if you want the total of all the nodes.
2. Select List checkbox to retrieve information from database when queried.
NOTE List hierarchy can have only one level and you cannot select
List option if the Total Required option has been selected. See
List hierarchy.
3. Click button in the Entity field. The Entity and Attribute window is displayed.
You can either search for a specific Entity using the Search field or select the checkbox adjacent
to the required Entity in the Available Entities list. The list of defined Attributes for the selected
entity is displayed in the Available Attributes list.
You can either search for a specific Attribute using the Search field or select the checkbox
adjacent to the required Attribute in the Available Attributes list.
Click Save. The selected Entity and Attribute is displayed in the Add Business Hierarchy window.
4. Click button from the Business Hierarchy tool bar. The Add Hierarchy levels window is displayed.
Enter the details in Level Details section as tabulated.
Field Description
Short Description Enter the required short description for the level.
Level Identifier Click button and define an expression in the Expression window for the
Level Identifier. For more information, see Create Expression.
Level Description Click button and define an expression in the Expression window for the
Level Description. For more information, see Create Expression.
Click Save. The Level details are displayed in Add Business Hierarchy window.
The BI Hierarchy value refresh on the On Load property is not functional for data loads performed
through Excel Upload. It is applicable only for data loads which run through a batch process.
5. Click Save in the Add Business Hierarchy window and save the details.
In the Business Hierarchy tool bar, you can also do the following:
• Click button to Add subsequent Levels. For the second or subsequent levels, the levels are
incremented.
• To edit the Level details, select the required level checkbox and click the button.
When you have selected Regular - Parent Child Hierarchy option, do the following:
1. Click button in the Entity field. The Entity and Attribute window is displayed.
You can either search for a specific Entity using the Search field or select the checkbox adjacent
to the required Entity in the Available Entities list. The list of defined Attributes for the selected
entity is displayed in the Available Attributes list.
You can either search for a specific Attribute using the Search field or select the checkbox
adjacent to the required Attribute in the Available Attributes list.
Click Save. The selected Entity and Attribute is displayed in the Add Business Hierarchy window.
2. The Business Hierarchy section displays the pre-defined nodes such as Child code, Parent Code,
Description, Storage Type, Consolidation Type, and Formula. You can modify the node values by
doing the following:
Click button from the Business Hierarchy tool bar. The Edit Hierarchy Values window is
displayed.
Click button adjacent to the required node field and define the expression in the Expression
window. For more information, see Create Expression.
Click Save. The node details are displayed in Add Business Hierarchy window.
3. Click Save in the Add Business Hierarchy window and save the details.
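For illustration only, the Child Code and Parent Code values in a Parent Child hierarchy take the following
shape, where each row names a node and the node it rolls up to; the codes and descriptions below are
hypothetical:
Child Code   Parent Code   Description
ORG_110      ORG_100       Retail Banking - East
ORG_120      ORG_100       Retail Banking - West
ORG_100      ORG_000       Retail Banking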
1. Click button in the Entity field. The Entity and Attribute window is displayed.
You can either search for a specific Entity using the Search field or select the checkbox adjacent
to the required Entity in the Available Entities list. The list of defined Attributes for the selected
entity is displayed in the Available Attributes list.
You can either search for a specific Attribute using the Search field or select the checkbox
adjacent to the required Attribute in the Available Attributes list.
Click Save. The selected Entity and Attribute is displayed in the Add Business Hierarchy window.
2. In the Add Business Hierarchy window, select the Hierarchy Type as Measure.
3. Click button in the Entity field. The Entity and Attribute window opens.
A list of all the available entities will be listed under Available Entities. Select the required
entity. The attributes for that entity will be listed under Available Attributes.
Select the required Attribute and click Save. Click Cancel to quit the window without saving.
After saving, the Entity and Attribute will be displayed in their respective fields.
4. Click button from the Business Hierarchy tool bar. The Add Node Values window is displayed.
Enter the details in the Node Details section as tabulated.
The following table describes the fields in the Business Hierarchy tool bar.
Table 57: Fields in the Business Hierarchy Tool bar and their Description
Field Description
Short Description Enter the required short description for the node.
Table 58: Fields in the Business Hierarchy Tool bar and their Description
Field Description
Select Hierarchy Node Click button adjacent to Child of field and select the required
Member in the Hierarchy Browser window. Click OK.
• To edit the Node details, select the required Node level checkbox and click the button.
2. Select the Time Hierarchy Type from the drop-down list. Depending on the selection, the
Hierarchy Levels are displayed in the Business Hierarchy section.
You can also Edit the required Hierarchy Level. Select the checkbox adjacent to the required Level
and click button.
The Edit Hierarchy Levels window is displayed. You can update Short Description, Level Identifier,
and Level Description details.
3. Specify Hierarchy Start Date by selecting Month and Day from the drop-down list.
4. Click Save and save the Time Hierarchy details.
1. When you select the Ratio option, the window displays a simple ratio of two measures. To define the
relationship as a ratio, double click the first <<Select Measure>> option to open the Select Measure
pop-up.
2. The pop-up displays the Measure folder. Double-click the folder to expand the list of
measures under it. Depending on the Information Domain you are logged in to, the measures for
that domain are displayed.
3. Select the measure for which you want to compute the ratio and click OK. To close the pop-up
without saving the selected measure option, click Cancel. Repeat the same procedure to choose the
second measure.
When you select the Ratio as Percentage option, the window displays the ratio percentage of the
selected measures. When you select the Difference option, the value displayed will be the difference
between two selected measures. When you select the Addition option, the summated value of the
selected measures will be displayed. When you select the Percentage Difference option, the
percentage value of the selected measures is computed.
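For illustration, in terms of the two selected measures A and B, these options correspond to the following
computations; the exact base used for Percentage Difference is determined by the application:
Ratio = A / B
Ratio as Percentage = (A / B) * 100
Difference = A - B
Addition = A + B
Percentage Difference = ((A - B) / B) * 100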
Growth type computed measures are used to calculate the growth of a measure over a certain time
period. The Growth type measures are of two types:
• Absolute – where the growth of a measure is calculated in absolute terms, that is, as a simple
difference.
• Percentage – where the growth of a measure is calculated on a percentage basis.
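For illustration, if M(t) is the measure value for the current period and M(t-1) is the value for the previous
period of the selected granularity (Year, Quarter, or Month), the two options correspond to:
Absolute Growth = M(t) - M(t-1)
Percentage Growth = ((M(t) - M(t-1)) / M(t-1)) * 100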
Absolute Growth Option
1. Select the Absolute Growth option and enter the details as tabulated.
The following table describes the fields in the Absolute Growth Option.
Table 59: Fields in the Absolute Growth Option and their Description
Field Description
Select the period Select the period from the drop-down list for which you want the growth to be
monitored. The available options are Year, Quarter, or Month.
2. Select the measure from the Select the Measure pane. Depending on the Information Domain you
are logged in to, the measures for that domain are displayed in the pane. Select the measure from
the pane. On selecting the measure, the growth of the measure will be calculated for the
consecutive period for a year.
Percentage Growth Option
1. Select the Percentage Growth option and enter the details as tabulated.
The following table describes the fields in the Percentage Growth Option.
Table 60: Fields in the Percentage Growth Option and their Description
Field Description
Select the period Select the period from the drop-down list for which you want the growth to be
monitored. The available options are Year, Quarter, or Month.
2. Select the measure from the Select the Measure pane. Depending on the Information Domain you
are logged in to, the measures for that domain are displayed in the pane. Select the measure from
the pane. On selecting the measure, the growth of the measure will be calculated for the
consecutive period for a year.
The Time Series type measures are time dependent. The Time Series types are:
• Aggregation type – This option computes the estimate of the periodical performance on a period-
to-date basis.
• Rolling Average – This option computes the average of the previous N values based on the given
dynamic value (N). This dynamic range could vary from a period of three months to any number of
months.
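For illustration, with N = 3 and monthly periods, the Rolling Average reported for month t is the mean of
the previous three values, as described above:
Rolling Average(t) = (M(t-1) + M(t-2) + M(t-3)) / 3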
Aggregation Type Option
1. Select the Aggregate option.
2. Select the measure from the Select the Measure pane. Depending on the Information Domain you
are logged in to, the measures for that domain are displayed in the pane.
Rolling Average Option
1. Select the Rolling Average option.
2. Enter the rolling average in the Select the number of periods for which to calculate the rolling
average field.
3. Select the measure from the Select the Measure pane. Depending on the Information Domain you
are logged in to, the measures for that domain are displayed in the pane.
The Advanced computed measures option allows you to specify a formula for computation of the
measure. In order to enter the formula, it is assumed that the user is familiar with MDB specific OLAP
functions.
There are two ways that you can enter a formula.
You can define the function/condition for a measure and/or dimension by entering the expression in the
pane. It is not essential that you select the measure/dimension and the function in the order displayed.
You can select the function and then proceed to specify the parameters, which can be either a measure or
dimension or both.
You can define it by following the procedure mentioned below:
Selecting the Measure
1. Click Insert Measure to open the Select Measure pop-up. The pop-up displays the
Measure folder. Double-click the folder to expand the list of measures under it. Depending on the
Information Domain you are logged in to, the measures for that domain are displayed.
2. Click OK to confirm the measure selection. To close the pop-up without saving the selected measure
option, click Cancel.
Selecting the Dimension
1. Click Insert Dimension to open the Select Dimension pop-up. The pop-up displays the
Dimension folder. Double-click the folder to expand the list of dimensions under it. Depending on
the Information Domain you are logged in to, the dimensions for that domain are displayed.
2. Click OK to confirm the dimension selection. To close the pop-up without saving the selected
dimension option, click Cancel.
Selecting the Function
1. Click Insert Function to open the Select Function pop-up. Double-click the Functions folder to
expand the list of functions within it. The functions available are those specific to Essbase. The
parameters for the function are displayed in the Parameters pane.
NOTE The functions displayed are based on the OLAP type and
therefore, vary for SQL OLAP and Essbase.
2. Click OK to select the function. To close the pop-up without saving the selected function option,
click Cancel.
2. In the Mapper List window, the Read Only option against the created Map would appear as Y. Now
select the defined Map and click button. The Mapper window is displayed.
3. The Save Mapping and Delete Mapping options are disabled.
4. Select the Node and click on View Mapping. The View mapping window is displayed. The Delete
button is inactive.
5. Click Close to exit the window.
NOTE Starting from the 8.1.x.x.x version, refer to MOS Note 2907369.1 for
maintainability of the module.
1. The DEFQ module will be supported on an as-is, where-is basis
for the existing features.
2. Bug fixes if any, will be reviewed and fixed based on the
criticality of the issue.
3. Nice to have features, lower severity bugs, and
enhancements will be reviewed but may not be prioritized
and fixed.
• Excel-Entity Mappings
• Excel Upload
5. Select the required Excel file to be used as the template and click button.
The columns in the selected Excel template are listed in the Select Excel Columns grid and the
database tables are listed in the Select Entities grid.
6. Enter the format in which the dates are stored in the excel sheet in the Source Date Format field.
7. Select the Apply to all Dates checkbox if you want to apply the source date format to all date fields
in the excel sheet.
8. Select the First Row is the Header checkbox, if your Excel template has a header row.
9. Select the Template Validation Required checkbox to validate whether the Excel template you use
is the same as the Excel sheet you upload in the Excel Upload window. The validation is done when
you upload the Excel sheet. An error is displayed if there is any mismatch between the Excel
template you used for mapping and the actual Excel sheet you upload.
This field is displayed only if you have selected the First Row is the Header checkbox.
10. Select the Bulk Authorization checkbox to assign the “Excel_Name” across the selected column.
For example, the selected column “v_fic_description” will have the Excel Name assigned.
11. Select Save with Authorization checkbox to authorize the data upon successful data load. The
three mandatory fields namely Maker ID, System Date, and Authorization Status are displayed in
the Select Excel Columns grid.
You need to map these fields to the corresponding columns in the Select Entities grid. The value for
Maker ID column is updated with the User ID of the user who is performing the Excel Upload. The
value for Maker Date is updated with the current System Date during which the upload is performed
and the value for Authorization Status is updated with flag 'U'. See Save with Authorization to create
a Form where the uploaded data can be authorized.
12. Select a column from the Select Excel Columns grid and select an attribute or column from the
required table from the Select Entities grid. Click Map.
13. Click Automap. The respective columns with the similar names in the Excel sheet and the database
are mapped. You need to manually map the other columns. The mapping details are displayed in
the Mapping Information grid which facilitates you to edit the details as required.
14. Click Save Mapping.
The Excel-Entity Mapping window displays the excel-database table mapping details.
In the Excel-Entity Mappings window, you can also do the following:
Click button in the Mappings Summary tool bar to View the mapping details.
Click button in the Mappings Summary tool bar to Edit the mapping details.
Click button in the Mappings Summary tool bar to Delete the mapping details.
NOTE You can download the Excel template used in the mapping by
clicking button.
5. Click Upload.
A confirmation dialog is displayed on successful upload and the excel data is uploaded to the
database table. You can click on View Log to view the log file for errors and upload status.
Forms Designer within the Data Entry Forms and Queries section facilitates you to design web based user-
friendly Forms using the pre-defined layouts. You can access DEFQ - Forms Designer by expanding Data
Management Framework and Data Entry Forms and Queries within the tree structure of LHS menu and
selecting Forms Designer.
The DEFQ - Forms Designer window displays a list of pre-defined options to create, modify, and delete
Forms. You can also assign rights and define messages. By default, the option to Create a New Form is
selected and the left pane indicates the total steps involved in the process. The available options are as
indicated below. Click on the links to view the section in detail.
• Creating a New Form
• Altering Existing Forms
• Copying Forms
• Deleting Forms
• Assigning Rights
• Message Type Maintenance
The following table describes the layouts in the DEFQ – Layout window.
Table 61: Layouts in the DEFQ – Layout window and their Description
Layout Description
Multi Column Layout It displays a single record with its columns in a grid format. You can view a
multi column layout Form without having to scroll or with minimum scrolling to view all the columns.
Figure 140: DEFQ – List of Available Tables Selection window (Step 4 of Designing Form)
NOTE You should use tables with names not longer than 25
characters. This is a limitation.
For multiple selections, you can either press Ctrl key for nonadjacent selection or SHIFT key for
adjacent selections. Click Next, and the Fields Selection window is displayed.
5. Select the fields to be joined from the Available Fields list and click . You can press Ctrl key for
multiple selections and also click to select all the listed fields. All mandatory fields are auto
selected and are indicated on the window with an asterisk (*).
Figure 142: DEFQ – Sort Fields Selection window (Step 6 of Designing Form)
You can sort the fields in the required order as intended to display in the Data Entry Form. Also, the
mandatory fields which need user input are indicated with a '*' symbol and are auto selected in the
Selected Fields pane.
Select the field from the Available Fields list and click . You can press Ctrl key for multiple
selections and also click to select all the listed fields.
(Optional) To arrange multiple fields, select Sort by Descending checkbox.
(Optional) Select the Excel Map checkbox to enable Bulk Authorization.
NOTE In case you have selected Excel Map checkbox, you need to
select “Excel Name” from the Store Field As list in the DEFQ
Field Properties window. Only on selection, the
“SelectExcelSheetName” list is displayed for authorizer in the
DEFQ - Data Entry window.
7. Click Next. The DEFQ Field Properties window is displayed with the Form details such as Field
Name, Display Name, In View, In Edit/Add, Allow Add, Store Field as, Rules, and Format Type.
Table 62: Fields in the DEFQ – Field Properties window and their Description
Field Description
In View Select either Display or Do not Display to display the field in the Form. If the field is a foreign
key field or if more than one table is selected, then the following options are available in the drop-down
list: Same Field, Alternate Display Field, and Do not Display.
In Edit/Add Specify the edit parameters by selecting from the drop-down list. The available options
depend on the type of field selected. For normal fields you can select Text Field, Text Area, Select List,
Protected Field, Read Only, and Do Not Show. For foreign key fields you can select Read Only, Select List,
and Do Not Show. For primary key fields you can select Read Only and Do Not Show. For calendar fields
you can select Calendar and Do Not Show. Note: If you choose the Select List option, you need to define
the values. For more information, refer Define List of Values.
Store field as Select the required option from the drop-down list. You can select the store format as
Normal, Sequence Generator, Maker Date, Checker Date, Created Date, Modified Date, Auth Flag, Maker id,
Maker Date, Checker id, Checker Date, Checker Remarks, Maker Remarks, and Excel Name (if Excel Map is
selected in the Sort Fields Selection window).
Rules Click Rules and specify Rules and Expressions for the selected field in the Specifying Rules and
Expressions for Data Validations window. For more information, refer to the Applying Rules section in
References.
Format Type Select the required Format type from the drop-down list depending on the field type
selected. CLOB data type is not supported.
Batch Commit Select the checkbox to group all the set of table Forms to a batch. All the Form tables are
executed along with the batch execution and, in case a Form in the table fails to execute, the entire set of
Forms is returned.
Message Details Click Message Details to define the message type for Creator and Authorizer in the
Messaging Details for a Form window. For more information, refer Define Message Details.
8. Click either Save to only save the Form details or click Save for Authorization to save the changes
with authorization. For more details, refer Save for Authorization section.
NOTE Sometimes, on clicking Save, the form does not get saved. This is
because the Java heap size setting for OFSAAI service is set too
high and web server memory setting is too low. Contact System
Administrator to modify it to the appropriate setting by viewing
the log file created in the path:
$FIC_APP_HOME/common/FICServer/logs/.
While saving, the User for Mapping - DEFQ window is displayed which facilitates you to assign user
rights to the Form. For more information, refer Assign Rights.
Display list and click to de-select. You can press Ctrl key for multiple selections and also click
or buttons to select/de-select all the listed fields.
3. Click Next. The Sort Fields Selection Window is displayed.
Sort the fields in required order as intended to display in the Form. You can choose a field from
the list and click or buttons to select/deselect. You can also click or buttons to
select/de-select all the listed fields.
Select a field and click or buttons to arrange fields in the required order.
(Optional) To arrange multiple fields, select Sort by Descending checkbox.
(Optional) Select the Excel Map checkbox to enable Bulk Authorization.
NOTE In case you have selected Excel Map checkbox, you need to
select “Excel Name” from the Store Field As list in the DEFQ
Field Properties window. Only on selection, the
“SelectExcelSheetName” list is displayed for authorizer in the
DEFQ - Data Entry window.
3. Select the required user from Available User List. You can also click or buttons to reload
previous/next set of users in the list.
4. Select the checkbox corresponding to the user permissions such as View, Add, Edit, Delete, or All
Above. You must give View permission in order to allow users to Edit or Delete a Form.
5. Select Authorize or Auto-Authorize checkbox as required.
The Authorize and Auto-Authorize options are applicable for all the forms that have been saved
with the Authorize option. The Auto-Authorize feature for records is applicable in scenarios where
the Creator and Authorizer are the same. If a user has Add and Auto-Authorize permissions, the
data entered by the user is auto authorized and the data will be in Authorized status. In case of
normal Authorization, the Record added by the creator has to be authorized by a different user who
has Authorize permissions.
6. Select the Show Data Created by Current Users Only checkbox if you want the current user to
view only the data that they have created.
7. Click User Value Map to map users to the form based on data filter.
8. Click Save Access Rights. A confirmation dialog is displayed after saving and the user is added to
the Assigned User List.
NOTE The data type of field/column you select to define filter should
be NUMBER or VARCHAR. The users mapped to the DEFQ form
whose assign rights are authorized through “Forms
Authorization” can save the filter.
There are two types of filters, Global Data Filter and Custom Data Filter.
Global Data Filter: In this filter, the value will be fetched from the DEFQ_GLOBAL_VALUES table of the
Atomic schema, which is automatically created during information domain creation. The table needs to be
populated manually through excel upload. The table contains all the entities and the users mapped to
them.
Custom Data Filter: This filter enables the user to provide a custom filter for the form you design. In this
filter, you should enter values for all the users mapped to the form manually.
To set a Data Filter:
1. Click User Value Map in the DEFQ- Assign Rights window.
The User Value Map window is displayed.
2. Select the Global Data Filter option to filter the data globally.
Select the field based on which the data should be filtered and displayed for the user, from the
Fields to Display section.
NOTE Normally, the user can access all the data in the table when the
DEFQ form is created. When this filter is applied, only the data
mapped to the user is displayed.
3. Select the Custom Data Filter to provide a custom filter for a specific DEFQ Form.
Select the User ID from the drop-down list and enter Values for that user. It is mandatory to enter
values for every user mapped to the form.
4. Click Save.
2. Select the message category from the Message Type drop-down list.
3. Edit the message details by doing the following:
The defined Message Subject and Message Content is auto populated. Edit the details as
required.
Add or remove the defined recipients. Double-click on the required member to toggle between
Available and Mapped Recipients list.
Forms Authorization within the Data Entry Forms and Queries section of Infrastructure system facilitates
you to view and authorize / approve any changes that are made to the privileges assigned to a user in a
particular Form.
You need to have FRMAUTH function role mapped to access Forms Authorization window.
You can access Forms Authorization window from the left hand side (LHS) menu of Infrastructure Home
Page. Click “+” and expand the Data Model Management and select Data Entry Forms and Queries.
The Forms Authorization window displays the list of privileges assigned to a user in different Forms. These
privileges include create, view, modify, delete, authorize, and auto-authorize records. The Forms
Authorization window allows you to select a user from the drop-down list adjacent to User ID field. This
field displays the User ID’s associated with the selected Information Domain.
On selecting a user from the User ID field, the columns in Forms Authorization window lists the grants
requested for that user on different Forms as listed below.
The following table describes the columns in the Forms Authorization window.
Table 63: Column Names in the Forms Authorization window and their Description
Application Lists the specific application to which the Form has been assigned.
Access Rights Before Displays the available Right Requests for the selected user in the Form.
Note: For new Form, the column remains blank.
Access Rights After Displays the Right Requests raised for authorization.
DV - DEFQ VIEW
DA - DEFQ ADD
DE - DEFQ EDIT
DD - DEFQ DELETE
A - AUTHORIZE
DU - AUTO AUTHORIZE
S - SHOW DATA CREATED BY CURRENT USER ONLY
Created By Displays the USER ID from which the Right Request has been created.
Created Date Displays the Date on which the Right Request has been created.
Last Saved By Displays the USER ID from which the previous Right Request change has
been saved.
Last Saved Date Displays the Date on which the previous Right Request change has been
saved.
Checked By Displays the USER ID from which the Right Request has been authorized.
Checked Date Displays the Date on which the Right Request has been authorized.
Data Entry within the Data Entry Forms and Queries section of Infrastructure system facilitates you to
view, add, edit, copy, and delete data using the various layout formats and Authorize/Re-authorize data
records based on the permissions defined during the Form creation.
You can use the Search option to query the records for specific data and also export the data in Microsoft
Excel format for reference. You can launch multiple instances of Data Entry window using the URL to
search and update records simultaneously.
You can access DEFQ - Data Entry by expanding Data Entry Forms and Queries section of Data Model
Management module within the tree structure of LHS menu.
The DEFQ - Data Entry window displays the list of Data Entry Forms and Query Forms mapped to the
logged-in user in the LHS menu. You can select the required Form to view the details. In the DEFQ - Data
Entry window, you can do the following:
• Viewing Form Details
• Editing Form Details
• Adding Form Data
• Authorizing Records
• Exporting Form Data
• Copying Form Data
• Deleting Form Details
NOTE The Roll Back option can be used only for authorized records
i.e. after the records are edited and saved, you can roll
back/undo the changes in view mode.
The following table describes the Layouts in the DEFQ – Data Entry window.
Table 64: Layouts in the DEFQ – Data Entry window and their Description
Layout Description
Single Record To view a single record's details at any given point. You can use the navigation buttons to
view the next record in the table.
Editable View To view and edit a single record. A list of five rows/records is displayed by default, and
the same can be changed by entering the required number in Display Rows. You need to select the
required record from the list to view/edit and click Save to update the changes.
Grid (Default) To view all the records in a list. A list of five rows/records is displayed by default, and
the same can be changed by entering the required number in Display Rows. You can click on the column
header to alphabetically sort the list of records in the table.
Multi column To view all the columns of a selected record. This layout enables you to view a record
without having to scroll or with minimum scrolling to view all the columns.
Wrapped rows To view all the rows of a selected record. This layout enables you to view a wrapping
row easily without having to scroll horizontally to view the columns.
1. Click .
The search fields are displayed.
2. Select Field Name from the drop-down list.
3. Enter the value/data in the Search field.
4. Click Go.
The search results are displayed in the list.
To perform an Advanced search in the DEFQ - Data Entry window:
2. Select the required Parentheses/Join, Field, Operator from the drop-down list and enter the Value
as required to query the Form data.
3. Click GO.
The results are displayed with the field names containing the searched data.
2. Select a record to be edited and click . The editable fields are enabled.
3. Enter/update the required details.
4. Click Save and update the changes.
5. If required, you can click Reset to undo the changes and return to original field values.
If you have edited an Authorized record, the same is again marked for authorization. Once the
record is updated, a modified status flag is set, and only these record changes can be rolled back.
The Roll Back option is supported in view mode only for authorized records, i.e. records which are
updated and saved.
The status of each record in the table is indicated with an “AuthFlag” as indicated below:
• Unauthorized records are displayed with the status flag “U”
• Authorized records are displayed with the status flag “A”.
• Rejected records are displayed with the status flag “R”.
• Modified records are displayed with the status flag “M”.
• Deleted records are displayed with the status flag “D”.
• If an Unauthorized record is on Hold, the status flag is displayed as “H”.
• If a Modified record is on Hold, the status flag is displayed as “X”.
• If a Deleted record is on Hold, the status flag is displayed as “Z”.
To Authorize Data in the DEFQ - Data Entry window:
You can also do a Bulk Authorization if Excel Map is selected in the Sort Fields Selection window.
Select the mapped Excel Name from the “SelectExcelSheetName” drop-down list.
The DEFQ - Data Entry window displays only those records which are uploaded through the selected
Excel sheet. Click Authorize Excel.
A confirmation dialog is displayed. Click OK.
You can Reject / Hold a record by doing the following:
• To Reject a record, select the checkbox in the “Rej” column adjacent to the required record and
click Save.
A confirmation dialog is displayed. Click OK.
You can also Reject records in Bulk Mode if Excel Map is selected in the Sort Fields Selection
window. Select the mapped Excel Name from the “SelectExcelSheetName” drop-down list.
The DEFQ - Data Entry window displays only those records which are uploaded through the selected
Excel sheet. Click Reject Excel. A confirmation dialog is displayed. Click OK.
• To Hold a record and to authorize or reject at a later point, select the checkbox in the “Hold”
column adjacent to the required record and click Save.
In the DEFQ - Data Entry window, you can also do the following:
• Click Authorize All and click Save to authorize all the records displayed in the current page.
• Click Reject All and click Save to reject all the records displayed in the current page.
• Click Hold All and click Save to hold all the records displayed in the current page.
If you have enabled the option to send alerts to the Creator of the Form in Message Type Maintenance
window, a message is sent indicating that the records are authorized/rejected/put-on-hold.
The list of available records with the Authorization status is displayed. If there are “no records” for
Authorization in the selected Information Domain, an alert message is displayed.
2. Click Reauthorize Records. The DEFQ Authorization Window is displayed.
3. Select the “Auth” checkbox adjacent to the required record.
4. Click Save. On re-authorization, a confirmation message is displayed.
You can also select the checkbox adjacent to “Rej” to reject the record, or “Hold” to re-authorize or
reject at a later point. A message is sent to the Form creator indicating that records are
authorized/rejected/put-on-hold.
1. In the View mode, select the checkbox adjacent to the record(s) which you want to export.
6.4.9 References
This section of the document consists of information related to intermediate actions that need to be
performed while completing a task. The procedures are common to all the sections and are referenced
wherever required. You can refer to the following sections based on your need.
1. Dimension Table Selection: Enter the Root Name and select the Table. Click Next.
2. Fields Selection: Select required Fields to Display from Available fields and click Next.
3. Dimension Node Selection: Select Field Nodes from Available fields and click Next.
4. Select Dimensional Tree Nodes for the selected fields and click Next.
5. DEFQ Field Properties window: Specify the required details. For more information, refer DEFQ
Field Properties.
NOTE If the option is not selected, a single mail is sent for the entire
batch. Message details such as recipients, subject, and contents
are fetched from the metadata.
2. Select the required Available Message Types from the list and click .
3. Select the Message Type from the drop-down list based on specific action.
4. Select Specific Messages Required to add a specific message.
5. Select Available Fields for Subject, Content, & Recipients from the list and click .
6. Click Save and save the messaging details. You also need to select Save with Authorization in the
DEFQ Field Properties window for the messages to be functional.
To authorize the uploaded data, you need to create a Form in DEFQ with the Save with Authorization
checkbox selected.
1. Before any DEFQ Form is created to authorize the data, the underlying table in the data model
needs to have the following columns added to its table structure (a sketch of the corresponding
ALTER TABLE statement is shown after the list). You need to perform a data model upload
to have the new structures reflected in the application.
Columns required:
V_MAKER_ID VARCHAR2(20),
V_CHECKER_ID VARCHAR2(20),
D_MAKER_DATE DATE,
D_CHECKER_DATE DATE,
F_AUTHFLAG VARCHAR2(1),
V_MAKER_REMARKS VARCHAR2(1000),
V_CHECKER_REMARKS VARCHAR2(1000)
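For illustration only, the columns listed above could be added with a statement of the following form,
where YOUR_TABLE is a placeholder for the actual underlying table; in practice, such changes are typically
made in the data model and reflected through the data model upload described above:
ALTER TABLE YOUR_TABLE ADD (
  V_MAKER_ID        VARCHAR2(20),
  V_CHECKER_ID      VARCHAR2(20),
  D_MAKER_DATE      DATE,
  D_CHECKER_DATE    DATE,
  F_AUTHFLAG        VARCHAR2(1),
  V_MAKER_REMARKS   VARCHAR2(1000),
  V_CHECKER_REMARKS VARCHAR2(1000)
);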
2. Navigate to Create a New Form in the Forms Designer section and complete the design steps up to
Step 6. From the DEFQ Field Properties window explained in step 7, select the appropriate values as
listed below for Store Field As depending on the columns selected:
V_MAKER_ID - MakerID
V_CHECKER_ID – CheckerID
Topics:
• Prerequisites
• Creating Forms Definition
• Approving or Rejecting Forms Definition
• Data Entry
7.1 Prerequisites
The following is the prerequisite to access and perform functions in the DMI user interface:
• Mapping DMI Menu into Application Menu Tree
• User Role Mapping and Access Rights
NOTE A user who creates a Form would not be able to perform Data
Entry even if the same user provides the required permissions
while creating a Form.
For example, consider if the user AAIUSER is creating a Form
to modify the product data. If this user provides permission to
perform data entry while creating a Form, the same user would
not be able to perform this operation from Data Entry and the
following error is displayed:
“Maker cannot Approve/Reject the Record”
You can also auto-approve Forms that are created using the Designer option. When you auto-approve the
Form, the user performing the Data Entry does not have to send the entries for an approval cycle.
Field Description
Auto Map Entities This option is enabled when the Type is selected as Excel Upload. Select this box to
auto map the attributes in the Excel file with the attributes in the Entity Table.
6. Click Drag and Drop, and select the Excel file to update the required table.
TIP You can also drag and drop the required excel file in the Drag
and Drop Field.
9. Select Enable Bulk Authorization if you want to enable the bulk authorization of all the records
when you edit an approved Form from Data Entry.
10. Click Apply.
The Mapped Attributes Tab is displayed.
11. Click the table in the Entity Name field.
The source attributes from the table and the mapped attributes from the Excel file are displayed.
If the selected table has Child tables, the Child tables that you select from the Mapped Entities tab
are also displayed in the Attributes tab. You can configure the attributes for the master table and
its child tables here.
12. Click the required mapping in the Override Mapping Column and enter the required attribute name
if you want to change the default mapping.
13. Select Participate in Data Security if you want to configure a specific condition. The condition that
you configure is applicable when a user performs the data entry for the table records for each
approved Forms Definition from the Data Entry Page.
For example, consider that you configure the condition DIM_ACCOUNT_COUNTRY_NAME =
‘INDIA’ for the reference table DIM_Account. When a user performs the data entry for this
Forms Definition from the Forms Definition - Summary Page and enters a country name other
than ‘INDIA’, the record gets rejected by the application when another user approves this record.
Complete the following steps if you want to configure Data Security for the Forms Definition:
a. Select the check box in the Participate in Data Security Column.
e. Click Apply.
The filter condition is applied to the selected mapping.
14. Click User Security to select the user or user groups who can perform data entry to maintain the
data in the table.
The User Security Pane is displayed.
15. Enter the required user group or user to assign permissions from the Map Users / Groups Field.
When you select the user group or user, the permissions for each approved Forms Definition are
displayed. These permissions are the actions that the selected user group or user can perform while
performing Data Entry.
16. Select the following permissions in the Map Users / Groups Pane that the user group or user can
perform for each Forms Definition from the Data Entry Page:
Option Description
Duration From Optional. Select the start date for which the permissions
are available to the user or user group.
Duration To Optional. Select the end date for which the permissions
are available to the user or user group.
NOTE If you select a user group for User Security, you can view the
users mapped to that particular group by clicking the icon.
17. Select DMI_WorkFlow from the Workflow drop-down list if you want the process of data entry to
go through the Process Modelling Framework (PMF) workflow that you have already created.
For information on PMF workflows, see the OFSAAI PMF Orchestration Guide.
18. Click Save as Draft if you want to save the Forms Definition in draft format.
19. Click Submit if you want to submit the Forms Definition for approval.
The Forms Definition is created and is displayed in Forms Definition - Summary in Awaiting
status.
NOTE You can select up to six Child tables only for each Master table.
7. Select Enable Bulk Authorization, if you want to enable the bulk authorization of records while
performing data entry.
8. Click Apply.
The Attributes Tab is displayed.
9. Click the table name in the View Name field.
The attributes in the entity table are displayed.
If your table has Child tables, the Child tables that you select from the Entities tab also gets
displayed in the Attributes tab.
10. Select the attributes for which you want to modify the data from the Attribute Name field.
11. Select Participate in Data Security if you want to configure a specific condition. The condition that
you configure is applicable when a user performs the data entry for the table records for each
approved Forms Definition from the Data Entry Page.
For example, consider that you configure the condition DIM_ACCOUNT_COUNTRY_NAME =
‘INDIA’ for the reference table DIM_Account. When a user performs the data entry for this
Forms Definition from the Forms Definition - Summary Page and enters a country name
other than ‘INDIA’, the record gets rejected by the application when another user approves this
record.
Complete the following steps if you want to configure Data Security for the Forms Definition:
a. Select the check box in the Participate in Data Security column against the required mapping.
e. Click Apply.
The filter condition is applied to the selected mapping.
12. Complete the following steps if you want to add filters to the Forms Definition:
c. Click Apply.
The filter is displayed in the Filter Condition Field.
For example, consider the table dim_product_book that has the column v_product_code. The
column has values ranging from 1 to 500. If you want to view or modify the records that have
values less than 118 for the column v_product_code, you can create an expression to that effect
using the Filter Condition pop-up.
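A sketch of the expression implied by this example follows; the exact syntax offered by the Filter
Condition pop-up may differ:
v_product_code < 118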
15. Click User Security to select the user or user groups who can perform data entry to maintain the
data in the table.
The User Security Pane is displayed.
16. Enter the required user group or user to assign permissions from the Map Users / Groups field.
When you select the user group or user, the permissions for each approved Forms Definition are
displayed. These permissions are the actions that the selected user group or user can perform while
performing Data Entry.
17. Select the following permissions in the Map Users / Groups Pane that the user group or user can
perform for each Forms Definition from the Data Entry Page:
Permission Description
Duration From Optional. Select the start date for which the permissions
are available to the user or user group.
Duration To Optional. Select the end date for which the permissions are
available to the user or user group.
NOTE If you select a user group for User Security, you can view the
users mapped to that particular group by clicking the icon.
18. Select DMI_WorkFlow from the Workflow drop-down list if you want the process of data entry to
go through the Process Modelling Framework (PMF) workflow that you have already created.
For information on PMF workflows, see the OFSAAI PMF Orchestration Guide.
19. Select Auto Approve if you do not want the Forms Definition to go through the PMF workflow. When
you select this option, the Forms Definition is automatically approved from Forms Definition –
Summary and is available for Data Entry. A user with the required role can then perform the data
entry without the need for an approval process. For more information, see Data Entry for Forms
Created using the Auto-Approve Option.
20. Click Save as Draft if you want to save the Forms Definition in draft format.
21. Click Submit if you want to submit the Forms Definition for approval.
The Forms Definition is created and is displayed in the Forms Definition - Summary Page in
Awaiting status. If you have auto-approved it, the Form is ready for Data Entry.
NOTE:
Only values that are already seeded in the Database table, are displayed in the Placeholder drop-down list.
NOTE:
For Language Placeholder the default locale language is displayed and cannot be modified.
3. Click Add to add a new Filter expression. You can add multiple Filter expressions to the same filter.
The filter is added to the list of filters.
Mouse over the placeholder filter to view more details about the filter.
NOTE:
Data Security information must be configured for each attribute name, separately.
When you select the user group or user, the permissions for each approved Forms Definition are
displayed. These permissions are the actions that the selected user group or user can perform while
performing Data Entry.
Field Description
Duration From Optional. Select the start date for which the permissions are available to
the user or user group.
Duration To Optional. Select the end date for which the permissions are available to
the user or user group.
NOTE:
If you select a user group for User Security, you can view the users mapped to that group by clicking the Users icon.
1. Click Menu Button in the Forms Definition that is in Draft status, and then select Edit.
3. Click Menu Button in the Forms Definition that is in Awaiting status, and then click Approve.
The Configure Page is displayed.
4. Click Approve and then enter the required description for the approval in the Comments Field.
5. Click Submit.
The Forms Definition is approved and is displayed in the Data Entry Page as an entry.
1. Click Menu Button in the Forms Definition that is in Awaiting status, and then click Reject.
The Configure Page is displayed.
2. Click Reject and then enter the required description for the rejection in the Comments Field.
3. Click Submit.
The Forms Definition is rejected and is displayed in Forms Definition – Summary in Draft status.
You can then edit the Forms Definition in draft status and submit it for approval again.
For more information on editing a Forms Definition, see Editing a Forms Definition.
7.8.2 Data Entry for Forms created without Auto-Approve option
7.8.2.1 Data Entry for Forms created using the Designer Option
If the Forms Definition is created by using the designer option, the user with the necessary role can enter
the values for the table records as per the configuration in the Forms Definition. This user can also add or
delete records. These records are then submitted for approval to another user with the necessary role.
1. Click Menu button in the required Forms Definition from the Data Entry Page.
2. Click Edit.
The Entity Details Page is displayed.
3. Select Ready from the Status drop-down list.
The entity records that are ready for entering data are displayed.
4. Click the Edit icon on the record for which you want to enter data.
An edit pane is displayed.
5. Enter the values in the attributes that you want to modify, and click OK.
The status of the modified record is changed from Ready to Draft. You can repeat the steps for all
the records for which the data needs to be entered.
7. Select the required record and select Delete if you want to remove records that are in draft
status.
8. Click the modified record in draft status, and then click Send for Approval.
The record is sent for approval and is changed to Awaiting status. A user with the necessary role can
approve these records. For more information, see Approving and Rejecting Records after Data Entry.
ATTENTION If the user has configured the Participate In Data Security option
while creating a Forms Definition, you must enter the value as per
the configured condition. If you enter a value that does not meet
the condition, the record is rejected by the application and the
approval fails. You can view the details of the rejection by
using the Audit Trail option for each record.
For information on the Participate In Data Security option, see
Creating Forms Definition Using Designer.
1. Click Menu button in the required Forms Definition from the Data Entry Page.
2. Click Edit.
The Entity Details Page is displayed.
3. Select Draft from the Status drop-down list.
The entity records that are ready for entering data are displayed.
4. Click the Actions button corresponding to the record for which you need to add, delete or download
a supporting document.
5. Select Attach to open the Supporting Documents pane.
6. Click Drag and Drop to attach a file.
You can attach only specific file formats. For more information, refer to Configure Document
Upload File Formats and Size.
A confirmation message is displayed after the file is attached, and the new file is added to the
supporting documents list.
7. Click the Comments button corresponding to a document to add valid reasons and the document
details.
8. Click the Download button to download the supporting document.
9. Click the Delete button to remove the supporting document.
7.8.2.3 Data Entry for Forms created by using the Excel option
When a Forms Definition created by using an Excel file is approved from Forms Definition – Summary,
the table records in the selected table are modified by the data in the Excel file. These records are in
Awaiting status for the approved Forms Definition in Data Entry. You can verify the records modified by
the Excel file and approve them if you are assigned the necessary role. If the records modified
by the Excel file are incorrect, you can reject the records. The status of the rejected records is changed to
Draft. A user with the necessary role can edit the records in draft status and submit them for approval
again.
• To approve records, see Approving a Record.
• To reject records, see Rejecting a Record.
• To edit a record in draft status, see Editing a Rejected Record.
1. Click Menu button in the required Forms Definition from Data Entry.
2. Click Edit.
The Entity Details Page is displayed.
3. Select Awaiting from the Status drop-down list.
The entity records that are waiting for final approval are displayed.
5. Enter the required comment in the Comments field, and then click Approve.
A unique code is sent to the registered email ID of the authorized user who has logged in to approve
the valid records. The unique code is valid only for 90 seconds. If you have not received the unique
code or the unique code is invalid, click Resend in the unique code entry page, to receive a new
code.
6. Enter the unique code and click Submit.
The record is approved successfully with the values from the Excel file.
You can reject the modified records if they are incorrect and if you have the necessary role
assigned to you. A different user with the necessary role can then modify the records again and
submit them for approval.
Before you Begin
The role DMIDATREJ (Data Entry Reject) must be assigned to you if you want to reject a record for Data
Entry.
Procedure
To reject a record in Awaiting status, perform the following steps:
1. Click Menu button in the required Forms Definition from the Data Reporting - Data Entry Page.
2. Click Edit.
The Entity Details Page is displayed. The modified records that are awaiting final approval
are displayed here.
3. Enter the required comment in the Comments field, and then click Reject.
A unique code is sent to the registered email ID of the authorized user who has logged in to approve
the valid records. The unique code is valid only for 90 seconds. If you have not received the unique
code or the unique code is invalid, click Resend in the unique code entry page, to receive a new
code.
4. Enter the unique code and click Submit.
The record is rejected, and the status is changed to Draft. A user with the necessary role can now
edit the record.
You can edit the records that are in draft status and send them for approval to the user with the necessary
role.
Before you Begin
The role DMIDATWRTE (Data Entry Write) must be assigned to you if you want to edit a record for Data
Entry.
Procedure
To edit a record, perform the following steps:
1. Select Draft from the Status drop-down list.
ATTENTION If the user has configured the Participate In Data Security option
while creating a Forms Definition, you must enter the value as per
the configured condition. If an incorrect value is entered, the record
is rejected by the application and the approval fails. You can
view the details of the rejection by using the Audit Trail option
for each record.
For information on the Participate in Data Security option, see
Creating Forms Definition Using Excel Upload.
7.8.3 Data Entry for Forms Created using the Auto-Approve Option
You can perform the Data Entry for auto-approved Forms without the need for approval from a
different user. For more information on auto approving Forms, see Creating Forms Definition.
Forms that are auto-approved do not have the Approve and Reject buttons in Data Entry.
If the required permission and role are assigned to you, perform the data entry, and use
the Submit with Auto Approve option to submit and modify the table data.
You can also drag and scale the white bar to view the history of changes based on months, days, or
hours.
You can select all the records together by selecting the check box against the Status Field. You must
enable Bulk Authorization while creating a Forms Definition for this option to appear in Data Entry.
For more information on this, see Creating Forms Definition Using Excel Upload or Creating Forms
Definition Using Designer.
2. Click the Attribute Selection tab to review the values and the filters, and modify them if required. You
can also use the default values for export.
3. Click Data Preview to view the form based on the selected table, columns, and the set filter
attributes.
4. To export the report, complete one of the following steps:
Click Export CSV to export the report in CSV format.
Select the File Format as CSV or JSON and click Export.
A confirmation message is displayed after the export is completed, and the Data Entry
Summary is displayed.
5. To download an exported report, click Action and click Status.
The Data Exporter Status page with the list of all the reports that are exported is displayed.
Click Download to save the report to the local directory.
Click Download Link to copy the link. You can paste the link in a Web browser and
download the CSV report to the local directory.
Click Delete to delete the exported report.
• The Folder selector window behavior is explained in the User Scope section.
Hierarchy Member Security
• For each information domain, a default security mapper can be set. Based on this mapper
definition, the Hierarchy Browser window will be displayed.
• In the Hierarchy Browser window, the members that are mapped to your user group are enabled
and can be used. However, you can view the members that are not mapped, but you cannot use
them because they are disabled.
• If a child hierarchy is mapped and the parent is not mapped to your user group, the parent will
be displayed as a disabled node.
• For all AMHM hierarchies, the corresponding Business Hierarchy is created implicitly. Thus, you
can view and use AMHM hierarchies in the RRF framework, provided they are mapped to your
user group.
• Hierarchy member security is applied only for Source hierarchies. No security is used for Target
hierarchies, Rule Condition, Run Condition, and Process Condition.
8.2 Rule
Financial institutions require constant monitoring and measurement of risk in order to conform to
prevalent regulatory and supervisory standards. Such measurement often entails significant
computations and validations with an organization’s data. Data must be transformed to support such
measurements and calculations. The data transformation is achieved through a set of defined rules.
The Rules option in the Rules Run Framework provides a framework that facilitates the definition and
maintenance of a transformation. The metadata abstraction layer is used in the definition of rules
where the user is permitted to re-classify the attributes in the data warehouse model, thus
transforming the data. The underlying metadata objects such as Hierarchies, which are non-large or
non-list, Datasets, and Business Processors drive the Rule functionality. An authorizer must approve
the actions like creation, modification, copying, and deletion of a Rule for them to be effective.
The Rule window displays the rules created in the current Information Domain with the metadata
details such as Code, Name, Description, Type, Folder, Dataset, Version, and Active status. For more
information on how object access is restricted, see Object Security.
You can search for specific Rules based on Code, Name, Folder, Dataset, Version, Active status, or
Type. The Folder drop-down list displays all public folders, shared folders to which your user group is
mapped, and Private folders for which you are the owner. The Pagination option helps you to manage
the view of existing Rules within the system. You can also click the Code, Name, Description, Type, Folder,
Dataset, Version, or Active column headers to sort the Rules in the List grid in either ascending or
descending order.
The Roles mapped for the Rule module are Rule Access, Rule Advanced, Rule Authorize, Rule Read
Only, Rule Write, and Rule Phantom. Based on the roles mapped to your user group, you can access
various screens in the Rule module. For more information, see Appendix A.
Component Description
Dataset This is a set of tables that are joined together by keys. A dataset must have at
least one FACT table. The values in one or more columns of the FACT tables
within a dataset are transformed with a new value.
Source This component determines the basis on which a record set within the dataset
is classified. The classification is driven by a combination of members of one
or more hierarchies. A hierarchy is based on a specific column of an
underlying table in the data warehouse model. The table on which the
hierarchy is defined must be part of the selected dataset. One or more
hierarchies can participate as a source as long as the underlying tables on
which they are defined, belong to the selected dataset.
Target This component determines the column in the data warehouse model that will
be impacted by an update. It also encapsulates the business logic for the
update. The identification of the business logic can vary depending on the
type of rule that is being defined.
Mapping This operation classifies the final record set of the target that is to be updated
into multiple sections. It also encapsulates the update logic for each section.
The logic for the update can vary depending on the hierarchy
member/business processor used. The logic is defined through the selection
of members from an intersection of a combination of source members with
target members.
Node Identifier This is a property of a hierarchy member. In a Rule definition, the members of
a hierarchy that cannot participate in a mapping operation are target
members, whose node identifiers identify them to be an ‘Others’ node, ‘Non-
Leaf’ node or those defined with a range expression. Source members, whose
node identifiers identify them to be ‘Non-Leaf’ nodes, can also be mapped.
For more information on Hierarchy properties, see Defining Business
Hierarchies.
1. Click New button from the toolbar in the Rule window. The Rule Definition (New Mode)
window is displayed.
2. From the Linked to pane, click in the Folder field. The Folder Selector dialog is displayed.
The folders that are mapped to your user group are displayed.
a. Select the checkbox adjacent to the required folder. Click OK.
b. Click New from the List toolbar to create a new folder/segment. For more information,
see Segment Maintenance.
3. From the Linked to pane, click in the Dataset field. The Dataset Selector dialog is displayed
with the list of datasets available under the selected information domain.
a. Select the checkbox adjacent to the required Dataset name and click OK.
Table 70: Field Names in the Master Information pane and their Descriptions
Code Enter a valid code for the rule. Ensure that the rule code is alphanumeric
with a maximum of 30 characters in length and there are no special
characters except underscore “_”.
Name Enter a valid name for the rule. Ensure that the Rule Name is alphanumeric
and does not contain any of the following special characters: #, %, &, +, ", and ~.
Version By default, the version field is displayed as <<NA>> for the new rule being
created. Once the rule definition is saved, an appropriate version is assigned
as either -1 or 0 depending on the authorization permissions. For more
information, see Rule Definition Versioning.
Active By default, the Active field is displayed as <<NA>> for the new rule being
created. Once the rule definition is saved, the status is set to Yes if you are
an Authorizer creating the rule, or No if the created rule needs to be
authorized by an Authorizer.
Type Select the Type based on which you want to create the rule from the
drop-down list. The options are Computation and Classification.
5. Click Properties in the Master information pane to edit the properties of the Rule definition. The
Properties window is displayed.
Data in the Query Optimization Settings pane is derived from the global properties (if defined)
in the Optimization tab of the System Configuration > Configuration window. However, some
options defined in Global Preferences take precedence over the Rule-level properties that you define here.
The following table describes the fields in the Query Optimization Settings pane.
Table 71: Field Names in the Query Optimization Settings pane and their Descriptions
Properties
Last Operation Type By default, this field displays the last change done to the Rule
definition. While creating a Rule, this field displays the operation
type as Created.
Pre processing
Pre Built Flag This field refers to the pre-compiled rules that are executed with
the query stored in the database. While defining a rule, you can
use the Pre Built Flag to speed up the rule execution process by
making use of existing technical metadata details, so that the
rule query is not rebuilt again during Rule execution.
Select the required option from the drop-down list.
By default, the Pre Built Flag status is set to No. This indicates that the
query statement is formed dynamically by retrieving the technical
metadata details.
If the Pre Built Flag status is set to Yes, then the relevant metadata
details required to form the rule query are stored in the database on
saving the rule definition. When this rule is executed, the database
is accessed to form the rule query based on the stored metadata
details, thus ensuring performance enhancement during rule
execution. For more information, see Significance of Pre-Built Flag.
Merge Hints Specify the SQL Hint that can be used to optimize the Merge Query.
For example, “/*+ ALL_ROWS */”
In a Rule execution, the Merge Query formed using the definition-level
Merge Hint takes precedence over the Global Merge Hint parameters defined
in the Optimization tab of the System Configuration > Configuration
window. If the definition-level Merge Hint is empty or null, the
Global Merge Hint (if defined) is included in the query.
Select Hints Specify the SQL Hint that can be used to optimize the Merge Query by
selecting the specified query.
For example, “SELECT /*+ IS_PARALLEL */”
In a Rule execution, the Merge Query formed using the definition-level
Select Hint takes precedence over the Global Select Hint parameters defined
in the Optimization tab of the System Configuration > Configuration
window. If the definition-level Select Hint is empty or null, the
Global Select Hint (if defined) is included in the query.
6. Click OK. The properties are saved for the current Rule definition.
NOTE In order to access the Filter Selector window and to select the
pre-defined filters, you must have the FILTERRULE function
mapped to your user role.
1. Click Selector from the List grid and select Filter. The Filter Selector window is
displayed.
For Hierarchy and Data Element Filters, the List pane of the Filter Selector window displays
all members based on the selected Information Domain and Dataset. Filtering based on Dataset
is not supported for other Filters such as Group, Hierarchy, and Attribute Filters.
2. Select any of the following filters from the drop-down list in the Search in pane:
The following table describes the Member Types in the Search pane.
Table 72: Member Types in the Search pane and their Descriptions
Hierarchy Hierarchy refers to the defined Business Hierarchies and lists all the UAM
Hierarchies (including UAM hierarchies implicitly created for AMHM hierarchies)
pertaining to the selected dataset.
Filter-Data Element Data Element Filter is a stored rule that expresses a set of constraints. Only
columns that match the data type of your Data Element selection are offered in
the Data Element drop-down list.
Filter-Hierarchy Hierarchy Filter allows you to utilize rollup nodes within a Hierarchy to help
you exclude (filter out) or include data within an OFSAA rule.
Filter-Group Group Filters can be used to combine multiple Data Element Filters with a
logical "AND".
Filter-Attribute Attribute Filters are created using defined Attributes. Attribute filters facilitate
you to filter on one or more Dimension Type Attributes.
In the Filter Selector window, you can perform the following actions:
To search based on a specific member type, select it from the drop-down list and click .
You can also modify your search criteria by specifying the nearest keyword in the like field.
NOTE The re-ordering of hierarchies does not affect the resulting SQL
query.
1. Click Selector button from the List grid and select Source. The Hierarchy Selector
window is displayed.
The LHS pane of the Hierarchy Selector window displays the available hierarchies under the
selected Information Domain and Dataset.
2. Select the checkbox adjacent to the Hierarchies you want to select as Source.
Select the hierarchy and click or button to re-arrange the order of hierarchies.
1. Click Selector from the List grid and select Target. The Measure Selector / Hierarchy
Selector window is displayed.
The Measure Selector and Hierarchy Selector windows are displayed depending on the type of
the Rule you have selected, that is, the Computation Rule or the Classification Rule, respectively.
The LHS pane of the Measure Selector / Hierarchy Selector window displays the available
Measures / Hierarchies under the selected Information Domain and Dataset.
2. Select the checkbox(es) adjacent to the members you want to select as Target.
3. Click to move the selected measures to the Selected Measures / Selected Hierarchies pane.
Click to remove selected measures from the Selected Measures / Selected Hierarchies
pane.
4. Click OK. The selected members are listed in the Rule Definition (New Mode) window.
In the List grid you can also:
From the Rule Condition grid, you can apply conditions for each of the BMM hierarchy filters.
NOTE In the case of Data Element, Group, or Hierarchy filters, you can
only view the SQL query.
To apply a condition for a BMM hierarchy filter and view the SQL query in the Rule Condition grid:
1. Click button adjacent to the filter details. The Hierarchy Browser window is displayed.
You can select the (pagination) icon to view more options under the available members.
2. Select a member/node and click to select the same. Click to select the member as Self,
Self & Descendants, Self & Children, Parent, Siblings, Children, Descendants, or Last
Descendants. For more information, see Hierarchical Member Selection Modes.
In the Hierarchy Browser window you can also:
Click to focus only on the selected branch. The Available Values pane shows the members of the selected branch only.
Click to display member's numeric codes on the right. The icon changes to .
Click to display member's numeric codes on the left. The icon changes to .
Click to show only member names. This is the default view. The icon changes to .
Click to display member's alphanumeric codes on the right. The icon changes to .
Click to display member's alphanumeric codes on the left. The icon changes to .
Click to display only member names. This is the default view. The icon changes to .
Select a member and click or to re-arrange the members in the Selected Values
pane.
Select a member and click to move it to the top or click to move it to the
bottom.
Click to launch the Search panel. Here you can search based on Dimension Member
Numeric Code, Dimension Member Name or Dimension Member Alphanumeric Code.
You can also search in the grid based on member name using the Search field.
3. Click to view the filter details. The Preview SQL Query window is displayed with the
resultant SQL query.
8.2.2.5 Select Hierarchy Members of Source Hierarchy and Move Source to Slicer
The selected Source and Target Hierarchies are displayed under the Combination Mapper pane. You
can move the source Hierarchies from the Combination Mapper pane to Slicer.
To move a source Hierarchy from the Combination Mapper pane to the Slicer pane:
1. Click the Hierarchy member and drag it to the Slicer pane.
2. Click to select the members of a Hierarchy. The Hierarchy Browser window is displayed.
Whenever a Source/Target hierarchy is selected, by default, the root node will appear in the
Selected Members pane without checking hierarchy member security.
NOTE The Hierarchy members that are mapped to your user group
are enabled to be used; those that are not mapped are
disabled.
For more details on the Hierarchy Browser window, see Hierarchy Browser.
4. Select the checkbox adjacent to the member name and click OK.
NOTE If you are not able to view the Combination Mapper pane
properly due to resolution issues, click Collapse View in the
Map toolbar.
1. Click adjacent to the Measure displayed under the Target Page column. The Business
Processor Selector window is displayed.
2. Select the checkbox adjacent to the Business Processor name and click .
In Business Processor Selector window, you can:
Search for a Business Processor by specifying the nearest keyword and clicking .
Click to define a new Business Processor. For more information see Create Business
Processor.
Click Ascending or Descending to sort the selected components in the ascending or
descending order.
Click to remove the selected Business Processors from the Selected Business Processors
pane.
3. Click OK. The selected Business Processors are listed under the Combination Mapper pane
along with the Source and Filter definition details.
After selecting Business Processor(s) in the Combination Mapper pane, you can set the Default
Target member, specify Parameters, and exclude child nodes for the Rule definition, if required.
You can set the selected Target member as default by clicking on the header bar of the
required Business Processor and selecting the Default Member checkbox.
When a Target member is selected as default, all the unmapped Source member combinations
for that Target object will be logically mapped to the default member and the corresponding
target appears disabled. Runtime parameters cannot be applied for such defaulted target BPs.
However, the logical mappings will not overwrite the physical mapping.
You can specify parameters for the selected Business Processor. Select the checkbox(s)
adjacent to the required Business Processor and click adjacent to the checkbox selected.
The Parameters pop-up is displayed.
For Classification Rules and Computation Rules with non-parameterized BP, the
Parameters pane is displayed as shown here:
Enter the required note in the text field and click OK.
For a Computation Rule with parameterized BP, the Parameters pop-up is displayed as
given.
Enter the required note in the text field. The Parameter Default Value is fetched from
the Business Processor definition, and the Assign Value can be entered manually;
this value is considered during Rule execution at runtime. You can also clear the Assign
Value field by clicking Clear Values. Click OK.
You can exclude child node(s) in the Combination Mapper pane if they are not
required in the Rule execution. Click (Exclude). The Rule Exclude window is
displayed.
NOTE The exclude icon is available only for the combinations with
physical mappings. When a default member is removed from
the target member, all logical mappings are removed retaining
only physical mappings.
The Rule exclude window displays only the child nodes associated with a Parent node. Ensure
that the selected parent has associated child nodes and is not the default member in the target.
Select the checkbox adjacent to the Rule code that you want to exclude and click OK.
Once all the necessary details are entered, click Save. The Rule definition is saved with the
provided details and is displayed in the Rule window.
Note that the default version of a new Rule definition created by an Authorizer is 0, and the one
created by a non-authorizer is -1. For more details on versioning, see Rule Definition Versioning.
The Audit Trail section of the Rule Definition (New Mode) window displays metadata
information about the Rule definition created. The User Comments section facilitates you to add
or update additional information as comments.
2. Click Edit in the List toolbar. The Edit button is disabled if you have selected multiple rules.
The Rule Definition (Edit Mode) window is displayed.
3. Edit the rule details as required. For more information, see Create Rule.
4. Click Save to save the changes.
NOTE • The rule with version 0 is the latest one and it can have
many versions, say 1 to n, where 1 is the oldest rule and
n is the next to the latest.
• A rule with version -1 is always in an inactive state.
You can view all the versions of a particular rule by providing the rule’s name or code and clicking
Search in the Search and Filter grid. (Ensure that the Version field is cleared, since it is
auto-populated with 0.)
2. Click Copy in the List toolbar. The Rule Definition (Copy Mode) window is displayed. The
Copy button is disabled if you have selected multiple Rules.
In the Rule Definition (Copy Mode) window, you can:
Create a new Rule definition with existing variables. Specify a new Rule Code and Folder.
Click Save.
Create a new Rule definition by updating the required variables. Specify a new Rule Code,
Folder, and update other required details. For more information, see Create Rule. Click
Save.
The new Rule definition details are displayed in the Rule window. By default, version “0” is set if
you have authorization rights, else the version is set to “-1”.
To approve the selected rule definitions, click Authorize and select Approve.
To reject the selected rule definitions, click Authorize and select Reject.
A rule is made available for use only after approval. For a rejected definition, a comment
with the rejection details will be added.
2. Click Export button in the toolbar and select PDF. The Export dialog is displayed.
The Export dialog displays the Export Format, Definition Type, and the names of the Selected
Definitions.
3. Click Export. The process is initiated and is displayed in a pop-up specific to the current
download. Once the PDF is generated, you can open/save the file from the File Download dialog
box.
You can either save the file on the local machine or view the file contents in a PDF viewer. The
downloaded PDF displays all the details such as Linked to, Properties, Master information, Audit
Trail, List, Mapping Details, and Comments of all the Rule definitions selected.
V_APP_ID Application ID
NOTE The hierarchy definition must be saved again after this change. Use
the Business Hierarchy Edit operation to do this.
4. Enable the flag to adjust the Rule query so that it picks up the dimensional data where the MISDATE
of execution falls between the start date and end date of the records present in the Dimension
table.
8.3 Process
A set of rules collectively form a Process. A process definition is represented as a Process Tree. The
Process option in the Rules Run Framework provides a framework that facilitates the definition and
maintenance of a process. By defining a process, you can logically group a collection of rules that
pertain to a functional process.
You can define a process with the existing metadata objects using a hierarchical structure, which
facilitates the construction of a process tree. A Process tree can have many levels and one or many
nodes within each level. Sub-processes are defined at level members and process hierarchy members
form the leaf members of the tree. See Process Hierarchy Members for more information.
Note the following:
• Precedence defined for each process determines the Process Initiation Sequence.
• If precedence is defined, the process execution (along with the associated Rules) happens based
on the precedence defined for each component.
• If no precedence is defined, all the processes within the process tree are initiated together in their
natural hierarchical sequence.
Consider the following illustration:
• If natural precedence is defined to the sub process SP1, process execution is triggered in the
sequence Rule 1 > SP1a > Rule 2 > SP1.
• If no precedence is defined, all the sub processes SP1, SP2, Rule 4, and Rule 5 are executed in
parallel.
Further, the business may require simulating conditions under different business scenarios and
evaluate the resultant calculations with respect to the baseline calculation. Such simulations are done
through the construction of Processes and Process trees. Underlying metadata objects such as Rules,
T2T Definitions, Processes, and Database Stored Procedures drive the process functionality.
Concurrent Rule Execution
You can define a process to combine different computation/ classification rules for concurrent
execution by marking the process or sub process as executable.
Conditions for execution
• Rules defined on different datasets cannot be combined.
• The executable process or sub process should update the same FACT table.
• Aggregation rules will be merged as separate rules for execution.
The Roles mapped for Process module are Process Access, Process Advanced, Process Authorize,
Process Read Only, Process Write and Process Phantom. Based on the roles mapped to your user
group, you can access various screens in the Process module. For more information on functions
mapped to these roles, see Appendix A.
The Process window displays the processes created in the current Information Domain with the
metadata details such as Code, Name, Folder, Version, and Active. For more information on how
object access is restricted, see Object Security.
You can search for specific Processes based on Code, Name, Folder, Version, or Active. The Folder
drop-down list displays all Public folders, shared folders to which your user group is mapped and
Private folders for which you are the owner. The Pagination option helps you to manage the view of
existing Processes within the system.
2. Click adjacent to the Folder field in the Linked to grid. The Folder Selector window is
displayed. The folders to which your user group is mapped are displayed.
a. Select the checkbox adjacent to the required folder. Click OK.
b. Click New from the List toolbar to create a new folder/segment. For more information,
see Segment Maintenance.
Table 74: Fields in the Master Information pane and their Descriptions
Code Enter a valid code for the process. Ensure that the code is alphanumeric
with a maximum of 30 characters in length and there are no special
characters except underscore “_”.
Name Enter a valid name for the process. Ensure that the process name is
alphanumeric and does not contain any of the following special
characters: #, %, &, +, ", and ~.
Version By default, the version field is displayed as <<NA>> for the new process
being created. Once the process definition is saved, an appropriate
version is assigned as either -1 or 0 depending on the authorization
permissions. For more information, see Process Definition Versioning.
Active By default, the Active field is displayed as <<NA>> for the new process
being created. Once the process definition is saved, the status is set to
Yes if you are an authorizer, or No if the created process needs to be
authorized by an authorizer.
Type Select the process type based on which you would like to create the
process from the drop-down list.
Executable Select the checkbox if you want to bunch rule executions for concurrency.
If you select the checkbox, you can add only Computation or
Classification Rules as Components. For more information, see
Concurrent Rule Execution.
Route Execution to High Precedence Node Select the checkbox if you want to route the execution of this Process
definition to the high precedence node set up in the AM server.
4. Click Properties in the Master Information grid. The Properties window is displayed.
You can edit the following tabulated details in the Properties window.
Last Operation Type By default, this field displays the last change done to the process
definition. While creating a process, the field displays the operation type
as Created.
5. Click OK. The properties are saved for the current process definition.
2. Enter the Subprocess Code. You cannot enter any special characters except underscore “_”.
3. Select the Executable checkbox to group the rules for concurrent execution. An executable sub
process can have only Classification/Computation Rules.
4. Click OK.
The sub process is listed under the root process as a branch.
NOTE You can further create sub processes for the existing processes
or the base process by selecting the process and following the
aforementioned procedure; however, an executable sub
process cannot have a sub process within it.
You can select the (pagination) icon to view more options under the available components. For
more information, see Process Hierarchy Members.
3. Select a Process Component and click to move the component to the Tasks In <Process
Name> pane.
In Component Selector window you can also:
Search for a component by specifying the nearest keyword in the Search field and clicking
button.
Click Ascending or Descending to sort the selected components in Ascending or
Descending alphabetical order.
Click to remove the selected components from the Tasks In <Process Name> pane.
4. Click OK. The components are listed under the selected process.
NOTE You can merge only rules that are part of the same dataset.
3. Specify the sub process code. The Executable checkbox will be selected. You cannot modify it.
4. Click OK. The merged rules will be placed under the new sub process.
3. Select Auto Map to override the predefined precedence and to set predecessor tasks as
precedence.
4. To manually select predecessor tasks for a task:
Select a task from Tasks In <Process Name> drop-down list. The other tasks are listed in
the Available Precedence pane.
NOTE You cannot select tasks as predecessor tasks if they have cyclic
dependencies with the selected task.
4. Select the process/ sub process to which you want to move the task.
5. Click OK. The window is refreshed and the task is displayed under the selected process.
2. Click Edit button in the List toolbar. The Edit button is disabled if you have selected multiple
Processes. The Process Definition (Edit Mode) window is displayed.
3. Modify the process details as required. For more information, see Create Process.
4. Click Save to save the changes.
NOTE • The process with version 0 is the latest one and it can
have many versions, say 1 to n, where 1 is the oldest
process and n is the next to the latest.
• A process with version -1 is always in an Inactive state.
You can view all the versions of a particular process by providing the process’s name or code and
clicking Search in the Search and Filter grid. (Ensure that the Version field is cleared, since it is
auto-populated with 0.)
2. Click Copy button in the List toolbar to copy a selected process definition. The Process
Definition (Copy Mode) window is displayed. The Copy button is disabled if you have selected
multiple processes.
In the Process Definition (Copy Mode) window you can:
Create a new process definition with existing variables. Specify a new Process Code and
Folder. Click Save.
Create a new process definition by updating the required variables. Specify a new Process
Code, Folder, and update other required details. For more information, see Create Process.
Click Save.
The new process definition details are displayed in the Process window. By default, version 0 is
set if you have authorization rights, else the version is set to -1.
To approve the selected process definitions, click Authorize and then click the Approve
button.
To reject the selected process definitions, click Authorize and then click the Reject button.
A process is made available for use only after approval. For a rejected definition, a comment
with the rejection details will be added.
2. Click Export in the toolbar and click PDF. A confirmation message is displayed.
3. Click Yes to confirm. The Export Options window is displayed.
The Export Options window displays the Export Format, Definition Type, the names of the
Selected Definitions, and the Trace Options.
4. To select the Trace Options:
Select the checkbox(s) adjacent to the available options.
Click . The selected options are displayed in the Selected Trace Options pane. You can
also select a trace option and click to deselect it from the Selected Trace Options pane.
5. Click Export. The process is initiated and is displayed in a pop-up specific to the current
download. Once the PDF file is generated, you can open/ save the file from the File Download
window.
You can either save the file on the local machine or view the file contents in a PDF viewer. The
downloaded PDF displays all the details such as Linked to, Properties, Master info, Audit Trail, List,
Mapping Details, and Comments of all the Process definitions selected.
Process or Run and click Show Details to view the definition details.
8.4 Run
The Run feature in the Rules Run Framework helps you to combine various components and/or
processes together and execute them with different underlying approaches. Further, run conditions
and/or job conditions can be specified while defining a run.
Three types of runs can be defined, namely Base Run, Simulation Run, and Instance Run.
Base Run allows you to combine different rules and processes together as jobs and apply run
conditions and job conditions.
Simulation Run allows you to compare the resultant performance/ calculations with respect to the
baseline runs by replacing an existing job with a simulation job (a job can be a rule or a process). This
comparison provides useful insights into the effect of anticipated changes to the business.
Instance Run allows you to combine Base Runs and Simulation Runs in addition to other components
from multiple information domains as Jobs. This eliminates the need for having different Run
definitions if some Jobs are available in Hive Information Domain and some are present in RDBMS
Information Domain.
The Roles mapped for Run module are Run Access, Run Advanced, Run Authorize, Run Read Only,
Run Write and Run Phantom. Based on the roles mapped to your user group, you can access various
screens in the Run module. For more information on functions mapped to these roles, see Appendix
A.
The Run window displays the runs created in the current Information Domain with the metadata
details such as Code, Name, Type, Folder, Version, and Active status. For more information on how
object access is restricted, see Object Security.
You can search for specific runs based on Code, Name, Folder, Version, Active status, or Type. The
Folder drop-down list displays all Public folders, shared folders to which your user group is mapped,
and Private folders for which you are the owner. The Pagination option helps you to manage the view
of existing runs within the system.
Table 76: Condition Types in the Create Run and their Descriptions
Run Condition A Run Condition is defined as a filter and all hierarchies (defined in the current
information domain) are available for selection.
You can select up to 9 run conditions.
A Run condition is defined for all Jobs. But it will be applied to a Job only if the
underlying target/destination entities of both Job and Hierarchy are common.
Job Condition A Job Condition is a further level of filter that can be applied at the component
level. This is achieved through a mapping process by which you can apply a
Job Condition to the required job.
You can select only one Job Condition and the hierarchy that you have already
selected as a run condition cannot be selected as the Job Condition again.
2. Click adjacent to the Folder field in the Linked to grid. The Folder Selector window is
displayed. The folders to which your user group is mapped are displayed.
a. Select the checkbox adjacent to the required folder. Click OK.
b. Click New from the List toolbar to create a new folder/segment. For more information,
see Segment Maintenance.
Table 77: Field Names in the Master information pane and their Descriptions
Code Enter a valid code for the run. Ensure that the code value specified is a
maximum of 30 characters in length and it does not contain any
special characters except “_”.
The code is unique and case sensitive. It is used to identify a run
definition during execution.
Note: You cannot use the same code of a rule which has been deleted
from the UI.
Name Enter a valid name for the run. Ensure that the Run Name is alphanumeric
and does not contain any of the following special characters: #, %, &,
+, ", and ~.
Note that the name is not required to be unique.
Version By default, the version field is displayed as <<NA>> for the new run
being created. Once the run definition is saved, an appropriate version
is assigned as either -1 or 0 depending on the authorization
permissions. For more information, see Run Definition Versioning.
Active By default, the Active field is displayed as <<NA>> for the new run
being created. Once the run definition is saved, the status becomes
Yes if you are an authorizer, or No if the created Run needs to be
authorized by an authorizer.
Type Select the type of the run from the drop-down list. The available types
are Base Run, Simulation Run, and Instance Run.
Route Execution to High Precedence Node Select the checkbox if you want to route the execution of this Run
definition to the high precedence node set up in the AM server.
4. Click Properties in the Master information grid. The Properties window is displayed.
You can edit the following tabulated details in the Properties window:
Last Operation Type By default, this field displays the last change done to the run
definition. While creating a run, the field displays the operation type
as Created.
5. Click OK. The properties are saved for the current Run definition.
To select a condition for a run in the Run Definition (New Mode) window:
1. Click Selector from the List toolbar and select Run Condition. The Filter Selector window
is displayed.
You can select the (pagination) icon to view more options under the available components. The
List pane displays Hierarchies or Filters based on the option selected in the drop-down list in
the Search in pane. The options are:
Hierarchy- Displays all Business Hierarchies defined in the information domain.
Filter-Data Element- Displays all Data Element Filters defined in the information domain.
Filter-Hierarchy - Displays all Hierarchy Filters defined in the information domain.
Filter-Group - Displays all Group Filters defined in the information domain.
Filter-Attribute - Displays all Attribute Filters defined in the information domain.
2. Select the checkbox adjacent to the Hierarchy or Filter that you want to select as the Run
condition and click .
To know about the operations you can perform in this window, see the Filter Selector
window.
3. Click OK. The selected Hierarchies are listed in the Run Definition (New Mode) window.
4. If the selected Run condition is a Parent Child hierarchy, the Use Descendants checkbox is
displayed. If the checkbox is selected for a hierarchy, the descendants will be automatically
applied and need not be selected in node selection from the Hierarchy Browser window.
1. Click Selector from the List toolbar and select Job. The Component Selector window is
displayed.
On the List pane, you can click button to expand the members and view the job
components. For more information, see Process Hierarchy Members.
2. Select a job component and click to move the component to the Tasks pane.
NOTE You cannot select different Jobs with the same unique code in a
run definition. In such cases, the Jobs should be added to a
process and the process should be added to the run definition.
Search for a component by specifying the nearest keyword and clicking . It may not
display search results if the branch of that component has not been expanded.
Click Ascending or Descending button to sort the selected components in ascending or
descending alphabetical order.
1. Click Selector from the List toolbar and select Job. The Component Selector window is
displayed.
For Instance Run, you can add Base Run and Simulation Run as Jobs.
2. Select the information domain in which the job component you want to add is present, from the
Infodom drop-down list. By default, the selected Application’s Information Domain is displayed.
The drop-down list displays all information domains to which your user group is mapped except
sandbox information domains.
3. Select a job component and click to move the component to the Tasks pane.
If you want to add a job component from another information domain, select the required
information domain from the drop-down list. The Component list refreshes and you can
add the required Job components.
For more information see Job Selector.
4. Click OK. The components are listed under the List pane in the Run Definition window.
1. Click Selector from the List toolbar and select Job Condition. The Filter Selector window
is displayed.
2. Select the checkbox adjacent to the hierarchy that you want to select as the Job condition and
click .
To know about the operations you can do in this window, see Filter Selector window.
NOTE Ensure that you have selected only one Job Condition and the
same hierarchy is not selected as both Run and Job conditions.
3. Click OK.
From the List grid in the Run Definition (New Mode) window, you can also:
Click Move to change a selected run condition to a job condition and vice versa. For
Instance Run, the Move option is disabled.
Click Show Details to view the metadata information of the selected member.
If the selected Job condition is a Parent Child hierarchy, the Use Descendants checkbox is
displayed. If the checkbox is selected for a hierarchy, the descendants will be automatically
applied and need not be selected in node selection from the Hierarchy Browser window.
Once all the necessary information in the first window of the Run Definition (New Mode) is populated,
click Next to continue with the subsequent procedures of defining a Run.
The second window of Run Definition (New Mode) window displays all the information you have
provided in the Linked to and Master information grids. You can view the selected filters in the Run
Condition grid and selected jobs along with the job condition in the Detail Information grid in case of
Base Run and Simulation Run. For Instance Run, only jobs will be displayed.
When you expand a job that is a process, the Object, Parent Object, Precedence, and Type columns are
populated.
NOTE This option will be available only if you have selected Hierarchy
as the Run condition.
2. Select a member/node and click to select the same. Click to select the member as Self,
Self & Descendants, Self & Children, Parent, Siblings, Children, Descendants, or Last
Descendants. For more information, see Hierarchical Member Selection Modes.
In the Hierarchy Browser window you can also:
Click to focus only on the selected branch. The Available Values pane shows the members of the selected branch only.
Click to display member's numeric codes on the right. The icon changes to .
Click to display member's numeric codes on the left. The icon changes to .
Click to show only member names. This is the default view. The icon changes to .
Click to display member's alphanumeric codes on the right. The icon changes to .
Click to display member's alphanumeric codes on the left. The icon changes to .
Click to display only member names. This is the default view. The icon changes to .
Select a member and click or to re-arrange the members in the Selected Values
pane.
Select a member and click to move it to the top or click to move it to the
bottom.
Click to launch the Search panel. Here you can search based on Dimension Member
Numeric Code, Dimension Member Name or Dimension Member Alphanumeric Code.
You can also search in the grid based on member name using the Search field.
3. Click corresponding to the run condition to view the SQL query. The SQL query is formed
based on the hierarchical member selection mode. The Preview SQL Query window is displayed
with the resultant SQL equivalent of the run condition.
The Detail Information grid displays the jobs and job condition defined for the run definition.
1. Select the checkbox adjacent to the Run Code whose details are to be viewed.
2. Click Edit in the List toolbar. Edit button is disabled if you have selected multiple Runs. The
Run Definition (Edit Mode) window is displayed.
3. Edit the Run details as required. For more information, see Create Run.
4. Click Save to save the changes.
NOTE • The run with version 0 is the latest one and it can have
many versions say 1 to n, where 1 is the oldest Run and
n is the next to latest.
• A run with version -1 will always be in an Inactive state.
You can view all the versions of a particular run by providing the run’s name or code and clicking
Search in the Search and Filter grid. (Ensure that the Version field is cleared, since it is
auto-populated with 0.)
2. Click Copy in the List toolbar to copy a selected Run definition. The Run Definition (Copy
Mode) window is displayed. Copy button is disabled if you have selected multiple Runs.
In the Run Definition (Copy Mode) window you can:
Create a new Run definition with existing variables. Specify a new Run Code and Folder.
Click Save.
Create a new Run definition by updating the required variables. Specify a new Run Code,
Folder, and update other required details. For more information, see Create Run. Click
Save.
The new Run definition details are displayed in the Run window. By default, version 0 is set if you have
authorization rights, else the version is set to -1.
To approve the selected run definitions, click Authorize and select Approve.
To reject the selected run definitions, click Authorize and select Reject.
A run is made available for use only after approval. For a rejected definition, a comment with the
rejection details will be added.
2. Click Export button in the List toolbar and click the PDF button in the popup. The
Export dialog is displayed.
The Export window displays the Export Format, Definition Type, the names of the Selected
Definitions, and the Trace Options.
Select the checkbox adjacent to Rule or Process if you want to export only the rule details or
Process details respectively. If you do not select any checkbox, all details of the selected run
definitions will be exported.
Click . The selected options are displayed in the Selected Trace Options pane. You can
also select a trace option and click to deselect it from the Selected Trace Options pane.
3. Click Export. The process is initiated and is displayed in a pop-up specific to the current
download. Once the PDF is generated, you can open/save the file from the File Download
dialog.
You can either save the file on the local machine or view the file contents in a PDF viewer. The
downloaded PDF displays all the details such as Linked to, Properties, Master info, Audit Trail, List, and
Comments of all the Run definitions selected.
1. Select the checkbox adjacent to the Run Code which you want to execute and click Fire
Run in the List toolbar. The Fire Run window is displayed.
2. Enter the field details as tabulated below:
The following table describes the fields in the Fire Run window.
Table 79: Fields in the Fire Run window and their descriptions
Request Type Select the request type either as Single or as Multiple from the drop-
down list.
Single Request - You need to provide the MIS Date during Batch
execution from the Operations module.
Multiple Request - You can run the batch with the same MIS date
multiple times from the Operations module.
Batch Select the Batch either as Create or as Create & Execute from the
drop-down list.
Create - The batch will be created and needs to be executed from the
Operations module.
Create & Execute - The batch will be created and executed. You can
monitor it from the Operations module.
Wait Select Yes and provide the Duration in seconds after which the run
definition should be executed.
Select No to execute it immediately.
3. Click OK. The details are saved and the run definition is executed as per the Fire Run details. For
information on runtime parameters supported during run execution, see Passing Runtime
Parameters section.
The Manage Run Execution window displays the Run Execution requests created in the current
Information Domain with the metadata details such as Run name, Run Execution Description, Run
Execution ID, Type, MIS Date, and Request Status. If Object Security is implemented, see the Object
Security section to understand the behavior.
You can also search for specific Runs based on Run Name, Run Execution Description, MIS Date, Run
Execution ID, Type, or Request Status. The Pagination option helps you to manage the view of existing
Run Execution requests within the system.
2. Click adjacent to the Run field. The Run Selector window is displayed.
b. Search for a Run definition by specifying any keyword and clicking button.
c. Select the checkbox adjacent to the Run definition you want to select and click Ok.
The selected Run is displayed in the Run field, along with the Run ID.
Table 80: Fields in the Master Information and Execution Details pane and their Descriptions
Run Execution ID The default ID of a newly created Run Execution is <<New >>
Run Execution Code Enter a valid Run Execution Code. Ensure that the Run Execution Code
specified is a maximum of 30 characters in length and does not contain
any special characters except “_”.
Run Execution Name Enter the Name of the Run Execution. Ensure that the Run Execution Name
is alphanumeric and does not contain any of the following special
characters: #, %, &, +, ", ~, and ‘.
5. Click Save. For information on runtime parameters supported during Manage Run Execution,
see Passing Runtime Parameters section. The Run Execution is saved and a confirmation dialog
appears.
The Audit Trail section at the bottom of the Manage Run Definition (New Mode) window displays
metadata information about the Manage Run definition created. The User Comments section
facilitates you to add or update additional information as comments.
2. Click Edit in the List toolbar. Edit button is disabled if you have selected multiple Manage
Run Definitions.
The Manage Run Definition (Edit Mode) window is displayed.
3. Edit the Manage Run definition details as required.
For more information, see Manage Run Definition.
You can select the Request Status as Open, Closed, To be Deleted, or Final depending on the
current status of the definition:
Status Open creates/updates a Manage Run definition.
Status Closed creates a Manage Run definition along with a Batch.
Status To be Deleted indicates the Manage Run definition is marked for deletion.
Status Final indicates the Manage Run definition is successfully executed with expected
results.
The Execution Status field displays the current execution status of a triggered Run as Success,
Failure, or Ongoing and <<NA>> for a non-executed Run.
4. Click Save to save the changes.
8.6 Utilities
This section consists of information related to the utilities available in the Rules Run Framework
module of OFSAAI.
NOTE Before you begin, ensure that you have registered all the
required components within the Run Rule Framework (RRF).
For detailed information, see OFSAAI Administration Guide.
The Component Registration window displays the current components in the left pane and the field
values of the selected component in the right pane. The parameters described for a component in this
window are Component ID, ICC Component ID, Image Name, Parent ID, Class Path, and Tree Order.
The Audit Trail section at the bottom of the Component Registration window displays metadata
information about the Component selected/created.
1. From the Component Registration window, click New. The fields in the right pane of the
Component Registration window are reset.
2. Enter the details as tabulated below.
The following table describes the fields in the Component Registration window.
Table 81: Fields in the Component Registration window and their Descriptions
Parent ID: Select the Parent ID from the drop-down list. The abbreviated forms of the component IDs are displayed in the list. You can check the Available Components pane for the full forms of the abbreviations used.
ICC Component ID: Select the ICC Component ID from the drop-down list.
Image Name: Enter the image name that is allocated for the component.
3. Click Save. The fields are validated and the component is saved.
1. Select the Component from the left pane tree structure, whose details are to be updated.
2. Click Edit button. The fields of the selected component are editable.
3. Edit the Component details as required. For more information, see Create Component.
4. Click Save to save the changes.
1. Select the Component whose details are to be removed and click Remove.
2. Click OK in the warning dialog to confirm deletion.
The Component Registration window confirms the deletion of the component definition.
8.7 References
This section of the document consists of information related to intermediate actions that are required
while completing a task. The procedures are common to all the sections and are referenced wherever
required.
To calculate this, a DT is created using RRF, where necessary expressions are defined. The instructions
to multiply the values of all these three columns are encapsulated in the Rule.
8.7.2.3 LRM - Updating Revised Maturity Date Surrogate Key With Maturity Date
Surrogate Key
1. This Rule is created to update the Revised Maturity Date for the assets and liability accounts
when the revised maturity date is absent.
8.7.2.5 LRM - Residual Maturity Less Than Liquidity Horizon Flag Update
1. This Rule is created to update the accounts as ‘Y’, where the Residual Maturity Date falls within
the liquidity horizon.
2. The source hierarchy related to Run is considered.
3. The destination Measure is a flag which indicates if the Residual Maturity is less than the
liquidity horizon, and is defined as the target in the Rule.
4. The business process containing the flag related to the Residual Maturity that is less than the
liquidity horizon is mapped to the destination Measure.
5. The relevant dataset LRM - Residual Maturity Less Than Liquidity Horizon Flag Update is
created and updated to fetch the relevant data and match the Business Processor, hierarchies,
Measures, and tables used in processing this Rule.
After these Rules are created, they are added to the process ‘LRM – BIS – Determining Revised
Maturity’, in the order mentioned above. This process is stitched to a Run which is used to process the
LCR calculation related to the BIS regulations in LRM.
Table 82: Components in the Process Hierarchy Members and their Descriptions
Component Description
Data Extraction Rules Display all the Extract definitions defined through OFSAAI Data Management
Tools.
Load Data Rules Display the following two sub-types of definitions:
File Loading Rules display all the File to Table definitions defined through OFSAAI Data Management Tools.
Insertion Rules (Type 1 Rules) display all the Table to Table definitions defined through OFSAAI Data Management Tools.
Processes Display all the existing processes defined through Process Framework which
have Active status as “Yes” and Version “0”.
Essbase Cubes Display all the Essbase cubes defined for the selected Information Domain in
OFSAAI Data Model Management.
Note: Only the cubes under the segment to which the user is mapped are displayed.
Model Display all the existing model definitions defined in the Modeling framework
windows.
Stress Testing Display all the existing stress testing definitions defined in the Variable Shock
Library, Scenario Management, and Stress Definition windows.
Data Quality Displays all the Data Quality groups defined from the OFSAAI Data Quality Framework.
The DQ Rule framework is registered with RRF. While passing additional parameters during RRF
execution, the additional parameters are passed differently when compared to DQ Group execution
(see the illustrative sketch after this table). For example, if the additional parameters to be passed are:
$REGION_CODE#V#US;$CREATION_DATE#D#07/06/1983;$ACCOUNT_BAL#N#10000.50, then they are passed as:
"REGION_CODE","V","US","CREATION_DATE","D","07/06/1983","ACCOUNT_BAL","N","10000.50". If you want
to pass a threshold percentage (for example, 50%), then the parameter string is passed as follows:
"50","REGION_CODE","V","US","CREATION_DATE","D","07/06/1983","ACCOUNT_BAL","N","10000.50". In the
absence of the threshold parameter, it is assumed to be 100% by default.
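The following is a minimal illustrative sketch, in Python, of the conversion described above; the function name and sample values are assumptions made only for this example and are not part of the product.

# Convert a DQ Group parameter string such as
#   $REGION_CODE#V#US;$CREATION_DATE#D#07/06/1983;$ACCOUNT_BAL#N#10000.50
# into the comma-separated, quoted token form used when the DQ rule is executed
# through RRF, optionally prefixing a threshold percentage.
def to_rrf_parameters(dq_params, threshold=None):
    tokens = []
    if threshold is not None:
        tokens.append(str(threshold))
    for triplet in filter(None, dq_params.split(";")):
        key, dtype, value = triplet.lstrip("$").split("#")
        tokens.extend([key, dtype, value])
    return ",".join('"{0}"'.format(token) for token in tokens)

print(to_rrf_parameters(
    "$REGION_CODE#V#US;$CREATION_DATE#D#07/06/1983;$ACCOUNT_BAL#N#10000.50",
    threshold=50))
# "50","REGION_CODE","V","US","CREATION_DATE","D","07/06/1983","ACCOUNT_BAL","N","10000.50"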
The parameters needed to execute all the listed components are explained in the Seeded Component
Parameters section.
You can also click to select all the members to the Selected Values pane. Click to
deselect a selected member from the Selected Values pane or click to deselect all the members.
Creating Rule:
Rule definition with Pre-Built Flag set to “Y” > Build the Rule query.
Rule definition with Pre-Built Flag set to “N” > Do not build the Rule query during Rule Save.
Executing Rule:
Pre-Built Flag set to “Y” > Retrieve the Rule query from the appropriate table and execute it.
Pre-Built Flag set to “N” > Build the Rule query by referencing the related metadata tables and then execute it.
For example, consider a scenario where Rule 1 (RWA calculation), using Dataset DS1, is to be
executed. If the Pre-Built Flag is set to “N”, the metadata details of the From Clause and Filter Clause
of DS1 are searched through the database to form the query. Whereas, when the Pre-Built Flag is set
to “Y”, the From Clause and Filter Clause details are retrieved from the appropriate table to form the
query, which is then triggered for execution.
Like Dataset, pre-compiled rules also exist for other Business Metadata objects such as Measures,
Business Processors, Hierarchies, and so on.
Note the following:
When you are sure that the Rule definition is not modified in a specific environment (for example,
production), you can set the flag for all Rule definitions to “Y”. This in turn helps improve performance
during Rule execution. However, if the Rule is migrated to a different environment and there is a
change in the query, change the flag back to “N”; you may also need to resave the Rule, since there
could be a change in metadata.
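The following is a minimal sketch, in Python, of the decision the Pre-Built Flag drives at execution time; the data structures and function name are illustrative assumptions, not product code.

# Pre-Built Flag = "Y": reuse the query stored when the Rule was saved.
# Pre-Built Flag = "N": rebuild the query from the metadata tables on every run.
def resolve_rule_query(rule, stored_queries, dataset_metadata):
    if rule["pre_built_flag"] == "Y":
        # Query was persisted on Rule save; retrieve it as-is.
        return stored_queries[rule["rule_id"]]
    # Rebuild the query by looking up the Dataset's From and Filter clauses.
    dataset = dataset_metadata[rule["dataset"]]
    return "SELECT * FROM {0} WHERE {1}".format(
        dataset["from_clause"], dataset["filter_clause"])

rule = {"rule_id": "RULE1", "pre_built_flag": "N", "dataset": "DS1"}
dataset_metadata = {"DS1": {"from_clause": "FCT_ACCOUNTS",
                            "filter_clause": "MIS_DATE = :MISDATE"}}
print(resolve_rule_query(rule, {}, dataset_metadata))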
IP Address (System Defined): Refers to the IP Address of the server where the OFSAAI Database components for the particular information domain have been installed. This IP Address also specifies the location (server hostname / IP Address) where the component is to be executed.
Optional Parameters (System Defined): It is a set of different parameters like Run ID, Process ID, Exe ID, and Run Surrogate Key. For example, $RUNID=123,$PHID=234,$EXEID=345,$RUNSK=456
Operation (User Defined): It is a drop-down list with the following values - "ALL", "GENDATAFILES", and "GENPRNFILES" - to generate Data files, PRN files, or both, during Cube build. Default value: ALL.
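As an illustration of the optional-parameter format shown above, the following minimal Python sketch parses such a string into individual values; the function name is an assumption made for this example only.

# Parse an optional-parameter string of the form
#   $RUNID=123,$PHID=234,$EXEID=345,$RUNSK=456
# into a dictionary keyed by parameter name.
def parse_optional_parameters(param_string):
    params = {}
    for pair in filter(None, param_string.split(",")):
        key, value = pair.split("=", 1)
        params[key.lstrip("$")] = value
    return params

print(parse_optional_parameters("$RUNID=123,$PHID=234,$EXEID=345,$RUNSK=456"))
# {'RUNID': '123', 'PHID': '234', 'EXEID': '345', 'RUNSK': '456'}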
IP Address (System Defined): Refers to the IP Address of the server where the OFSAAI Database components for the particular information domain have been installed. This IP Address also specifies the location (server hostname / IP Address) where the component is to be executed.
Operation (User Defined): It is a drop-down list with the following values - "ALL", "BUILDDB", "TUNEDB", "PROCESSDB", "DLRU", "ROLLUP", "VALIDATE", "DELDB", "OPTSTORE". Default value: ALL.
IP Address (System Defined): Refers to the IP Address of the server where the OFSAAI Database components for the particular information domain have been installed. This IP Address also specifies the location (server hostname / IP Address) where the component is to be executed.
Source Name (System Defined): The scope of T2F is limited to the Source of the tables, and this gives the name of the source.
IP Address (System Defined): Refers to the IP Address of the server where the OFSAAI Database components for the particular information domain have been installed. This IP Address also specifies the location (server hostname / IP Address) where the component is to be executed.
Source Name (System Defined): The scope of this component is limited to the source, and it gives the name of the source file.
Load Mode (System Defined): Additional parameter to differentiate between F2T and T2T. Default value: File To Table.
Data File Name (User Defined): Name of the source file. If not specified, the source name provided in the definition will be used.
IP Address (System Defined): Refers to the IP Address of the server where the OFSAAI Database components for the particular information domain have been installed. This IP Address also specifies the location (server hostname / IP Address) where the component is to be executed.
Source Name (System Defined): The scope of this component is limited to the source, and it gives the name of the source table.
Load Mode (System Defined): Additional parameter to differentiate between F2T and T2T. Default value: Table To Table.
Default Value (System Defined): It is a set of different parameters like Run ID, Process ID, Exe ID, and Run Surrogate Key. For example, $RUNID=123,$PHID=234,$EXEID=345,$RUNSK=456
Data File Name (User Defined): Not applicable, since this parameter is only used for F2T, not T2T.
IP Address (System Defined): Refers to the IP Address of the server where the OFSAAI Database components for the particular information domain have been installed. This IP Address also specifies the location (server hostname / IP Address) where the component is to be executed.
Operation (System Defined): Refers to the operation to be performed. You can click the drop-down list to select additional parameters to direct the engine behavior. Default value: ALL.
Optional Parameters (System Defined): It is a set of different parameters like Run ID, Process ID, Exe ID, and Run Surrogate Key. For example, $RUNID=123,$PHID=234,$EXEID=345,$RUNSK=456
IP Address (System Defined): Refers to the IP Address of the server where the OFSAAI Database components for the particular information domain have been installed. This IP Address also specifies the location (server hostname / IP Address) where the component is to be executed.
Operation (System Defined): Refers to the operation to be performed. You can click the drop-down list to select additional parameters to direct the engine behavior. Default value: ALL.
Optional Parameters (System Defined): It is a set of different parameters like Run ID, Process ID, Exe ID, and Run Surrogate Key. For example, $RUNID=123,$PHID=234,$EXEID=345,$RUNSK=456
IP Address (System Defined): Refers to the IP Address of the server where the OFSAAI Database components for the particular information domain have been installed. This IP Address also specifies the location (server hostname / IP Address) where the component is to be executed.
Operation (System Defined): Refers to the operation to be performed. You can click the drop-down list to select additional parameters to direct the engine behavior. Default value: ALL.
Optional Parameters (System Defined): It is a set of different parameters like Run ID, Process ID, Exe ID, and Run Surrogate Key. For example, $RUNID=123,$PHID=234,$EXEID=345,$RUNSK=456
8.7.6.9 Process
The process component does not have any seeded parameters and is the same defined in the Process
window.
Batch Parameter (System Defined): This determines whether the implicit system parameters, like batch ID, MIS date, and so on, are to be passed or not. Default value: Y.
9 Operations
Operations refers to administration and processing of business data to create the highest level of
efficiency within the system and to derive results based on a specified rule. Operations framework
within the Infrastructure system facilitates you (system administrator) to:
• Configure and operate the business processes effectively.
• Maintain the Operator Console by Defining and Executing Batches through the Operations
menu.
• Monitor the Batches scheduled for execution.
The roles mapped for Operations module are Batch Access, Batch Advanced, Batch Read Only, and
Batch Write.
If you require users to access only selected modules, enable the access to specific-module functions
and do not enable access to the Operator Console. Enabling access to the Operator Console gives
users access to all the Batch modules.
For example, if a user requires to access only the Batch Monitor module, map the user to Batch
Monitor Link function and ensure the user does not have access to the Operator Console function.
For more details on roles and functions, see Appendix A.
The Operations section discusses the following topics:
• Batch Maintenance
• Batch Execution
• Batch Scheduler
• Batch Monitor
• Processing Report
• Batch Cancellation
• View Log
Table 101: Fields in the Batch Maintenance Add window and their Descriptions
Field Description
Batch Name: The Batch Name is auto-generated by the system. You can edit it to specify a Batch Name based on the following conditions (see the illustrative sketch after this table):
The Batch Name should be unique across the Information Domain.
The Batch Name must be alphanumeric and should not start with a number.
The Batch Name should not exceed 41 characters in length.
The Batch Name should not contain any special characters except “_”.
Note: The special characters that are not supported are as follows:
Character Description
----------- -------------
! Exclamation point
" Double quotes
` Back quote
* Asterisk
+ Plus sign
; Semicolon
? Question mark
^ Caret
| Pipe character
~ Tilde character
' Apostrophe
\ Backslash
/ Forward slash
@ At sign
Batch ID (If duplicate Batch is selected): It is mandatory to specify the Batch ID if the Duplicate Batch option is selected. Select the required Batch ID from the list.
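The Batch Name conditions above can be summarized with the following minimal Python sketch; it is only an illustration of the stated rules (uniqueness across the Information Domain must still be checked against existing Batches) and not the product's own validation.

import re

# Alphanumeric with "_" allowed, must not start with a number, at most 41 characters.
# Whether a leading underscore is accepted is an assumption in this sketch.
BATCH_NAME_PATTERN = re.compile(r"^[A-Za-z_][A-Za-z0-9_]{0,40}$")

def is_valid_batch_name(name):
    return bool(BATCH_NAME_PATTERN.match(name))

print(is_valid_batch_name("DAILY_LOAD_BATCH_01"))  # True
print(is_valid_batch_name("1_DAILY_LOAD"))         # False: starts with a number
print(is_valid_batch_name("DAILY LOAD"))           # False: space is not allowed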
3. Click Save to save the Batch definition details. The new Batch definition details are displayed in
the Batch Name section of Batch Maintenance window with the specified Batch ID.
In the Batch Name tool bar of Batch Maintenance window, you can select the Batch ID and do
the following:
Click the Edit button to change the status of the Batch to Non Editable (NE).
By default, a newly created Batch has the status Editable (E).
Table 102: Fields in the Task Definition Add window and their Descriptions
Field Description
3. Click Save to save the task definition details. The new task details are displayed in the Task
Details of the Batch Maintenance window with the Task ID.
In the Task Details tool bar of Batch Maintenance window you can select the Task ID and do the
following:
Click Add button to add another Task.
1. Click button under the Precedence column of the task for which you want to add
precedence task.
The Task Precedence Mapping browser is displayed.
NOTE Task Precedence option is disabled if a batch has only one task
associated.
Select the required Task from the Task List and click . You can press Ctrl key for multiple
selections.
To remove a Task, select the task from Select Tasks pane and click .
The Batch Execution window displays the list of only those Batches which have at least one task
associated, with the other details such as Batch ID and Batch Description. When you select a Batch ID
in the list, the Task Details sections displays all the defined Tasks associated with the Batch.
The Batch Details section in the Batch Execution window lists the Batches depending on the Batch
Mode selected.
• The Run mode displays the Batch definitions which are newly defined and which have been
scheduled for execution.
• The Restart Mode displays the Batch definitions which were not executed successfully or which
were interrupted during the previous Batch execution.
• The Rerun mode displays the Batch definitions which have been successfully executed, failed,
cancelled, or even interrupted during the previous Batch execution.
You can search for a specific Batch based on the Batch ID, Batch Description, Module, or Last Modified
Date. The pagination option helps you to view the list of existing Batches within the system.
On completion of batch execution, if the batch fails, a notification mail is sent to all users mapped to
the user group with the OPRMON role mapped to them.
2. Select the checkbox adjacent to the Batch ID which has to be executed. The specified task(s)
defined to the selected Batch are displayed in the Task Details section.
In the Batch Details tool bar, click Schedule Batch button to define new or modify the
pre-defined Batch Schedule. For more information, see Batch Scheduler.
In the Task Details tool bar, click Exclude/Include button to Exclude/Include a task, or
click Hold/Release button to hold or release a task before executing the Batch. For
more information, see Modify Task Definitions of a Batch.
3. Specify the Information Date (mandatory) by clicking (calendar) button. The specified date
is recorded for reference.
NOTE You can also modify the required task parameters of the
selected Batch and include the changes during the Batch rerun.
For more information, see Specify Task Details.
4. Click Execute Batch button and select OK in the information dialog to confirm Batch Execution.
An information dialog is displayed indicating that Batch Execution is triggered successfully.
2. Select the checkbox adjacent to the Batch ID which has to be executed. The specified Task(s)
defined to the selected Batch are displayed in the Task Details section.
In the Batch Details tool bar, click Schedule Batch button to define new or modify the
pre-defined Batch Schedule. For more information, see Batch Scheduler.
3. Select the Information Date from the drop-down list. This is a mandatory field.
4. Select the Batch Run ID from the drop-down list. This is a mandatory field.
In the Task Details tool bar, click Exclude/Include button to exclude or include a task,
or click Hold/Release button to hold or release a task before executing the Batch. For
more information, see Modify Task Definitions of a Batch.
NOTE The Tasks in a Batch which have failed during the execution
process are indicated in Red in the Task Details section. You
can modify the required task parameters in Specify Task
Details window and include the changes during the Batch
restart. Else, the tasks fail again during the Batch Restart.
5. Click Execute Batch button and select OK in the information dialog to confirm Batch Execution.
An information dialog is displayed indicating that Batch Execution is triggered successfully.
In the Batch Details tool bar, click Schedule Batch button to define new or modify the
pre-defined Batch Schedule. For more information, see Batch Scheduler.
3. Select the Information Date from the drop-down list. This is a mandatory field.
4. Select the Batch Run ID from the drop-down list. This is a mandatory field.
In the Task Details tool bar, click the Exclude/Include button to exclude or include a
task, or click the Hold/Release button to hold or release a task before executing the Batch.
For more information, see Modify Task Definitions of a Batch.
NOTE You can also modify the required task parameters of the
selected Batch and include the changes during the Batch rerun.
For more information, see Specify Task Details.
5. Click Execute Batch button and select OK in the information dialog to confirm Batch Execution.
An information dialog is displayed indicating that Batch Execution is triggered successfully.
To exclude a task, select the required task from the Available Tasks list and click . You can
press Ctrl key for multiple selections.
To include an excluded task, select the required task from the Set Tasks list and click .
You can press Ctrl key for multiple selections.
To Hold a task, select the required task from the Available Tasks list and click . You can
press Ctrl key for multiple selections.
To release a held task, select the required task from the Set Tasks list and click . You can
press Ctrl key for multiple selections.
You can click the Refresh button in the Server Time section to view the current Server Time while
defining a Batch schedule. You can search for a specific Batch based on the Batch ID Like, Batch
Description Like, Module, or Last Modified Date.
3. Select the Schedule option as one of the following, and specify the related details as tabulated.
The following table shows the Schedule Options and its Schedule Task Details.
Once (default option):
Specify the Date on which the Batch has to be scheduled for processing, using the Calendar.
Enter the Run Time at which the Batch should be run, in hours (hh) and minutes (mm) format.
Enter the number of Lag days, which determines the MIS date for which the Batch is run. For the schedule type “Once”, Lag days is optional.
Daily:
Specify the Start and End dates between which the Batch has to be scheduled for processing, using the Calendar.
Enter the Run Time at which the Batch should be run, in hours (hh) and minutes (mm) format.
Enter the number of Lag days, which determines the MIS date for which the Batch is run.
Enter the frequency of the Batch Run in the Every field as per the defined schedule type. For example, Every 2 day(s).
Weekly:
Specify the Start and End dates between which the Batch has to be scheduled for processing, using the Calendar.
Enter the Run Time at which the Batch should be run, in hours (hh) and minutes (mm) format.
Enter the number of Lag days, which determines the MIS date for which the Batch is run.
Enter the frequency of the Batch Run in the Every field as per the defined schedule type. For example, Every 2 week(s).
Select the checkbox adjacent to the Days of the Week to specify the days on which you need to run the Batch schedule.
Monthly:
Specify the Start and End dates between which the Batch has to be scheduled for processing, using the Calendar.
Enter the Run Time at which the Batch should be run, in hours (hh) and minutes (mm) format.
Enter the number of Lag days, which determines the MIS date for which the Batch is run.
Select the Interval option to enter the frequency of the Batch Run in the Every field, or select Random and select the checkbox adjacent to the Months on which you need to run the Batch schedule.
Do one of the following:
Select the Dates (default) option and enter the Dates of the Month on which you need to run the Batch schedule. Select the Include Month’s Last Date checkbox to also include the last date of the month.
-Or-
Select Occurrence, specify the occurrence of the week day, and select the specific weekday from the drop-down list.
4. Click button in the Existing Schedule toolbar. The details of the scheduled Batch are
displayed in the Batch Scheduler pane.
5. Modify the required details. You can modify the Start and End dates, Run Time, Lag days, and
other details depending on the Schedule Type selected. For more information, see Creating
Batch Schedule.
6. Click Save to save the modified details of an existing Batch schedule.
You can also do the following in the Existing Schedule section of the Batch Scheduler window:
Click button to view details of the selected Batch schedule. and buttons are
displayed.
You should have Batch Read Only User Role mapped to your User Group to monitor a Batch. The
Batch Monitor window displays a list of Batches with the other details such as Batch ID and Batch
Description.
You can search for a specific Batch based on Date range, Module, Status, and Batch Description. The
Batches listed in the Batch Details section can be sorted based on the current state as Successful,
Failed, Held, or New.
You can view and monitor the required Batch definitions and the corresponding task details. You can
also export the values in Microsoft Excel format for reference.
To monitor a Batch in the Batch Monitor window:
1. Select the checkbox adjacent to the Batch ID whose details are to be monitored.
You can also search for a specific Batch by using the Search option and filter the search results
by selecting the required Status as Successful, Failed, Held, or Not Started in the drop-down list.
2. Enter the Batch Run Details as tabulated.
The following table describes the fields in the Batch Run Details window.
Table 104: Fields in the Batch Run Details window and their Descriptions
Field Description
Information Date: Select the Information Date from the drop-down list, which consists of recently executed Batch Information dates.
Monitor Refresh Rate: Specify the refresh rate, in seconds, at which the latest Batch status details have to be fetched. You can enter a value between 5 and 999 seconds.
NOTE: The default value of Monitor Refresh Rate is set to 10 seconds.
Batch Run ID: Select the Batch Run ID from the drop-down list, which consists of the Batch Run IDs with which the Batch has been executed.
3. Click Start Monitoring button in the Batch Run Details tool bar.
The state of the selected Batch is monitored and status is displayed in the following order:
The Batch Status pane displays the Batch Run ID with the Batch Status as Successful,
Failed, Held, or Not Started.
Successful- Batch execution is successful.
Failed- Batch execution failed. A notification mail is sent to all users mapped to the
user groups with the OPRMON role mapped to them. The mail will show the exact task
status as Not Run, Excluded, Held, Interrupted, Indeterminate and Cancelled.
Held- Batch execution is put on hold.
Not Started- Batch execution has not started.
The Task Details section displays the executed task details such as Task ID, Task
Description, Metadata Value, Component ID, Task Status and Task Log. Click View Log link
to view the View Logger window. You can select the checkbox adjacent to the Task ID to
view the task component execution details in Event Log section.
The Event Log section displays the list of errors and events of the Batch being executed.
The events are displayed in reverse chronological order, with the latest event displayed at the
top. The Event log consists of:
Message ID, which is auto generated.
Description, which has the error details.
Severity, which can be Fatal, Inform, or Successful.
Time, which indicates the time of the event.
4. In the Batch Run Details tool bar, you can do the following:
5. In the Event Log tool bar, you can click Export button to export the event log details to
Microsoft Excel file for reference.
To view the status of the required Batch, in the Batch Processing Report window:
1. Select the Information Date from the drop-down list. The list consists of executed Batch
Information dates in the descending order with the latest Batch Run details being displayed at
the top.
2. Select the required Batch Status from the drop-down list. The available batch statuses are:
ALL
Not Started
Ongoing
Complete
Failed
Cancelled
The window is refreshed and displays the status of each executed component of the selected
Batch with the Task ID, defined Parameters, and the Status.
See the following table to know the available Status Codes of the task and their description.
I Interrupted - Task has been interrupted since ICC server was down.
In the Batch Cancellation window, you can do the following before cancelling a Batch/Task:
• In the Refresh Interval section, you can define the required Refresh Rate in seconds to fetch the
current status of Batches being executed.
Click Refresh button to refresh the window and fetch the current status of Batches being
executed.
• In the Legend section, you can refer to the defined colors which are used to
indicate a particular state of a Task during Batch execution.
Indicates - On Going
Indicates - Successful
Indicates - Cancelled
2. Click Cancel Batch in the Batch Details tool bar. The selected Batch is cancelled from
processing and the results are displayed in a confirmation dialog. Click OK.
The Tasks associated with the cancelled Batch are also cancelled excluding the ongoing Tasks.
The cancelled Batch can be viewed in Restart and Rerun Batch list, within the Batch Execution
window.
2. Click Fetch Task Details in the Batch Details tool bar. The defined Task(s) are displayed in
the Task Details section.
NOTE The Cancel Task button will be disabled if you are not
mapped to TASKCANCEL function role.
The selected Task is cancelled from processing and the results are displayed in a confirmation dialog.
Click OK.
2. Click Abort Batch button in the Batch Details tool bar. The selected Batch is aborted from
processing and the results are displayed in a confirmation dialog. Click OK.
NOTE The Abort Batch button is disabled if you are not mapped
to OPRABORT function role.
The Tasks associated with the aborted Batch are also cancelled, including the ongoing Tasks. The
aborted Batch can be viewed in the Restart and Rerun Batch list within the Batch Execution window.
You should have Batch Read Only User Role mapped to your User Group to cancel a Batch.
The View Log window displays Task ID’s Information such as Component, Task Name, Task ID,
Process Type, Status, Start Date, End Date, Elapsed Time, User, Batch Run ID, As of Date, Process Step,
Records Processed, and Number of Errors for the respective Component Type selected.
Table 106: Fields in the Search View and Task window and their Descriptions
Field Description
Component Type: Select the Component Type from the drop-down list. The available
component types are listed, and based on the component type selected, the
Task ID details are displayed.
For example, if the component type is selected as Object Validation, then
the Task ID Information section displays the Date, Component, Batch Run
ID, and Task ID.
Note: No Log records are displayed for some component types such as SQL
Rules. This is a limitation.
As Of Date: Select the date using the Calendar. This field is not applicable for some
component types.
Folder: Select the folder from the drop-down list. This field is not applicable for
some component types.
• Select the required task from the Available Task list and move it to the selected list.
• You can also move a Task back to deselect it from the selected list.
• Click OK.
User: Enter the user details. This field is not applicable for some component types.
2. Click Search. The Task ID Information section displays the search results based on the
specified parameters.
NOTE There are differences in time stamp between View Log and
FSI_MESSAGE_LOG.
9.9 References
This section of the document consists of information related to intermediate actions that needs to be
performed while completing a task. The procedures are common to all the sections and are referenced
where ever required. You can refer to the following sections based on your need.
Property Description
Cube Parameter: Refers to the cube identifier as defined through the Business Metadata (Cube) menu option. Select the cube code from the drop-down list.
Operation: Select the operation to be performed from the drop-down list. The available options are ALL, GENDATAFILES, and GENPRNFILES.
Optional parameters: Refers to the additional parameter that has to be processed during runtime. You can specify the runsk value that should be processed as a runtime parameter during execution. By default, the value is set to “null”.
Field Description
Cube Parameter: Refers to the cube identifier as defined through the Business Metadata (Cube) menu option. Select the cube code from the drop-down list.
Operation: Refers to the operation to be performed. Select the required Operation from the drop-down list. The options are:
• ALL – This option will execute BUILDDB and DLRU.
• BUILDDB – This option should be used to build the outline in Essbase Cube. The
outline is built based on the parentage file(s) contents.
• TUNEDB – This option should be used to analyze data and optimize cube
settings. For example, if you are trying to achieve the best block size, where 64K
bytes is the ideal size.
• PROCESSDB – This option will execute BUILDDB and DLRU, and is same as All
Operation option. Selecting this option will internally assign as ALL.
• DLRU – This option should be used to Load Data in the Essbase Cube and trigger
a Rollup.
• ROLLUP – ROLLUP refers to populating data in parent nodes based on
calculations (E.g. Addition). This option should be used to trigger just the
ROLLUP option where in the CALC scripts are executed. The same is applicable
for DLRU option also.
• VALIDATE – This option will validate the outline.
• DELDB – This option will delete the Essbase cube.
• OPTSTORE – This option will create the Optimized outline for the cube.
Field Description
Source Name: Select the source from which the extract you want to execute is derived, from the drop-down list. Sources defined from the Source Designer window of Data Management Tools are displayed in the drop-down list.
Extract Name: Select the required extract name from the drop-down list. The list displays the Data Mapping definitions (T2F and H2F) defined on the selected source, from the Data Mapping window.
Default Value
Field Description
Load Mode: Select the load mode from the drop-down list. The options are Table to Table and File to Table.
Table to Table should be selected for Data Mapping definitions such as T2T, T2H, H2T, H2H, and L2H definitions.
File to Table should be selected for Data Mapping definitions such as F2T and F2H definitions.
Source Name: Select the required source on which the Data Mapping or Data File Mapping definition you want to execute is defined, from the drop-down list. Based on the selection of Load Mode, the list displays the corresponding sources.
File Name: Select the Data Mapping or Data File Mapping definition you want to execute, from the drop-down list. Based on the selected Load Mode and Source Name, the list displays the corresponding definitions.
Data File Name: The data file name refers to the .dat file that exists in the database. Specifying the Data File Name is mandatory when the Load Mode is File to Table and optional when the Load Mode is Table to Table. If the file name or the .dat file name is incorrect, the task fails during execution.
In case of L2H, you can specify the WebLog name.
Default Value: Used to pass values to the parameters defined in the Load Data definition.
You can pass multiple runtime parameters while defining a batch by specifying the values separated by a comma.
For example, $MIS_DATE=value,$RUNSKEY=value,[DLCY]=value and so on.
Note the following:
Field Description
Rule Name: Refers to the model that has to be processed. This is a system-generated code that is assigned at the time of model definition.
Operation: The ALL option for the Operation field conveys the process of extracting the data from the flat files and applying the run regression on the data extracted. For Batches that are being built for the first time, the data will be extracted from the flat files and the run regression will be applied on it.
Optional Parameters: Refers to the set of parameters specific to the model that has to be processed. This set of parameters is automatically generated by the system at the time of definition.
You must NOT define a Model using the Define mode under Batch Scheduling. You must define all models using the Modeling framework menu.
Field Description
Process Code: Displays the codes of the RRF Processes defined under the selected Infodom. Select the required Process from the drop-down list.
Sub Process Code: Displays the codes of the Sub Processes available under the selected Process. Select the required Sub Process from the drop-down list.
Build Flag: Select the required option from the drop-down list as “Yes” or “No”.
Build Flag refers to the pre-compiled rules, which are executed with the query stored in the database. While defining a Rule, you can use the Build Flag to speed up Rule execution by making use of the existing technical metadata details, so that the rule query is not rebuilt again during Rule execution.
A Build Flag status of “No” indicates that the query statement is formed dynamically by retrieving the technical metadata details. If the Build Flag status is set to “Yes”, then the relevant metadata details required to form the rule query are stored in the database on “Save” of a Rule definition. When this rule is executed, the database is accessed to form the rule query based on the stored metadata details, thus ensuring performance enhancement during Rule execution. For more information, see Significance of Pre-Built Flag.
Field Description
Optional Parameters: Refers to the set of parameters which would behave as filter criteria for the merge query.
Field Description
Rule Code: Displays the codes of the RRF Rules defined under the selected Infodom.
Build Flag: Select the required option from the drop-down list as “Yes” or “No”.
Build Flag refers to the pre-compiled rules, which are executed with the query stored in the database. While defining a Rule, you can use the Build Flag to speed up Rule execution by making use of the existing technical metadata details, so that the rule query is not rebuilt again during Rule execution.
A Build Flag status of “No” indicates that the query statement is formed dynamically by retrieving the technical metadata details. If the Build Flag status is set to “Yes”, then the relevant metadata details required to form the rule query are stored in the database on “Save” of a Rule definition. When this rule is executed, the database is accessed to form the rule query based on the stored metadata details, thus ensuring performance enhancement during Rule execution. For more information, see Significance of Pre-Built Flag.
Optional Parameters: Refers to the set of parameters which would behave as filter criteria for the merge query.
Property Description
DQ Group Name: Refers to the Data Quality Groups consisting of the associated Data Quality Rule definition(s). Select the required DQ Group from the drop-down list.
Rejection Threshold: Specify the Rejection Threshold (%) limit as a numeric value. This refers to the maximum percentage of records that can be rejected in a job. If the percentage of failed records exceeds the Rejection Threshold, the job will fail. If the field is left blank, the default value is set to 100%.
Additional Parameters: Specify the Additional Parameters as filtering criteria for execution in the pattern Key#Data type#Value;Key#Data type#Value;… and so on.
Here the Data type of the value should be “V” for Varchar/Char, “D” for Date in “MM/DD/YYYY” format, or “N” for numeric data. For example, if you want to filter some specific region codes, you can specify the Additional Parameters value as $REGION_CODE#V#US;$CREATION_DATE#D#07/06/1983;$ACCOUNT_BAL#N#10000.50;
Note: In case the Additional Parameters are not specified, the default value is fetched from the corresponding table in the configuration schema for execution.
Parameters: Comma-separated parameters where the first value is considered as the threshold percentage, followed by additional parameters which are a combination of three tokens. For example, “90”,”PARAM1”,”D”,”VALUE1”,”PARAM2”,”V”,”VALUE2”.
Note: The parameter “Fail if threshold is breached” is defaulted to “Yes” for RRF executions.
Optional Parameter: For DQ Rule execution on Spark, specify EXECUTION_VENUE=Spark in this field.
Note that you should have registered a cluster from the DMT Configurations > Register Cluster window with the following details:
• Name - Enter the name of the Hive information domain.
• Description - Enter a description for the cluster.
• Livy Service URL - Enter the Livy Service URL used to connect to Spark from OFSAA.
Field Description
Executable: Refers to the executable path on the DB Server. The Executable parameter contains the executable name as well as the parameters to the executable. These executable parameters have to be specified as they would be specified at a command line. In other words, the Executable parameter is the exact command line required to execute the executable file.
Enter the path to the executable in quotes if the executable name includes a space. In other words, the details entered here should look exactly as you would enter them in the command window while calling your executable. The parameter value is case-sensitive, so ensure that you take care of the spaces, quotes, and case. Also, commas are not allowed while defining the parameter value for the executable.
To pass parameters like $RUNID, $PHID, $EXEID, $RUNSK to the RUN EXECUTABLE component, specify RRFOPT=Y or rrfopt=y along with the other executable details.
Field Description
Wait: When the file is being executed, you have the choice to either wait till the execution is completed or proceed with the next task. Select Y (Yes) or N (No) from the drop-down list.
• Y - Select this if you want to wait for the execution to be completed.
• N - Select this if you wish to proceed.
If the task uses the FICGEN/RUN EXECUTABLE component and there is no precedence set for this task, then Wait should always be set to 'N'.
Batch Parameter: Select Y (Yes) if you want to pass the Batch parameters to the shell script file being executed.
• If Wait is selected as Y and Batch Parameter is selected as Y, the following parameters are passed to the executable:
NIL <BatchExeRunID> <ComponentId> <Task> <Infodate> <Infodom> <DatstoreType> <IPAddress>
• If Wait is selected as N and Batch Parameter is selected as Y, the following parameters are passed to the executable:
<BatchExeRunID> <ComponentId> <Task> <Infodate> <Infodom> <DatstoreType> <IPAddress>
Select N (No) if the Batch parameters should not be passed to the shell script. (See the illustrative sketch after this table.)
Optional Parameters: This field is considered only if you have specified RRFOPT=Y or rrfopt=y in the Executable field. Specify the optional parameters that you want to pass to the executable, for example, $RUNID, $PHID, $EXEID, $RUNSK.
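As an illustration of the parameter lists above, the following minimal Python sketch shows how an executable invoked by the RUN EXECUTABLE task could read the positional values; treating the executable as a Python script, and the function name, are assumptions made for this example only.

import sys

# With Batch Parameter = Y, the task appends the values listed above to the command
# line. With Wait = Y the first value is the literal "NIL", followed by BatchExeRunID,
# ComponentId, Task, Infodate, Infodom, DatstoreType, and IPAddress; with Wait = N
# the "NIL" placeholder is not sent.
def read_batch_parameters(argv):
    values = argv[1:]
    if values and values[0] == "NIL":  # Wait = Y
        values = values[1:]
    keys = ["BatchExeRunID", "ComponentId", "Task", "Infodate",
            "Infodom", "DatstoreType", "IPAddress"]
    return dict(zip(keys, values))

if __name__ == "__main__":
    print(read_batch_parameters(sys.argv))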
Field Description
Folder: Refers to the location where the SQL Rule definition resides. Click the drop-down list in the Value column to select the desired Folder.
SQL Rule Name: Refers to the defined SQL Rule. Click the drop-down list in the Value column to select the SQL Rule.
Field Description
Rule Name: Refers to the Data Transformation name that was defined in the Post Load Changes window of the Data Management Tools framework. Select the rule name from the drop-down list.
Parameter List: Note: Commas are used as delimiters for parameter values internally by the ICC Batch component. Ensure that commas are not used in any of the parameter values, that is, “a, b, c” should not be a parameter value in the list of parameter values being passed to the TRANSFORM DATA task. For example, if the parameter values to this task are required to be passed as (val1, val2, (a, b, c), val4), the correct way would be to pass these values as (val1, val2, (a*b*c), val4). You can use any other character as a separator.
Field Description
Variable Shock Code: Refers to the variable shock that has to be processed. This is a system-generated code that is assigned at the time of variable shock definition.
Operation: Refers to the operation to be performed. Click the drop-down list in the Value field to select the Operation. The available options are ALL, GENDATAFILES, and GENPRNFILES.
Optional Parameters: Refers to the Process ID and the User ID. Click in the text box adjacent to the Optional Parameters field and enter the Process ID and User ID.
Field Description
Object ID: Enter an object ID of your choice. This ID will appear as the Entity ID in the Process Monitor window.
Workflow: Select the workflow you want to execute from the drop-down list. It displays all the workflows defined in the Process Modeller window.
Field Description
Optional Parameters: Enter the value you want to pass to the Dynamic Parameters of the Run Task during the execution of the workflow.
10 Questionnaire
The Questionnaire is an assessment tool that presents a set of questions to users and collects the
answers for analysis and conclusion. It can be interfaced or plugged into OFSAA application packs. For
example, the Enterprise Modeling Framework (EMF) application pack. It is role and permission-based,
and you can create a library of questions and use the library to create a questionnaire.
The topics discussed in this guide are specific to end-users. However, if you are looking for
information on configuring the Questionnaire, see the Oracle Financial Services Analytical
Applications Infrastructure Administration User Guide.
Click Go to start a search and click Reset to clear the Search fields.
Table 120: Fields in the Basic and Advanced Search windows and their Descriptions
Field Description
Questions Library
ID: Enter the system-generated identifier for the question. This is a unique value.
Category: Select the category of classification for the question from the following options:
• External
• IT
• Infrastructure
Display Type: Select the type of user-interface element that is displayed, for example, drop-down, text field, and so on. The options are available based on the Question Type selected.
Status: Select the status of the question. For example, Draft, Open, and so on.
Last Modified From: Select the From date for the last update on the question to search in a date range.
Last Modified To: Select the To date for the last update on the question to search in a date range.
Questionnaire Library
Status: Select the status of the questionnaire. For example, Draft, Open, Pending, and In Review.
Last Modified From: Select the From date for the last update on the questionnaire to search in a date range.
Last Modified To: Select the To date for the last update on the questionnaire to search in a date range.
The window displays the list of defined Attributes. It also displays the OFSAA Application that is
interfaced to the Questionnaire module. For example, Financial Services Enterprise Modeling. Create,
modify, or delete Questionnaire Attributes from this window.
The following table describes the fields displayed on the Questionnaire Attributes Configuration
window.
Table 121: Fields in the Questionnaire Attributes Configuration window and their Descriptions
Field Description
Component: Note: For information on configuring components, see the Oracle Financial Services Advanced Analytical Applications Infrastructure Application Pack Administration and Configuration Guide.
Attribute Code: Displays the code of the attribute as entered in the Add Attribute window. Once defined, this code cannot be edited.
Attribute Name: Displays the name of the attribute as entered in the Add Attribute window.
Attribute Value: Displays the condition executed at run time to display attribute values used on the Create Questionnaire window.
Is Mandatory: Displays whether the attribute is mandatory or not. The values are Yes and No.
Last Updated: Displays the last updated date and time details for the attribute.
Selection Type: Displays the Attribute Selection Type as entered in the Add Attribute window.
Associated Questionnaires: Displays the number of Questionnaires that are linked to the Attribute and are in Open and Pending Approval status.
Search for existing questionnaire attributes based on the Component. For more information, see the
Use Search in the Questionnaire section.
2. Enter the details for the fields in the Add Attribute window.
The following table describes the fields in the Add Attribute window.
Table 122: Fields in the Add Attribute window and their Descriptions
Field Description
Attribute Selection Type: Select whether you want the attribute type to be a single-selection or multiple-selection type attribute.
(Headings for the field below the Attribute Type field): The options displayed for the field below the Attribute Type field are dynamic and vary based on the selection of the attribute type. You can find the details in the following list. Select from the following options:
• DropDown - selecting this attribute type displays a drop-down Dimension Source with options that list dimension tables acting as a source for the attribute being created. Select from the following options:
Attr Dim Single
Attributes Dimension Composite
Note: The preceding drop-down is displayed on the selection of DropDown as dimension and it is configurable. For information on configuring dimension tables, see the Oracle Financial Services Advanced Analytical Applications Infrastructure Application Pack Administration and Configuration Guide.
• SQL Query - selecting this attribute type displays a text field SQL Query where you have to enter a SQL Query to fetch the data for the attribute being created.
• Hierarchy - selecting this attribute type displays a drop-down Hierarchy Source with options that list hierarchy codes acting as a data source for the attribute being created.
• External - selecting this attribute type displays a text field Web-Service URL where you have to enter a Web-Service URL to fetch data for the attribute being created.
• Static - selecting this attribute type displays a drop-down Static Type with options that list static types to fetch data for the attribute being created. Select from the following options:
Is Default
Sign Off Type
Reassign Required
Is Confidential
Note: The preceding drop-down is displayed on the selection of Attribute Type as Static and it is configurable. For information on how to configure it, see the Oracle Financial Services Advanced Analytical Applications Infrastructure Application Pack Administration and Configuration Guide.
3. Click Save to save the questionnaire attribute or click Cancel to discard the changes and close
the window.
Edit the questionnaire attributes from this window. Follow these steps to edit a questionnaire attribute:
1. Select an Attribute from the Questionnaire Configuration window that you want to edit.
4. Click Save to save the edited questionnaire attribute or click Cancel to discard the changes and
close the window.
The window displays a list of defined Questions. Create, modify, copy, and delete Questions from this
window.
The following table describes the fields displayed on the Questions Library window.
Field Description
ID: Displays the system-generated identifier for the question. This is a unique value.
Category: Displays the category of classification for the question from the following options: External, IT, and Infrastructure.
Question Type: Displays the type of question from the following options: Single Choice, Multiple Choice, Free Text, Number, and Range.
Status: Displays the status of the question. For example, Draft, Open, and so on.
Last Modified: Displays the date and time for the last update on the question.
Search for existing questions based on ID and Question. For more information, see the Use Search in
the Questionnaire section.
Table 124: Fields in the Question Details window and their Descriptions
Field Description
Description: Enter more details in the description of the question that you are creating.
Category: Select the category of classification for the question that you are creating from the drop-down options. For example:
• External – the question is of the category external.
• IT – the question is under the IT category.
• Infrastructure – the question is in the infrastructure category.
Note: This field is optional and these options are an example from the OR application. This field can be configured in the table AAI_ABC_DIM_QTN_CATEGORY and its MLS table.
Question Type: Select the type of user-interface elements for the question that you are creating from the following drop-down options:
• Single Choice – select to create a single choice type of question.
• Multiple Choice – select to create a multiple choice type of question.
• Free Text – select to create a free text type of question.
• Number – select to create a type of question that requires a number input.
• Range – select to create a type of question that requires input in a defined range or a number input.
Note: When you select a Question Type option, details for the question type are displayed on the window. The instructions to enter the details are described in the following subsections:
• Select Question Type – Single Choice
• Select Question Type – Multiple Choice
• Select Question Type – Free Text
• Select Question Type – Number
• Select Question Type – Range
3. Click Save Draft to save the details, or click Submit if you have entered all details and are
ready to submit. Click Close to discard the changes and close the window.
2. Enter the details for the fields in the Question Details window.
The following table describes the fields in the Question Details window.
Table 125: Fields in the Question Details window and their Descriptions
Field Description
Display as Radio Buttons: Select this option to display the answer choices in radio buttons.
Static: Select this option to make either the drop-down or radio buttons display static answer choices.
After you select this option, you must enter the values that appear in the static fields. Enter these values in the Response Options form appearing below it. The following steps show the procedure to enter response options:
• Click Add Option and enter the answer choice in the text field. To delete an option, select the checkbox on the option row that you want to delete and click Delete Option.
• Similarly, you can add more options. These options will appear in the choice of answers in either a drop-down or radio button format as selected by you.
Field Description
Dynamic: Select this option to make either the drop-down or radio buttons display dynamic answer choices.
After you select this option, you are presented with various text fields and conditions options. Follow these steps to enter these values (see the illustrative sketch after this table):
1. Enter the Primary Column from the database to fetch the answer from. This could be the key.
2. Enter the Display Column from the database to display the answer in the drop-down or the radio buttons.
3. Enter the table name where the Primary Column and the Display Column exist in Reference Table.
4. Enter the filter criteria to apply to the table data being fetched to display in Filter Condition. This step is optional.
5. Click Validate to validate the query formed by these steps. On validation, the Preview Options drop-down appears.
6. Enter the Option Type Column name in the Advanced section. The value entered here appears in the Option Type Column in the Conditions section.
7. Click Add in the Conditions section and enter a name for the answer choice in the Name text field. Select a condition from the Condition drop-down. For example, Not Equal To. Enter the required data in the Option Value Type field. Select either Static or Dynamic from the Scope drop-down. If you select Dynamic, then you must enter a subquery to filter the options further. To delete a condition, select the checkbox on the condition row that you want to delete and click Delete.
8. Similarly, you can add more conditions. These conditions will appear in the choice of answers in either a drop-down or radio button format as selected by you.
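A minimal Python sketch of the lookup query implied by the Primary Column, Display Column, Reference Table, and Filter Condition entries is shown below; the column and table names are placeholders, and the exact query built by the product may differ.

# Build the query that previews the dynamic answer options.
def build_options_query(primary_col, display_col, ref_table, filter_cond=None):
    query = "SELECT {0}, {1} FROM {2}".format(primary_col, display_col, ref_table)
    if filter_cond:
        query += " WHERE {0}".format(filter_cond)
    return query

print(build_options_query("COUNTRY_CODE", "COUNTRY_NAME", "DIM_COUNTRY",
                          "ACTIVE_FLAG = 'Y'"))
# SELECT COUNTRY_CODE, COUNTRY_NAME FROM DIM_COUNTRY WHERE ACTIVE_FLAG = 'Y'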
3. Click Save Draft to save the details or click Submit if you have entered all details and are
ready to submit. Click Close to discard the changes and close the window.
Table 126: Fields in the Question Details window and their Descriptions
Field Description
Display as a Combo Box Select this option to display the multiple choice answers in a combo box list.
Static Select this option to make either the checkbox list or combo box display static answer choices.
After you select this option, you must enter the values that appear as the static choices. Enter these values in the Response Options form appearing below it. To enter response options, click Add Option and enter the answer choice in the text field. To delete an option, select the checkbox on the option row that you want to delete and click Delete Option.
Similarly, you can add more options. These options will appear in the choice of answers in either a checkbox list or combo box format as selected by you.
Dynamic Select this option to make the checkbox list or combo box display dynamic answer choices.
After you select this option, you are presented with various text fields and conditions options. Enter these values as described in the following steps:
1. Enter the Primary Column from the database to fetch the answer from. This could be the key.
2. Enter the Display Column from the database to display the answer in the checkbox list or the combo box.
3. Enter the table name where the Primary Column and the Display Column exist in Reference Table.
4. Enter the filter criteria to apply to the table data being fetched to display in Filter Condition. This step is optional.
5. Click Validate to validate the query formed by these steps. On validation, the Preview Options drop-down appears.
6. Enter the Option Type Column name in the Advanced section. The value entered here appears in the Option Type Column in the Conditions section.
7. Click Add in the Conditions section and enter a name for the answer choice in the Name text field. Select a condition from the Condition drop-down. For example, Not Equal To. Enter the required data in Option Value Type. Select either Static or Dynamic from the Scope drop-down. If you select Dynamic, then you must enter a subquery to filter the options further. To delete a condition, select the checkbox on the condition row that you want to delete and click Delete.
8. Similarly, you can add more conditions. These conditions will appear in the choice of answers in either a checkbox list or combo box format as selected by you.
3. Click Save Draft to save the details or click Submit if you have entered all details and are
ready to submit. Click Close to discard the changes and close the window.
Table 127: Fields in the Free Text pane and their Descriptions
Field Description
Display as Text Area Select this option to input the answer in a text area.
Question to be used while defining DT Logic? Select Yes or No to apply Decision Tree logic to the question.
3. Click Save Draft to save the details or click Submit if you have entered all details and are
ready to submit. Click Close to discard the changes and close the window.
Note that range values must not overlap. For example, you cannot define Range 1 with an upper limit of 100 and Range 2 from 100 to 200, since the upper limit of Range 1 (100) overlaps with the lower limit of Range 2 (100).
Follow these steps to add the details:
1. Click Range from Questions Type to display the Range section in the Question Details
window.
2. Enter the details for the fields in the Range pane.
The following table describes the fields in the Range pane.
Field Description
Display as Range of Values Select this option to display a drop-down list of range values for the answer. Define the range in the Add Option/Delete Option section.
Note: This option is selected by default.
Display as a Number Select this option to input the answer in number format.
Add Option/Delete Option for Range of Values Add options in this section for the Range of Values that you want to be available as the list of answers for the question.
To enter the range values, click Add Option and enter the range in the Lower Limit and Upper Limit fields. To delete an option, select the checkbox on the option row that you want to delete and click Delete Option.
Similarly, you can add more range value options. These options will appear in the choice of answers in a list of range values.
3. Click Save Draft to save the details or click Submit if you have entered all details and are
ready to submit. Click Close to discard the changes and close the window.
2. Click Edit to enable editing the question in the Questions Details window.
3. Enter the details for the fields in the Question Details window. See the Field Description table in
Create the Questions in the Library section for field details.
4. Click Update to save the modified question. Click Submit after you are ready to submit
the edited question. Click Close to discard the changes and close the window.
Follow these steps to copy a question and create a new question from the Questions Library window:
1. Click Select to select a Question from the Questions Library window.
Field Description
Status Displays the status of the questionnaire. For example, Draft, Open, and so on.
Last Modified Displays the date and time for the last modified action on the questionnaire.
Note: For more details on the Questionnaire, see the Define the Questionnaires section.
This window displays a list of existing Questionnaires. Create, modify, copy, and delete Questionnaires
from this window.
The following table describes the fields displayed on the Questionnaire Attributes Configuration
window
Table 130: Fields in the Questionnaire Attributes Configuration window and their Descriptions
Field Description
Last Modified Displays the date and time for the last update on the questionnaire.
Search for existing questionnaires based on ID and Name. For more information, see Use Search in
the Questionnaire section.
Table 131: Fields in the Questionnaire Details window and their Description
Field Description
3. Click Save Draft to create the Questionnaire and save the details.
4. After you have entered the details discussed in the preceding table, you must create sections
and link questions to the sections. For simplicity, the topic is discussed in subsections within
this section. Click Edit and see the following sections for instructions:
Creating a Section in a Questionnaire
Linking a Question to a Questionnaire
Configuring the Questions in a Section
Rearranging the Sequence of Sections and Questions
Delinking a Question to a Questionnaire
Attaching URLs to a Questionnaire Section
Editing a Section in a Questionnaire
Deleting a Section in a Questionnaire
Wrapping and Unwrapping Sections in a Questionnaire
5. Click Submit after you have entered all details and are ready to submit. Click Close to
discard the changes and to close the window. The Questionnaire moves from Draft to Pending
Approval status, and an approver has to approve to move it to Open status.
For more information, see Approving Questionnaires.
NOTE You can link only Questions that are in Open status.
1. Click Edit to enable editing the questionnaire in the Questionnaire Details window.
2. Click Link Question to display the Link Questions window. For more information on the
fields displayed on this window, see the Define Questions section
3. Click Select to select a Question from the Link Questions window.
4. Click Link to display a message pop-up window. Click OK to link the question to the
questionnaire. Click Close to close the window.
Field Description
□ (checkbox) Select and click Edit Linked Question to view and edit the Response
Options in a linked question.
Question Type Displays the type of user interface elements for the question from the following options:
• Single Choice
• Multiple Choice
• Free Text
• Number
• Range
Note: For more information, see the Creating Questions in the Library section.
Weightage Enter the comparative value to apply the weight function to the question. The sum of all the weight values should be 100. For example, if you have three questions A, B, and C, and you assign question ‘A’ a weight value of 35 and question ‘B’ a weight value of 45, then you will have to assign a weight value of 20 to question ‘C’.
Note: This field is displayed if you have selected the Type as Score Based. This field cannot be edited if you have linked Questions where the Question Type is either Free Text or Number.
Is Comment Required? Displays whether the question requires a comment for the answer. This option is selected by default; clear the selection if a comment is not required.
Note: This field is not displayed if the Questionnaire Type is a Decision Tree.
2. Click Edit Linked Question to view and edit the Response Options for a question.
The following table describes the fields in the Response Option.
Field Description
□ (checkbox) Select a response option from the list to perform various actions.
Response Options
Selected Logic Click the button to display the Show Logic window.
3. Click Save to save the entries, or click Close to close the response options section.
NOTE The section numbers are in the header rows below the section
names as shown in the following illustration:
Another option is to use the Up and Down buttons in the Sequence column. Click the buttons for the section that you want to move up or down.
2. Click Save Sequence to save the sequence rearrangement or click Close to discard and
close the window.
1. Click Edit to enable editing the questionnaire in the Questionnaire Details window.
2. Click Select to select a Question from the section.
3. Click Delink Question to display the delink confirmation pop-up window. Expand the
section if it is collapsed, to view the Delink Question at the top.
4. Click OK to delink the question or click Cancel to discard and close the pop-up window.
1. Click Edit to enable editing the questionnaire in the Questionnaire Details window.
2. Click Add URL to display the Add URL pop-up window. Expand the section if it is collapsed,
to view the Add URL at the top.
3. Enter the details for the fields in the Add URL pop-up window
The following tables describes the fields in the Add URL window.
Table 134: Fields in the Add URL window and their Descriptions
Field Description
Entity Type Select the type of entity that the URL is being linked to. The options are:
• Section
• Questions
Question Select the Question that the URL is to be linked to. This drop-down is enabled when you select Questions for Entity Type.
4. Click Save to add the URL and repeat the process to add another URL. Click Close when done.
The added URLs are displayed in the URL section. Attach URLs to the questionnaire here. Click
Attach URL(s) to attach URLs to the Questionnaire. To delete a URL, select a URL and
click Delete .
Follow these steps to attach a URL to a Questionnaire using the Attach URLs from the URL section:
1. Click Attach URL(s) from the URL section in the Questionnaire Details window. The Attach
URL pop-up window is displayed.
2. Enter the details for the fields in the pop-up window.
The following table describes the fields in the Attach URL window.
Table 135: Fields in the Attach URL window and their Descriptions
Field Description
Questionnaire Name Displays the name of the questionnaire. This is a read-only field.
3. Click Save to attach the URL and repeat the process to attach another URL. Click Close when
done. The added URLs are displayed in the URL section in the Questionnaire Details window. To
delete a URL, select a URL and click Delete .
1. Click Edit to enable editing the questionnaire in the Questionnaire Details window.
2. Click Edit Section . The section name field is active. Expand the section if it is collapsed, to
view the Edit Section button at the top.
3. Enter the change in the Section Name field and click Save Section to save the details.
4. Click Update to save the modified questionnaire. Click Submit after you are ready to
submit the edited questionnaire.
Click Close to discard the changes and close the window.
1. Click Edit to enable editing the questionnaire in the Questionnaire Details window.
2. Click Delete Section to display the delete confirmation pop-up window. Expand the section
if it is collapsed, to view the Delete Section button at the top.
3. Click OK to delete the section or click Cancel to discard and close the pop-up window.
5. Click Edit and update the Questionnaire, if required. Click Approve to approve and move
the Questionnaire to Open status. Click Reject if you have to recommend changes. The
Questionnaire moves into the Draft status and goes back to the user’s view in the
Questionnaire Library.
2. Click Edit to enable editing the questionnaire in the Questionnaire Details window.
3. Enter the details for the fields in the Questionnaire Details window.
See the field description table in Creating the Questionnaire in the Library section for field
details.
4. Click Update to save the modified questionnaire. Click Submit after you are ready to
submit the edited questionnaire. Click Close to discard the changes and close the window.
4. Click Update to save the modified questionnaire. Click Submit after you are ready to
submit the edited questionnaire. The Questionnaire moves to the Open status if there’s no
approval required. However, if approval is required, then the Questionnaire moves to Pending
Approval status. See Approving Questionnaires for more details. Click Close to discard the
changes and close the window.
NOTE Oracle recommends creating a "SMS Auth Only" user from the
User Maintenance window for the service account rather than
using SYSADMN.
You (System Administrator) need to have full access rights to ftpshare folder with appropriate User ID
and password to add and modify the server details.
Click from the header to display the Administration tools in Tiles menu. Click System
Configuration from the Tiles menu to view a submenu list. Click Configure Database Server to view
the Database Server Details window.
By default, the Database Server Details window displays the pre-configured database server details. In
order to add or modify the database server details, you need to ensure that:
• The FTP/SFTP service should be installed on the Web/Application and DB Server.
• The FTP/SFTP ID for Web/App and DB server has to be created through the Computer
Management option under Administrative Tools for all the installations other than UNIX
installations.
• This user should belong to the administrator group.
• The FTP/SFTP password for Web/App and DB server needs to be specified in the Computer
Management option under Administrative Tools. Also, the Password Never Expires option has
to be checked.
• If the User enters an incorrect username, password, FTP Share and/or Port and clicks Save, the
following alert message is displayed.
Password or ShareName incorrect on XXXXXXXXXXXXXXXXXXXcom on port X2
NOTE A few of the fields in Database Server details are auto-populated based on the options specified during application installation and are not editable.
The following table describes the fields in the Database Server Details window.
Table 136: Fields in the Database Server Details window and their Descriptions
Field Description
FTP/SFTP/LOCAL FTP refers to the transfer of files such as metadata and staging files from one server to another. SFTP refers to secure FTP for transfer of files from one server to another. LOCAL is selected to transfer files within the same server.
Note the following:
• The FTP/SFTP option specified during setup is auto-populated and is not editable.
• The FTP/SFTP information should be created manually, prior to entering the details. The application validates the information ensuring that the value in FTP/SFTP and Host DB is not blank.
• When there is a change to the FTP/SFTP path, the old files should be physically moved to the new path. The system ensures that all new files are generated/transferred into the new path.
• The Radio Button LOCAL is available on OFSAAI 8.0.6.1.0 and later release versions.
• The FTP of the Database Server, Application Server, and the Web Server must be the same. For example, if you select SFTP for the Database Server, repeat the same selection for the Application Server and the Web Server too.
• At any time, if you modify the existing FTP selection, ensure that you resave so that the changes take effect.
The following table describes the fields in the Technical Metadata and Business Metadata tabs.
Table 137: Fields in the Technical Metadata and Business Metadata and their Descriptions
Field Description
Password Enter the password that the administrator specified for the FTP/SFTP user ID.
Note: The password is represented by asterisks (*) for security reasons.
Table 138: Fields in the Security Details tab and their Descriptions
Field Description
Security User ID Enter the user ID which has the same user rights as the user who installed Infrastructure. The Application server validates the database user ID/password against the database server(s) for connection purposes.
Security Password Specify the password for the user who would be accessing the security share name. The password is represented by asterisks (*) for security reasons.
Security Share Name Enter the path locating the DB components installation folder which has been specified by the user who installed the Infrastructure system. For example: D:\Infrastructure
By default the Application Server Details (Server Master) window displays the pre-configured
application server details in the View mode.
The Application Server Details window is displayed in the Add mode when accessed for the first time
during the installation process to enter the application server setup details. Subsequently the window
is displayed in View mode providing option to only update the defined application server details.
If the User enters an incorrect username, password, FTP Share and/or Port and clicks Save, the
following alert message is displayed.
Password or ShareName incorrect on XXXXXXXXXXXXXXXXXXXcom on port X2
NOTE The data in some of the fields is auto-populated with the pre-defined Application Server details. Ensure that you edit only the required fields.
The following table describes the fields in the Application Server Details window.
Table 139: Fields in the Application Server Details window and their Descriptions
Field Description
3. Enter the FTP details in the Technical Metadata, Business Metadata, and Staging Area tabs as
tabulated. The Technical Metadata tab is selected by default and the details specified here are
replicated as default values to Business Metadata, and Staging Area tabs.
The following table describes the fields in the Technical Metadata, Business Metadata, and Staging
Area tabs.
Table 140: Fields in the Technical Metadata, Business Metadata, and Staging Area tabs and their
Descriptions
Field Description
Drive Specify the new physical path of the FTP/SFTP shared directory/drive. For example: e:\dbftp\
Password Enter the password which is the same as the password specified for the SFTP user ID by the administrator. The password is represented by asterisks (*) for security reasons.
Table 141: Fields in the Web Server Details window and their Descriptions
Field Description
Servlet Port Specify the web server port number. For example: 21
Local Path Specify the local path (location) where the static files need to be copied in the primary server. For example: e:\revftp\
The static files, such as Infrastructure OBIEE reporting server pages, are copied to the specified location.
Note: The web server Unix user must have read/write privileges on the Local Path directory. If not, contact your system administrator.
Protocol Select the protocol as either HTTP or HTTPS from the drop-down list.
Infrastructure supports FTP/SFTP into the Web Server and streaming of files. In case FTP/SFTP is not allowed on a Web Server due to security reasons, the system can stream the data across Web Servers so that the client need not compromise on their security policy.
3. (Optional) If you have selected the FTP Enabled checkbox, you can specify the Drive, Port
Number, and user details in the FTP details pane. Select the option as either FTP, SFTP or
LOCAL and enter the other details as tabulated.
The following tables describes the fields in the FTP Details pane.
Field Description
FTP Details
PublicKey Auth Enter details for Private Key Path and Passphrase.
Private Key Path Enter the Private Key Path that is used to perform the FTP/SFTP in the database server. This is a mandatory field.
Passphrase Enter the passphrase to access the database server for FTP/SFTP.
You can view the various databases defined for the database server. The Database Master window
allows you to add a new database and modify the existing ones.
Table 143: Fields in the Database Master window and their Descriptions
Field Description
Name Enter the database name. Ensure that there are no special characters or extra spaces.
Note that, for an Oracle database, the TNS (Transparent Network Substrate) database name should be the same as the SID.
The Name should not exceed 20 characters.
Schema Name Enter the Schema name for the database. Use either all lowercase or all uppercase letters; mixed-case schema names are not supported.
Auth Type Select the authentication type from the drop-down list. Based on the Database you have selected, the drop-down list displays the supported authentication mechanisms.
Select Default for DB2UDB, ORACLE, and MSSQL databases.
Connection Details This field is not applicable for HIVE DB with Auth Type as Default.
Select the Alias name (connection) used to access the database from the drop-down list.
TNS Entry String This field is applicable only for ORACLE DB with Auth Type as Default.
TNS is the SQL*Net configuration file that defines the database address used to establish the connection.
Enter the TNSNAME created for the Information Domain.
Date Format Enter the date format used in the Database server. You can find this in the nls_date_format entry for the database. This date format will be used in all the applications using date fields.
JDBC Connection String The default JDBC Connection String is auto-populated based on the database type selected. This is the JDBC (Java Database Connectivity) URL configured by the administrator to connect to the database.
• For ORACLE DB type it is jdbc:oracle:thin:@<<DB Server Name>>:<<Port Number>>:<<Oracle SID>>
• For MSSQL DB type it is jdbc:microsoft:sqlserver://<<DB Server Name>>:<<Port Number>>
• For DB2 DB type it is jdbc:db2://<<DB Server Name>>:<<Port Number>>/<<Database Name>>
• For HIVE DB type, it is jdbc:hive2://<<DB Server Name>>:10000/default
You need to specify the appropriate details corresponding to the information suggested in brackets. For example, in ORACLE DB you can specify the Port number as 1521 and the SID as ORCL.
JDBC Driver Name The default JDBC Driver Name is auto-populated based on the database type selected.
• For ORACLE DB type it is oracle.jdbc.driver.OracleDriver.
• For MSSQL DB type it is com.microsoft.jdbc.sqlserver.SQLServerDriver.
• For DB2 DB type, it is com.ibm.db2.jcc.DB2Driver.
• For Hive with Auth type as Kerberos with Keytab, it is com.cloudera.hive.jdbc4.HS2Driver.
In case of modification, ensure that the specified driver name is valid since the system does not validate the Driver Name.
Multiple data sources pointing to different Hive servers are not supported.
KERBEROS KDC Name This field is applicable when the Authentication Type selected is KERBEROS. Enter the name of the Kerberos Key Distribution Center (KDC).
KERBEROS REALM Name This field is applicable when the Authentication Type selected is KERBEROS. Enter the name of the Kerberos Realm file.
NOTE When the database date format is modified, it does not get auto-updated. You need to manually update the NLS_DATE_FORMAT database parameter and restart the DB (see the sketch below). Also, the to_date function translation is not performed during the data load.
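As a sketch only, assuming a DBA account and an SPFILE-based instance, the current format can be checked and a new value set along the following lines; verify the exact approach with your DBA before making the change.
-- Check the date format currently recorded for the database.
SELECT value
FROM   nls_database_parameters
WHERE  parameter = 'NLS_DATE_FORMAT';

-- Set a new format in the SPFILE; it takes effect after the database restart noted above.
ALTER SYSTEM SET NLS_DATE_FORMAT = 'MM/DD/YYYY' SCOPE = SPFILE;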
Once you have updated all the required information, click Save to save the Database Details.
By default the OLAP Details window displays the pre-configured server details specified during the
installation.
Table 144: Fields in the OLAP Details window and their Descriptions
Field Description
Type Select the OLAP database type from the drop-down list. The available options are:
• SQLOLAP
• ESSBASE
• EXPRESS
• DB2OLAP
• ORACLE
Note the following while selecting the OLAP DB type:
• By selecting ESSBASE and DB2OLAP, you need to specify different user IDs and passwords for Cube Creation and Cube Viewing to avoid locking of the cube when the cube is being built.
• By selecting SQLOLAP and EXPRESS, you need to specify one set of user ID and password common for both Cube Creation and Cube Viewing.
• By selecting ORACLE, you need not specify a user ID and password for Cube Creation and Cube Viewing.
Multiple OLAP types can be installed on the same server and configured in OFSAAI.
3. Specify the User ID and Password in the For Cube Creation section, based on the selected
OLAP DB Type. Ensure that User ID should not have any special characters or extra spaces and
it should not exceed 16 characters.
For SQLOLAP, the User ID should be created in Microsoft Windows with appropriate
privileges for cube creation.
For EXPRESS, the User ID should be created in EXPRESS with appropriate privileges for cube
creation.
4. Specify the User ID and Password For Cube Viewing, based on the selected OLAP DB Type.
Ensure that there are no special characters and extra spaces.
Enter the FIV User ID to view the cube. If ESSBASE is selected as the database type, the cube
can be viewed in OBIEE reporting server.
5. Click Save to save the OLAP Details.
NOTE The Instance Access Token feature is available from OFS AAI
v8.0.8.3.0 and later versions.
• Ability to generate the Instance Access Token multiple times, if previous token is misplaced or
lost.
• Endpoint to generate the Unique Transaction Token requests based on the input of Instance
Name and Instance Access Token.
3. Click Add.
The token expiry time can be configured in the Configuration UI. Specify the expiry limit of the token
in the API token validity in seconds field. By default, the One-Time Token validity is set to one hour.
3. Search for the required application domain for which you want to switch the authentication scheme, and click Name from the search results to display the details for the application domain.
The Search Results are displayed. The REST APIs required for OFSAA are highlighted as shown in the figure.
6. Click the Edit icon.
7. Modify the Protection Level from Protected to Excluded.
To enable token-based authentication for REST APIs, rather than basic authentication, you must change the Protection Level from Protected to Excluded.
8. Click Apply to save.
The GET API generates a One-Time Access Token as response in the JSON format as follows:
API call: /rest-api/auth/v1/token:
Response:
{
"token_type": "Bearer",
"expires_in": 3600,
"token":
"eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJqdGkiOiI5ZDljZWU4YS0zOGJmLTRkMjMt
OTU1ZC1kMTU5ODA2YTk5NzciLCJpc3MiOiJSU19RVE4iLCJhdWQiOiJPRlNBQSIsInN1YiI6Il
JTX1FUTiIsImlhdCI6MTYwNDk4NzU1OCwiZXhwIjoxNjA0OTkxMTU4fQ.WcxtP3A0NJa4U5bjD
_D8GQzzMd77pI4woW2Of11bxNMXnGM8jJUEI6msD81wayfs7Oemimv6SR4PGgln6xT_ylLXIcL
5qgSBqHifY-
Jb325gvKEMwize97SDEmLNhxz9x9dB5xvUguKIZsXz7CGK1aY8HPTdM4IZBZLHHccJIvgf0arE
3EeZtURdaycT9RbPYZvyyFW-ODK-NKSWATnbCmLVb-
CDZjcaO5KToX_ZXQIOmerWz2Wcj0wS8khceNq_zw-2O5cSAFrH15W0uyDWNLJd-
giT7sAXBi3oChxQ4Ms1qM7IB9xdVw44t0VGWrZfr5C-Yq3BGpkH_qix8R_r_A"
}
To invoke your REST API using the bearer token, refer to the following sample:
Curl Command for logging in the REST API access through bearer token
curl --location --request POST 'http://whf00pfs:8092/ofsa/rest-
api/idm/service/login' \
--header 'Authorization: Bearer
eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJqdGkiOiI5ZDljZWU4YS0zOGJmLTRkMjMtO
TU1ZC1kMTU5ODA2YTk5NzciLCJpc3MiOiJSU19RVE4iLCJhdWQiOiJPRlNBQSIsInN1YiI6IlJ
TX1FUTiIsImlhdCI6MTYwNDk4NzU1OCwiZXhwIjoxNjA0OTkxMTU4fQ.WcxtP3A0NJa4U5bjD_
D8GQzzMd77pI4woW2Of11bxNMXnGM8jJUEI6msD81wayfs7Oemimv6SR4PGgln6xT_ylLXIcL5
qgSBqHifY-
Jb325gvKEMwize97SDEmLNhxz9x9dB5xvUguKIZsXz7CGK1aY8HPTdM4IZBZLHHccJIvgf0arE
3EeZtURdaycT9RbPYZvyyFW-ODK-NKSWATnbCmLVb-
CDZjcaO5KToX_ZXQIOmerWz2Wcj0wS8khceNq_zw-2O5cSAFrH15W0uyDWNLJd-
giT7sAXBi3oChxQ4Ms1qM7IB9xdVw44t0VGWrZfr5C-Yq3BGpkH_qix8R_r_A'
By default the Information Domain Maintenance window displays the pre-configured Information
Domain details and allows you to add, modify, and delete Information Domains.
Figure 250: Fields in the Information Domain Details pane and their Descriptions
Field Description
Name Enter the name of the Information Domain. Ensure that the name specified is a minimum of 6 characters long and does not contain any special characters or extra spaces.
Is authorization required for Business Metadata? Select the checkbox if user authorization is required to access Business Metadata.
Is this Staging Information Domain? Select the checkbox if you are creating a Staging/Temporary Information Domain.
Table 145: Fields in the Database Details for DB Server pane and their Descriptions
Field Description
Database Server Select the database server from the drop-down list. The list contains all the defined database servers.
Database Name Select the database name from the drop-down list. The list contains all the database names contained within the server.
OLAP Server Select the OLAP server from the drop-down list. The list contains all the servers defined in OLAP Details.
OLAP Type Select OLAP Type from the drop-down list. The available options are:
• ESSBASE
• ORACLE
• SQLOLAP
4. Click Next.
5. Specify the file location path of erwin, Log, and Scripts file on the application server. For
example, an erwin file path could be /oracle/app73/ftpshare/<infodom>/erwin.
erwin file stores TFM and Database Model XML files.
Log file stores the Log data for all the Backend and Front-end components.
Script file stores Table Creation scripts.
6. Specify the file location path of erwin, Log, and Scripts file on the database server.
For example, an erwin file path could be /home/db73/ftpshare/<infodom>/erwin.
The specified database and application server details are mapped to the Information Domain. The consolidated data is stored in the DSNMASTER table in the Config Schema database (a query sketch follows these steps).
7. Select the Meta Database Server from the drop-down list. This is the database server of the
Metadom Schema.
8. Enter the Database Name of the Metadata Schema.
9. Click Save to save the Information Domain details.
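If you want to review the consolidated entry after saving, a simple query such as the sketch below can be run against the Config Schema. The DSNMASTER table name is taken from the note above; its exact columns depend on your installation.
-- Review the consolidated Information Domain mapping stored in the Config Schema.
SELECT *
FROM   DSNMASTER;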
After creating the Information Domain successfully, add a persistence unit entry and replace $JNDI_KEY_FOR_SERVER_TYPE in the GRCpersistence.xml file present in the $FIC_WEB_HOME/webroot/WEB-INF/classes/META-INF folder.
The value for JNDI_KEY_FOR_SERVER_TYPE varies based on the web server type.
Similarly, add a persistence unit entry to the persistence.xml file present in the $FIC_DB_HOME/conf/META-INF folder.
On creating an Information Domain, a list of objects is created using the script files.
NOTE You need to manually drop the Atomic Schema/objects in the schema upon deletion of the INFODOM.
11.1.11 Configuration
Configuration is the process of defining the System Accessibility Components of an Information
System. Configuration in the System Configuration Section enables you (System Administrator) to
define and maintain the User Accessibility Details within the Infrastructure System.
You (System Administrator) must have the SYSADM Function Role mapped to your role to access and
modify the Configuration details. Click Administration from the Header to display the
Administration Tools in a Tiles Menu. Click System Configuration from the Tiles Menu to view a
submenu list and click Configure System Configuration to view the Configuration Window.
Alternatively, you can click the Navigation Button to access the Navigation List. Click System
Configuration, and click Configure System Configuration to view the Configuration Window.
The Configuration Window consists of the sections: General Details, Guest Login Details, Optimization,
and Others. By default, the General Details Window is displayed with the pre-configured details of the
Server and Database that you are currently working on and allows you to modify the required
information.
Field Description
Number of invalid logins This field is not applicable if you select the SSO Enabled check box.
Enter the number of attempts permitted to a user to enter incorrect passwords, after which the user account will be disabled.
Session Timeout Value (in minutes) Enter the permitted duration of inactivity after which the session will be automatically timed out and the user will be requested to log in again.
Note the following:
• The session timeout value should be at least 10 minutes.
• The session timeout depends on the specified Session Timeout Value and the web server's internal session maintenance. It may vary for different web servers.
• If SSO authentication is selected, ensure you set the Session Timeout Value equivalent to the configured server session time to avoid improper application behavior after the session expires.
Session Timeout Popup Interval (in minutes) Enter the time in the session at which a popup must appear and display a timer that shows the time remaining for the session to end.
For example, if you enter 50 minutes for the Session Timeout Value and 5 minutes for the Session Timeout Popup Interval, the popup appears on the screen after 45 minutes of inactivity and displays the timer (starting from 5 minutes and ending at 0) for the session timeout.
Environment Details Enter the System Environment Details such as Development, UAT, Production, and so on. The information is displayed in the Application’s top banner as the “In Setup” information.
SSO Enabled Select this check box to enable SSO Authentication & SMS Authorization.
Note: If SSO is enabled, then you must configure the SSO URL for Referer Header Validation.
For more information, see the Configure Referer Header Validation Section in the OFSAA Security Guide.
Enable native authentication for REST API Select to enable Token-based Authentication for the REST APIs to authenticate the native password.
For more information, see the Using REST APIs for User Management from Third-Party IDMs Section in the Oracle Financial Services Advanced Analytical Applications Infrastructure Administration Guide.
Identity Provider URL
• Enter the fully qualified domain URL used to access the Identity Provider.
• This is an optional field and is only required if the IDP URLs for login and logout are different. If this field is not configured, then the “Identity Provider URL” will be used for both login and logout requests.
• The following is an example for IDCS: https://<IDCS_URL>/fed/v1/idp/sso
Authentication Type Select the required authentication type from the drop-down list. The options are:
• SMS Authentication and Authorization
JIT Provisioning Enabled Select to enable Just in Time (JIT) provisioning, which synchronizes the User, Group, and User-Group mapping in external systems such as LDAP, SAML, and SSO into OFSAA when a user logs in.
NOTE:
• JIT Provisioning is available on 8.1.1.2.0 and later versions. However, to enable it in the 8.1.1.1.0 version, apply the 33067589 One-Off Patch from My Oracle Support.
• JIT Provisioning is available on 8.1.2.0.0 and later versions. However, to enable it in the 8.1.2.0.0 version, apply the 34019691 One-Off Patch from My Oracle Support.
• JIT Provisioning is available on the 8.1.2.1.0 version and further Maintenance Releases.
• Update the Group Domain Mapping in OFSAA when you create it in LDAP, SAML, or SSO.
• Configure the User Group Details in the LDAP Group Details Section if you select LDAP.
• For SAML, configure the attribute name user_groups for IDCS.
• For SSO, send the mapped groups in the header with the user_groups key.
• For SAML, configure the following attributes:
user_groups
user_email
user_name
• For SSO, configure the following headers:
user_groups
(To add more than one User Group, specify the User Groups separated by commas.)
user_email
user_name
Enable JIT Unmapping Operation Select to enable the unmap operation of the User Groups from the External System to OFSAA during login.
Before you select this check box in the UI, ensure that the JIT Provisioning Enabled check box is selected to establish a connection with the External System.
NOTE:
• JIT Provisioning is available on 8.1.1.2.0 and later versions. However, to enable it in the 8.1.1.1.0 version, apply the 33067589 One-Off Patch from My Oracle Support.
• JIT Provisioning is available on 8.1.2.0.0 and later versions. However, to enable it in the 8.1.2.0.0 version, apply the 34019691 One-Off Patch from My Oracle Support.
• JIT Provisioning is available on the 8.1.2.1.0 version and further Maintenance Releases.
Enable Group Creation during JIT Provisioning
NOTE: This is not a field in the UI; it is a Parameter added to the Configuration Table in the database.
Before you perform this operation in the database, ensure that the JIT Provisioning Enabled check box is selected to establish a connection with the External System.
Set the JIT_IS_GRP_CRT_ENABLED Parameter Value to Y in the Configuration Table in the database to enable the creation of Groups during JIT Provisioning. The default value is N.
After setting the value to Y, commit and restart the servers. (A SQL sketch is provided after this table.)
NOTE:
• JIT Provisioning is available on 8.1.1.2.0 and later versions. However, to enable it in the 8.1.1.1.0 version, apply the 33067589 One-Off Patch from My Oracle Support.
• JIT Provisioning is available on 8.1.2.0.0 and later versions. However, to enable it in the 8.1.2.0.0 version, apply the 34019691 One-Off Patch from My Oracle Support.
• JIT Provisioning is available on the 8.1.2.1.0 version and further Maintenance Releases.
Allow Data Redaction Select the check box to enable Data Redaction. For more details, see the section Data Redaction in the OFS AAI Administration Guide.
Encrypt Login Password This field is not applicable if you have selected the SSO Enabled check box.
Select the check box to encrypt the login password for more protection.
NOTE: For LDAP Authentication and SMS Authorization, this check box should not be selected.
Enable CSRF Protection Select this check box to enable protection against Cross Site Request Forgery (CSRF) in the application.
Hierarchy Security Type Select the hierarchy security node type from the drop-down list. The available options are:
• Group-Based Hierarchy Security
• User-Based Hierarchy Security
Depending on the selection, the user/group details are displayed in the Hierarchy Security Window.
Allowed Email Domains Enter the email domains that you want to allow. Enter multiple domains as comma-separated values if you want to allow more than one domain.
For example: oracle.com, oci.oracle.com
During User Creation in the User Definition (add mode) Window, you can add only Email IDs that belong to the allowed domains.
Dormant Days This field is not applicable if you have selected the SSO Enabled check box.
Enter the number of inactive days permitted after which the user is denied access to the system.
Inactive Days This field is not applicable if you have selected the SSO Enabled check box.
Enter the number of inactive days permitted after which the user access permissions are removed and the delete flag status is set to “Y”.
Ensure that the number of Inactive Days is greater than or equal to Dormant Days.
Note that the user details still exist in the database and can be revoked by changing the status flag.
Working Hours This field is not applicable if you have selected the SSO Enabled check box.
Enter the working hours (From and To) to restrict the user from logging in to the system within the specified time range. The time is specified in 24-hour hh:mm format.
Frequency of Password Change This field is not applicable if you have selected the SSO Enabled check box.
Enter the number of days after which the login password will expire and the user will be navigated directly to the Change Password Window.
Password History This field is not applicable if you have selected the SSO Enabled check box.
Enter the number of instances of old passwords to be maintained; the user is restricted from reusing those passwords. A maximum of the last 10 passwords can be recorded.
Password Restriction This field is not applicable if you have selected the SSO Enabled check box.
Select one of the following options:
• Restricted - To impose additional rules and parameters for users while defining a password.
• Un Restricted - To allow users to define any password of their choice, ensuring that the password is alphanumeric without any special characters.
Disclaimer Text Enter any disclaimer information that you want to make available for the users of the application on the Login Window.
Security Question Enable Select to enable security questions that users would have to answer before they can reset their passwords. This feature enhances user authenticity validation. Enter information for the following fields:
• Question 1 – Enter the first question to be displayed on the Password Reset Page.
• Answer 1 – Enter the answer to the first question.
• Question 2 – Enter the second question to be displayed on the Password Reset Page.
• Answer 2 – Enter the answer to the second question.
• Question 3 – Enter the third question to be displayed on the Password Reset Page.
• Answer 3 – Enter the answer to the third question.
The following illustration is an example:
Figure 251: Security Question Enable Pane
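Returning to the Enable Group Creation during JIT Provisioning parameter described earlier in this table, the following is a minimal SQL sketch of the database change, assuming the OFSAA config schema stores parameters in a CONFIGURATION table with PARAMNAME and PARAMVALUE columns; confirm the table and column names in your environment before running it.
-- Enable group creation during JIT provisioning (run in the config schema).
UPDATE CONFIGURATION
SET    PARAMVALUE = 'Y'
WHERE  PARAMNAME  = 'JIT_IS_GRP_CRT_ENABLED';

COMMIT;
-- Restart the OFSAA servers after the commit, as noted in the field description.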
This feature allows you to configure and maintain multiple LDAP servers in the OFSAA instance. You
can add a new LDAP server, modify/ view LDAP server details, and delete an existing LDAP server.
The LDAP Server Details window displays the details such as ROOT Context, ROOT DN, LDAP URL,
LDAP SSL Mode, and LDAP Server name.
To add a new LDAP Server
1. Select LDAP Authentication & SMS Authorization from the Authentication Type drop-down
list in the General Details tab, the LDAP Server Details window is displayed.
2. Click button in the toolbar. The LDAP Server Details window is displayed.
Table 147: Fields in the LDAP Server Details window and their Descriptions
Field Description
LDAP URL Enter the LDAP URL from which the system authenticates the user.
For example, ldap://hostname:3060/.
LDAP SSL Mode Select the checkbox to enable LDAP over SSL to ensure encryption of user credentials when transferred over a network.
ROOT Password Enter the LDAP server root password for authentication.
User Search Base Enter the full path of the location of the active directory in the LDAP server from which to start the user search. This is a comma-delimited parameter.
For example, cn=User,dc=oracle,dc=com
User Search Filter Enter search filters to limit the user search for the results obtained from ‘User Search Base’. For example, objectclass=organizationalPerson.
User Filter Classes Enter a user search filter to include specific user groups. For example, enter ‘top’ for the search to access groups up to the top level in the directory.
Login ID Attribute Specify the login ID attribute (user name) to be used in the system for users. For example, enter ‘cn’ to use the common name as the login ID attribute.
Login Name Attribute Specify the attribute that maps to the Login ID. This is used for authentication purposes. For example, ‘sn’ maps to ‘cn’.
User Start Date Enter the attribute that stores the user-account start-date information. For example, ‘orclActiveStartDate’ contains the start dates of all users.
User End Date Enter the attribute that stores the user-account end-date information. For example, ‘orclActiveEndDate’ contains the end dates of all users.
Group Search Base Enter the full path of the location of the active directory in the LDAP server from which to start the group search. This is a comma-delimited parameter.
For example, cn=Groups,dc=oracle,dc=com
Group Search Filter Enter search filters to limit the group search for the results obtained from ‘Group Search Base’. For example, objectclass=groupOfNames.
Group Member Attribute Enter a member attribute listed for the Groups. For example, ‘member’.
Group ID Attribute Enter the attribute that identifies the group name. For example, ‘cn’.
Group Name Attribute Enter the attribute that specifies the full name of the group. For example, description.
4. Click Save.
When a business user accesses OFSAA login window where multiple LDAP servers are
configured in the OFSAA instance, the LDAP Server drop-down list is displayed. If the user
selects an LDAP server, he will be authenticated only against the selected LDAP server. If the
user does not select any LDAP server, he will be authenticated against the appropriate LDAP
server.
NOTE SYSADMN/ SYSAUTH/ GUEST users need not select any LDAP
server as they are always authenticated against SMS store.
Additionally, if a specific user is marked as “SMS Auth Only” in
the User Maintenance window, then that user is authenticated
against the SMS store instead of the LDAP store even though
the OFSAA instance is configured for LDAP authentication. The
user has to enter password as per SMS store.
In case of any errors, the mapped users will not be able to log in to the application, and you may need to correct the details by logging in to the system as SYSADMN.
For System Users:
• You can access OFSAAI Application using <Protocol (http/https)>://<IP/
HOSTNAME>:<SERVLET PORT>/<CONTEXT NAME>/direct_login.jsp.
• You have to select the appropriate user id from the drop-down list.
For Application Users:
• The Login Page will be their respective SSO Authentication Page.
• After successful login, you can change your locale from the Select Language link in the
application header of the Landing Page. Move the pointer over the link and select the
appropriate language from the listed languages. Based on the locales installed in the
application, languages will be displayed.
• The Change Password link will not be available in the application header.
The following table describes the fields in the Guest login tab.
Table 148: Fields in the Guest login tab and their Descriptions
Field Description
Guest Password You can select the Guest Password as one of the following from the drop-down list only if you have enabled Guest Login:
Required - Guest users need to specify a password to log on.
Not Required - Guest users can log on directly.
Guest Password You can specify the Guest Password only if you have selected the previous Guest Password field option as Required.
Enter the Guest Password as indicated:
• If Password Restrictions is set in the General Details tab, the specified password must satisfy all the defined parameters. However, Guest Users are not subject to change password, invalid login attempts, or logging in from multiple workstations.
• If no Password Restrictions is set, ensure that the specified password is alphanumeric without any extra spaces.
The Optimization details such as Hints, Scripts, and Using ROWID instead of Primary Keys can
be specified to optimize Merge statements. The defined configurations are also fetched as
Query Optimization Settings while defining Rule definition properties.
The following table describes the fields in the Optimization tab.
Field Description
Hint used for MERGE statement Specify the SQL Hint that can be used to optimize the Merge Query.
For example, “/*+ ALL_ROWS */”
In a Rule execution, a Merge Query formed using a definition-level Merge Hint takes precedence over the Global Merge Hint parameter defined here. If the definition-level Merge Hint is empty/null, the Global Merge Hint (if defined here) is included in the query.
Hint used for SELECT statement Specify the SQL Hint that can be used to optimize the Merge Query by selecting the specified query.
For example, “SELECT /*+ IS_PARALLEL */”
In a Rule execution, a Merge Query formed using a definition-level Select Hint takes precedence over the Global Select Hint parameter defined here. If the definition-level Select Hint is empty/null, the Global Select Hint (if defined here) is included in the query.
Use ROWID in ON clause of MERGE statement You can select the ROWID checkbox to create a Merge Statement based on the specified ROWID instead of Primary Keys. (A sketch of such a statement follows this table.)
In a Rule execution, ROWID is considered while creating the Merge Statement if the Use ROWID checkbox is selected in either the Global Parameters defined here or the Rule definition properties.
If the Use ROWID checkbox is not selected in either the Global Parameters defined here or the Rule definition properties, then the flag is set to “N” and Primary Keys are considered while creating Merge Statements.
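To show where these settings land, the sketch below is a hypothetical statement of the kind a Rule execution could generate when the example hints above are defined globally and the ROWID option is selected. The table and column names are invented for illustration; the actual statement is generated by the Rules framework.
-- Hypothetical generated statement using the global hints and the ROWID option.
MERGE /*+ ALL_ROWS */ INTO fct_account_summary t      -- Hint used for MERGE statement
USING (
    SELECT /*+ IS_PARALLEL */                          -- Hint used for SELECT statement
           ROWID AS row_id,
           n_balance * 1.05 AS n_new_balance
    FROM   fct_account_summary
    WHERE  n_run_skey = 1
) s
ON (t.ROWID = s.row_id)                                 -- ROWID used instead of primary keys
WHEN MATCHED THEN UPDATE SET t.n_balance = s.n_new_balance;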
Field Description
Limit on number of mappings displayed Specify the number of mappings that are to be displayed in the Rule Definition window. A maximum of 9999 records can be displayed.
Application uses new Run Rule Framework Selecting this option will display only the new Run Rule Framework links in the Metadata Browser and Enterprise Modeling windows.
Enable audit log through Security Management System You can select this checkbox to enable the Infrastructure system to log all the usage and activity reports. A System Administrator can generate Audit Trail Reports in HTML format to monitor user activity at regular intervals.
Note: This is currently applicable for Run Rule Framework only.
Allow Correction on DI Source Select the checkbox to allow data correction on the data source. This enables the data correction to be executed along with data quality checks.
If the checkbox is not selected, data corrections will be done with T2T (LOAD DATA) executions, that is, while loading the data to the target table.
By default, the checkbox is selected.
11.1.12 Application
Once an application pack is installed, you can use only the Production or Sandbox information domain created during the installation process. Though there is an option to create a new Information Domain, there is no menu to work with the frameworks on the newly created information domain. Such a newly created information domain acts only as a Sandbox Infodom.
The Create New Application feature allows you (System Administrator) to create a new Application
other than the standard OFSAA Applications and associate the standard/default platform framework
menu with it, thereby enabling the new application for usage. The standard platform framework menu
is seeded and rendered.
Click from the header to display the Administration tools in Tiles menu. Click Create New
Application from the Tiles menu to view the Create New Application window, or click button to
access the Navigation List, and click Create New Application to view the Create New Application
window.
After you create an Application, a new Role is created as <APP_CODE>ACC. This role needs to be
mapped to the user group and the users mapped to that user group will get the new Application listed
in the Tiles menu that appears on clicking from the header. Only Enabled applications are listed in
this menu.
The Create New Application window displays the existing Applications with the metadata details such
as Application ID, Application Name, Application Pack Name, Information Domain, and Enabled status.
You can make use of Search and Filter option to search for specific Application based on ID, Name,
Application Pack Name, Information Domain, and Enabled status.
1. Click from the header to display the Administration tools in Tiles menu. Click Create New
Application from the Tiles menu to view the Create New Application window, or click
button to access the Navigation List, and click Create New Application to view the Create New
Application window.
2. Click from the Applications toolbar. The Create New Application window is displayed.
Table 151: Fields in the Create New Application (add) window and Descriptions
Field Description
Application Pack Name This field is automatically populated after you enter the Application ID. The Application Pack name will be <Application ID>PACK.
Information Domain Select the Information Domain which you want to map to the Application from the drop-down list. The information domains to which your user group is mapped are displayed in the list.
4. Click Save.
The new Application gets created and it appears in the Summary window. A new User Role is
created as <APP_CODE>ACC. You need to map this User Role to the required User Groups from
the User Group Role Map window. Once the System Authorizer authorizes the User Group- Role
Map, the new Application will be listed in the Select Applications drop-down from the
Applications tab for the User Group.
1. Click from the header to display the Administration tools in Tiles menu. Click Create New
Application from the Tiles menu to view the Create New Application window, or click
button to access the Navigation List, and click Create New Application to view the Create New
Application window.
2. Click from the Applications toolbar. The Create New Application (Edit) window is displayed.
3. Modify the required fields. You can edit the Application Name and Application Description.
4. Click Save.
• System Administrator
• Audit Trail Report
• User Activity Report
• User Profile Report
• Enable User
Table 152: Fields in the User Definition window and their Descriptions
Field Description
User ID Enter a unique user ID. Ensure that the User ID does not contain any special characters or spaces except “.”, “@”, “-”, and “_”.
User Name Enter the user name. The user name specified here will be displayed on the Infrastructure splash window. Ensure that the User Name does not contain any special characters except “–”, “’”, and “.”.
Employee Code Enter the employee code. Ensure that the Employee Code does not contain any special characters or spaces except “.”, “@”, “-”, and “_”.
If the employee code is not provided, the user ID will be taken as the employee code.
Address Enter the contact address of the user. It can be the physical location from where the user is accessing the system. Ensure that the Contact Address does not contain any special characters except ".", "#", "-", ",".
Date Of Birth Specify the date of birth. You can use the popup calendar to enter the date.
Designation Enter the user designation. Ensure that the Designation does not contain any special characters except “_”, “:”, and "-".
Profile Name Select the profile name by clicking on the drop-down list.
Start Date By default, the Start Date is today’s date and you cannot edit this field.
End Date By default, the End Date value is 12/31/2050 and you cannot edit this field.
Password Enter the default password for the user for the initial login. The user needs to change the default password during the first login.
A user is denied access in case the user has forgotten the password or enters the wrong password for the specified number of attempts (as defined in the Configuration window). To enable access, enter a new password here.
Database Authentication Principal Select the Database Principal name from the drop-down list. The list displays the Principal names for the HDFS Kerberos connection.
Click to create a new Database Principal by entering the Principal name and password in the DbAuth Principal and DbAuth String fields respectively.
Notification Time (Optional) Specify the notification start and end time within which the user can be notified with alerts.
Enable Proxy Select the checkbox if you want to enable a proxy user for the database connection.
Proxy User name Enter the Proxy user name for the OFSAAI user, which will be used for the database connection.
You can view individual user details at any given point. To view the existing function details in the User
Maintenance window:
1. Select the checkbox adjacent to the User ID.
NOTE You cannot edit the User ID. You can view the modifications
once the changes are authorized. Also a new password must be
provided during the user details modification.
You can remove the user definition(s) which are created by you and which are no longer required in
the system, by deleting from the User Maintenance window.
1. Select the checkbox adjacent to the user ID whose details are to be removed.
NOTE User can access the application until the delete request is
authorized.
This option allows you to input additional user attributes that are configured for a user. Ensure that
the required user attributes are present in the CSSMS_ATTRIB_MAST table. For more information
about how to add additional user attributes, see Setting up User Attribute Master section.
To add attributes to a user in the User Maintenance window:
1. Select the checkbox adjacent to the User ID for whom you wish to add additional attributes.
2. Click button in the User Maintenance tool bar. The User Attribute window is displayed.
The user attributes present in the CSSMS_ATTRIB_MAST table are displayed in this window.
3. Enter appropriate information or select the required value from the drop-down list, for the
displayed user attributes.
4. Click Save to upload the changes.
TYPE – Enter Type as 1 if you want to provide a list of values from which the user has to select the attribute value; in the ALLOWED_VALUES column, enter the required values for the attribute. Enter Type as 0 if the attribute value has to be entered in a text field.
4. Save the file.
5. Upload the modified CSSMS_ATTRIB_MAST table. For more information on how to upload a
table to Config Schema, see Config Schema Upload section. Note that you need to select
CSSMS_ATTRIB_MAST from the Select the table drop-down list and Upload Type as
Complete.
An appropriate message based on the success or failure status is displayed.
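For illustration, the following sketch shows how a drop-down style attribute row in the CSSMS_ATTRIB_MAST table could be expressed as SQL. The attribute name, its description, and the column names other than TYPE and ALLOWED_VALUES are assumptions made for this example; the supported way to apply the change remains downloading the table, editing it, and uploading it back through Config Schema Upload.

    -- Illustrative sketch only: the attribute values and the column names other
    -- than TYPE and ALLOWED_VALUES are assumed; verify them against CSSMS_ATTRIB_MAST.
    INSERT INTO CSSMS_ATTRIB_MAST (ATTRIBUTE_ID, ATTRIBUTE_DESC, TYPE, ALLOWED_VALUES)
    VALUES ('EMP_LOCATION', 'Employee Location', 1, 'NEW YORK,LONDON,MUMBAI');
    -- TYPE = 0 would instead render the attribute as a free-text field,
    -- in which case ALLOWED_VALUES is left empty.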
Table 154: Fields in the User Group Maintenance pane and their Descriptions
User Group ID: Specify a unique ID for the user group. Ensure that there are no special characters and extra spaces in the ID entered.
Precedence: Enter the Precedence value. You can click the button to look up the existing precedence values applied to the various user groups.
NOTE The lower the value in the Precedence column, the higher is the precedence. A user may be mapped to multiple user groups, and hence the precedence value is required if the Group Based Hierarchy Security setting is selected in the Configuration window.
3. Click Save to upload the user group details. The new User Group details need to be authorized
before associating users to the user group created. Before user group authorization, you need
to map an information domain and role to the user group.
You can view individual user group details at any given point. To view the existing user group details
in the User Group Maintenance window:
1. Select the checkbox adjacent to the User Group ID.
To update the existing user group details in the User Group Maintenance window:
1. Select the user group whose details are to be updated by clicking on the checkbox adjacent to
the User Group ID.
2. Click button in the User Group tool bar. Edit button is disabled if you have selected multiple
groups.
3. Edit the required User Group details except for User Group ID which is not editable. For more
information see Add User Group.
4. Click Save to upload changes.
You can remove user group definitions that are created by you, that do not have any mapped users, and that are no longer required, by deleting them from the User Group Maintenance window.
1. Select the checkbox adjacent to the user group ID(s) whose details are to be removed.
User - User Group Map window displays details such as User ID, Name, and the corresponding
Mapped Groups. You can view and modify the existing mappings within the User - User Group Map
window.
You can access User - User Group Map window by expanding User Administrator section within the
tree structure of Navigation List to the left. You can also search for specific users based on User ID and
Name.
This option allows you to view the user groups mapped to a user.
To view the mapped User Groups of a user
• From the User-User Group Map window, select the checkbox adjacent to the User ID. The list of
user group(s) to which the selected user has been mapped is displayed under Mapped Groups
grid.
To map a user group, select the User Group and click . You can press Ctrl key for
multiple selections.
Profile Maintenance facilitates you to create profiles, specify the time zones, specify the working days of the week, and map the holiday schedule. The Profile Maintenance window displays the existing profiles
with details such as the Profile Code, Profile Name, Time Zone, Workdays of Week, Holiday Time
Zone, and mapped Holidays. In the Profile Maintenance window you can add, view, edit, and delete
user profile definitions.
You can access Profile Maintenance by expanding User Administrator section within the tree
structure of Navigation List to the left. You can also search for specific profile or view the list of
existing profiles within the system.
2. The Profile Definition (add) window is displayed. Enter the details as tabulated.
Table 155: Fields in the Profile Definition (add) and their Descriptions
Profile Code: Enter a unique profile code based on the functions that the user executes. For example, specify AUTH if you are creating an authorizer profile.
Profile Name: Enter a unique profile name. Ensure that the Profile Name does not contain any special characters except ".", "(", ")", "_", "-".
Time Zone: Select the Start and End time zone from the drop-down list. Time zones are hourly based and indicate the time at which the user can access the system.
Holiday Time Zone: Select the Holiday Start and End time zone from the drop-down list. Time zones are hourly based and indicate the time at which the user can access the system on holidays.
1. Click button in the New Holidays grid. Holiday Mapping window is displayed.
The Holiday Mapping window displays the holidays that are added through the Holiday
Maintenance section.
2. To map a holiday, you can do the following:
To map holiday to the user profile, select from the list and click .
To remove holiday mapping to user profile, select from the list and click .
2. Select the user group and the folder. The Mapped/Unmapped Roles corresponding to the
selected User Group which requires authorization are displayed in the respective grids.
3. Select the checkbox adjacent to the mapped or unmapped roles and click the (authorize) button to authorize them.
You can search for specific user group based on User Group ID, Group Name, and Description.
To map a user group to a domain, do the following:
1. Select the checkbox adjacent to the required User Group ID. The User Group Domain Map
window is refreshed to display the existing mapped domains.
2. Click button in the Mapped Domains section tool bar. The User Group Domain Map window
is displayed.
To map Domains to a User Group, select the Domain from the Members list and click .
You can press Ctrl key for multiple selections.
To remove mapping for a user group, select the Domain from Select Members list and click
.
Table 156: Group Code and Role Code used for the User Group Role map
Group Code  Role Code
ADMIN  SYSADM
AUTH  SYSATH
CWSADM  CWSADMIN
You can access User Group Role Map window by expanding User Administrator section within the
tree structure of Navigation List to the left.
The User Group Role Map window displays a list of available user groups in alphabetical order with the
User Group ID and Description. On selecting a user group, the list of available mapped roles are
displayed.
You can also search for specific user group or view the list of existing user groups within the system.
To map a Role to User Group, do the following:
1. Select the checkbox adjacent to the required User Group ID. The User Group Role Map window
is refreshed to display the existing mapped roles.
2. Click button in the Mapped Roles section tool bar. The User Group Role Map window is
displayed.
3. In the User Group Role Map window, you can search for a Role using the Search field and edit
the mapping.
To map Role to a User Group, select the Role from the Members list and click . You can
press Ctrl key for multiple selections.
To remove mapping for a user group, select the Role from Select Members list and click .
2. Select the user group from the User Group Folder Role Map grid.
All shared folders are displayed in the Infodom-Folder Map grid.
3. Select the shared folder to which you want to map roles and click .
4. Select the required roles and click or click to map all the roles. To remove mapping of a
role, select the role and click . To remove all mapped roles, click .
5. Click Ok.
User Group-Folder-Role mapping/unmapping should be authorized by the System Authorizer.
If you have enabled auto authorization, then the mapping/unmapping gets authorized
automatically. To enable auto authorization, see the SMS Auto Authorization section.
2. Enter the function details as tabulated. You can also see pre-defined Function Codes for
reference.
The following table describes the fields in the Function Definition (add) window.
Table 157: Fields in the Function Definition (add) window and their Descriptions
Function Code: Enter a unique function code. Ensure that there are no special characters and extra spaces in the code entered. For example, DATADD to add a dataset.
Function Name: Enter a unique name for the function. Ensure that the Function Name does not contain any special characters except "(", ")", "_", "-", ".".
Function Description: Enter the function description. Ensure that the Function Description does not contain any special characters except "(", ")", "_", "-", ".".
You can view individual function details at any given point. To view the existing user details in the
Function Maintenance window:
1. Select the checkbox adjacent to the Function Code.
To update the existing function details (other than system generated functions) in the Function
Maintenance window:
1. Select the checkbox adjacent to the required Function Code.
2. Click button in the Function Maintenance tool bar. The Edit Function Details window is
displayed.
3. Update the required information. For more details, see Create Function.
You can remove only those function(s) created by you and which are no longer required in the system,
by deleting from the Function Maintenance window.
1. Select the checkbox adjacent to the Function Code whose details are to be removed.
2. Enter the role details as tabulated. You can also see pre-defined Codes for reference.
The following table describes the fields in the Role Definition (add) window
Table 158: Fields in the Role Definition (add) window and their Descriptions
Role Code: Enter a unique role code. Ensure that there are no special characters and extra spaces in the code entered. For example, ACTASR to create Action Assessor.
Role Name: Enter a unique name for the role. Ensure that the Role Name does not contain any special characters except space.
Role Description: Enter the role description. Ensure that the Role Description does not contain any special characters except space.
3. Click Save to upload the role details. The User Info grid at the bottom of the Role Maintenance window displays metadata information about the role created.
You can view individual role details at any given point. To view the existing role details in the Role
Maintenance window:
1. Select the checkbox adjacent to the Role Code.
2. Click button in the Role Maintenance tool bar. The Edit Role Details window is displayed.
3. Update the required information. For more details, see Create Role.
You can remove only those role(s) which are created by you, which do not have any users mapped, and which are no longer required in the system, by deleting them from the Role Maintenance window.
1. Select the checkbox adjacent to the Role Code whose details are to be removed.
You can access Function – Role Map by expanding System Administrator section within the tree
structure of Navigation List to the left. The Function – Role Map window displays a list of available Role
Codes in alphabetical order with the Role Name. On selecting a particular Role Code, the Mapped
Functions are listed in the Mapped Functions grid of Function – Role Map window.
You can also make use of Search and Pagination options to search for a specific role or view the list of
existing roles within the system.
To view the default Function – Role mapping defined within the Infrastructure application, see
Function Role Mapping.
To map a role to a function in the Function – Role Map window, do the following:
1. Select the checkbox adjacent to the required Role Code. The Function – Role Map window is
refreshed to display the existing mapped functions.
2. Click button in the Mapped Functions section tool bar. The Function Role Mapping window
is displayed.
3. In the Function Role Mapping window, you can search for a function using the Search field and
edit the mapping.
To map a function to a role, select the function from the Members list and click . You can
press Ctrl key for multiple selections.
Table 159: Fields in the Segment Maintenance (add) window and their Descriptions
Domain: Select the required domain for which you are creating a segment from the drop-down list.
Segment Code: Enter a unique segment code. Ensure that the segment code does not exceed 10 characters and that there are no special characters except underscore or extra spaces.
Segment Name: Enter a unique name for the segment. Ensure that there are no special characters in the name entered.
Segment Description: Enter the segment description. Ensure that there are no special characters in the description entered except spaces, "(", ")", "_", "-", and ".".
Segment/Folder Type: Select the type of the segment/folder from the drop-down list. The options are Public, Private, and Shared.
Owner Code: Select the owner code from the drop-down list.
You can view individual segment information at any given point. To view the existing segment details
in the Segment Maintenance window:
1. Select the checkbox adjacent to the required segment.
2. Click button in the Segment Maintenance tool bar. The Edit Segment Details window is
displayed.
3. Update the Segment Description, Segment/Folder Type, and Owner Code. The other fields are view-only and cannot be edited. For more details, see Create Segment.
4. Click Save to upload the changes.
You can remove only those segment(s) which are created by you, which do not have any users mapped, and which are no longer required in the system, by deleting them from the Segment Maintenance window.
1. Select the checkbox adjacent to the segment whose details are to be removed.
Holiday Maintenance facilitates you to create and maintain a schedule of holidays or non-working
days within the Infrastructure system. On a holiday, you can provide access to the required users and
restrict all others from accessing the system from the User Maintenance window.
You can access Holiday Maintenance by expanding System Administrator section within the tree
structure of Navigation List to the left. The Holiday Maintenance window displays a list of holidays in
ascending order. In the Holiday Maintenance window you can create and delete holidays.
You can remove a holiday entry by deleting from the Holiday Maintenance window.
1. Select the checkbox adjacent to the holiday which has to be removed.
Restricted Passwords facilitates you to add and store a list of passwords using which users are not
permitted to access the Infrastructure system.
You can access Restricted Passwords by expanding System Administrator section within the tree
structure of Navigation List to the left. The Restricted Passwords window displays a list of restricted
passwords and allows you to add and delete passwords from the list.
You can also make use of Search and Pagination options to search for a specific password or view the
list of existing passwords within the system.
2. Enter the password in the New – Password field. Ensure that the password is alphanumeric, without any spaces, and that the length is between 6 and 20 characters.
3. Click Save to upload the new password.
You can de-restrict a password by deleting from the Restrict Passwords window.
1. Select the checkbox adjacent to the password which has to be removed.
Table 160: Report Types in the User Activity Report window and their Descriptions
Currently logged in users: This window displays the list of current users accessing the Infrastructure system, with details such as User ID, User Name, and Last Login Date.
Disabled Users: This window displays the list of users who are authorized but are currently disabled from accessing the Infrastructure system, with details such as User ID, User Name, and Disabled On date.
Deleted Users: This window displays the list of users who are removed from the system with the status as authorized to access the Infrastructure system. The list also displays details such as User ID, User Name, Last Login, Authorization Status, and the Deleted On date.
Unauthorized Users: This window displays the User ID and User Name of all the users who are not authorized.
Idle Users: This window displays the list of users who have not logged in to the Infrastructure system for a certain period, with details such as User ID and User Name. The default number of idle days accounted is 10, and the value can be modified by entering the required number of days in the Idle Users (No of Days) field located in the Search and Filter grid.
Role Master Report: This window displays all OFSAA Roles and the corresponding Functions/rights mapped to the role. That is, if a Function/Right is assigned to a particular role, the corresponding checkbox is in the selected state.
User ID Population Report: To generate this report, enter the User ID of the user whose report you want to generate and click . The report displays various user details such as User ID, User Name, Employee Code, Profiles, Status of the Profiles, Creation Date, Last Password Changed Date, Last Login Date, Maker ID, Maker Date, Checker ID, Checker Date, and Profile End Date.
UAM Admin Activity Report: To generate this report, enter the User ID of the user whose report you want to generate and the duration, and then click . The report displays the new and old values for User ID, User Name, Employee Code, Profile Name, Activity, Maker ID, Checker ID, Maker Date, and Checker Date. It also displays the list of Admin activities performed on the user within the specified duration, such as User Details modified, User Access rights modified, User Mappings modified, and so on.
For User Activity Reports such as Currently logged in users, Disabled users, Deleted users,
Unauthorized users, and Idle users, you can:
• Click Save to File to generate an HTML format of the report.
The File Download window is displayed.
Click Open in the File Download window to view the report in your browser.
Click Save in the File Download window to save a local copy of the report.
For User Activity Reports such as Role Master Report, User ID Population Report and UAM Admin
Activity Report, you can:
Select the user names from the Members list and click . You can press Ctrl key for
multiple selections.
To remove a selected user, select the user from Select Members pane and click .
To remove all the selected users from Select Members pane, click .
3. Click OK to save the mappings and return to User Profile Report window.
4. Select Generate Reports in the User Profile Report window and view the report.
NOTE You can select File as the print option to generate an HTML report. The access link to the report is displayed at the bottom of the User Profile Report window.
You can also select Reset to refresh the selections in the User Profile Report window.
You can also use search to filter the list and find the required ID. Click Search and enter the
keyword in Search For field. Click OK, the list is sorted based on the specified keyword.
2. Enable access to the selected user on any or both the conditions:
Select Enable Login checkbox, if the user access is denied due to invalid login attempts.
Select Clear Station checkbox, if the user access is denied due to an abnormal exit from the
system.
11.3 References
This section of the document consists of information related to intermediate actions that need to be performed while completing a task. The procedures are common to all the sections and are referenced wherever required. You can see the following sections based on your need.
On selecting this checkbox in the Others tab of the System Configuration > Configuration window, an insert query is generated and executed just before the merge statement of the rule is executed. This in turn lists the number of records processed by all mappings and also stores information about Run ID, Rule ID, Task ID, Run Skey, MIS Date, number of records fetched by each mapping, order of evaluation of each mapping, and so on, in the configuration table EXE_STAT.
Typically, the insert query lists the number of records processed by each condition in the rule and is
done just before the task gets executed and not after the batch execution is completed (since the state
of source data might change). This insert query works on all types of query formation including
Computation Rules with and without Aggregation, Classification Rules, Rules with multiple targets,
Rules with default nodes, Rules with Parameters in BPs, and Rules with exclusions.
11.3.3.1 Scenario
Consider the following scenario where, a typical rule would contain a series of Hierarchy Nodes
(BI/Non BI) as Source and one or more BPs or BI Hierarchy Leaf Nodes in the Target.
Rule 1 consists of the following:
SOURCE TARGET
Condition 1 Target 1
Condition 2 Target 1
Condition 3 Target 1
Condition 4 Target 2
The insert query execution populates execution statistics based on the following:
• Each rule has processed at least one record.
• Each target in the rule has processed at least one record through Condition 1 / Condition 2 /
Condition 3 and Condition 4.
• Each source in the rule has processed at least one record through Condition 1 / Condition 2 /
Condition 3 and Condition 4.
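To verify what was captured for such a rule, you could query the EXE_STAT table in the Configuration Schema after the task completes. This is only a sketch: the guide lists the attributes that are stored but not their exact column names, so the names below are assumptions that must be checked against your EXE_STAT table definition.

    -- Sketch only: column names are assumed; confirm them against the EXE_STAT
    -- table in the Configuration Schema before running.
    SELECT run_id,
           rule_id,
           task_id,
           mis_date,
           records_fetched,
           evaluation_order
      FROM exe_stat
     WHERE rule_id = 'RULE_1'        -- hypothetical rule identifier
     ORDER BY evaluation_order;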
12 Reports
Reports for user status, user activity, audit trail, and so on are available to users and support export of the generated data in PDF and MS Excel formats.
The following user reports are available in the application:
• User Status Report
• User Attribute Report
• User Admin Activity Report
• User Access Report
• Audit Trail Report
You can access Audit Trail Report from Reports on the header. Click from the header to
display the Reports in Tiles menu.
2. Click any of the reports to display the respective Search and Filter windows.
NOTE You can access reports from the Tiles menu, or by clicking the
button to view the Navigation List.
Table 162: Fields in the User Status Report Window and their Descriptions
User Name: Click the User Name field to display a drop-down list of User Names. Select All to display the report for all users in the system, or select a specific User Name to display the report for the selected User Name.
Note: You can select either the User ID or the User Name field. You cannot use a combination of both fields to generate the report.
Disabled Users: Select the checkbox to filter the report for users disabled in the system.
Deleted Users: Select the checkbox to filter for users deleted in the system.
Currently Logged in Users: Select the checkbox to filter for users who are currently logged in to the system.
Note: You can use a combination of the Disabled Users and Deleted Users checkboxes to filter your reports. Selecting Disabled Users or Deleted Users disables the Currently Logged in Users checkbox. Conversely, selecting Currently Logged in Users disables the Disabled Users and Deleted Users checkboxes.
3. Click Search to generate the report and display the result in the section following the Search
and Filter Section, or click Reset to clear all values from the Search and Filter Section and
enter new criteria to search.
The following table describes the columns in the report:
Table 163: Fields in the Search and Filter Pane and their Descriptions
Last Successful Login: Displays the date and time of the last successful login by the user.
Last Failed Login: Displays the date and time of the last failed login by the user.
Authorized: Displays whether the user has been authorized in the system or not. The values are YES and NO. Note: The authorization of users is done by Administrators who have User Authorization privileges.
Idle Days: Displays the number of days that the user has remained idle in the system.
Note: You must apply the 33150367 One-off Patch from My Oracle Support to view the following additional fields: Start Date, End Date, Login Holidays, SMS Auth Only, Created Date, Last Modified Date, Last Password Change Date, Last Enabled Date, Last Disabled Date, and Deleted Date.
Start Date: Displays the configured Start Date of the period for the user to be active in the system.
End Date: Displays the configured End Date of the period for the user to be active in the system.
Created Date: Displays the date on which the user was created in the system.
Last Modified Date: Displays the date on which the details of the user were last updated in the system.
Last Password Change Date: Displays the date when the password was last changed for the user.
Last Enabled Date: Displays the date when the user was last enabled in the system.
Last Disabled Date: Displays the date when the user was last disabled in the system.
Deleted Date: Displays the date when the user was deleted from the system.
Additional generic features available on the User Status Report window are as follows:
• To export the report, click the Export button and select either PDF or Excel.
• Select the Up or Down icons from the header for a required column to sort the records in ascending or descending order. For more information, see Resizing and Sorting Reports.
• Enter a number in the Go to Page field in the footer to navigate to a specific record page, or use the First, Previous, Next, and Last buttons to navigate between the list of records displayed across multiple pages. However, after you use the navigation features in the footer, the sorting feature in the header does not apply.
• The default value for this field is 5 records per page.
Table 164: Fields in the User Attribute window and their Descriptions
User ID: Click the User ID field to display a drop-down list of User IDs. Select All to display the report for all users in the system, or select a specific User ID to display the report for the selected User ID.
User Name: Click the User Name field to display a drop-down list of User Names. Select All to display the report for all users in the system, or select a specific User Name to display the report for the selected User Name.
Note: You can select either User ID or User Name. You cannot use a combination of both fields to generate the report.
3. Click Search to generate the report and display the result in the section following the Search
and Filter section, or click Reset to clear all values from the Search and Filter section and enter
new criteria to search.
The following table describes the columns in the report.
Field Description
4. To export the report, click the button and select either PDF, or Excel.
Figure 277: Fields in the User Admin Activity Report window and their Descriptions
User ID: Click the User ID field to display a drop-down list of User IDs. Select All to display the report for all users in the system, or select a specific User ID to display the report for the selected User ID.
User Name: Click the User Name field to display a drop-down list of User Names. Select All to display the report for all users in the system, or select a specific User Name to display the report for the selected User Name.
Note: You can select either User ID or User Name. You cannot use a combination of both fields to generate the report.
From Date: Select the start date for the report from the Date editor.
To Date: Select the end date for the report from the Date editor.
3. Click Search to generate the report and display the result in the section following the Search
and Filter section, or click Reset to clear all values from the Search and Filter section and enter
new criteria to search.
The following table describes the columns in the report:
Profile Name: Displays the name of the profile for the user.
Activity: Displays the type of activity performed on the user by the administrator.
Maker ID: Displays the User ID of the administrator performing the activity for the user.
Checker ID: Displays the User ID of the administrator performing the checker activity.
Maker Date: Displays the date and time of performing the activity by the maker.
4. To export the report, click the button and select either PDF, or Excel.
Figure 279: Fields in the User Access Reports Window and their Descriptions
User Name: Click the User Name field to display a drop-down list of User Names. Select All to display the report for all users in the system, or select a specific User Name to display the report for the selected User Name.
Note: You can select either User ID or User Name. You cannot use a combination of both fields to generate the report.
Note: You must apply the 33150367 One-off Patch from My Oracle Support to view the following checkboxes in the search criteria: Group, Role, and Function.
Group: Select the checkbox to apply the Group Filter to the report. All the Groups mapped to the selected user are displayed.
Role: Select the checkbox to apply the Role Filter to the report. All the Groups and Roles mapped to the selected user are displayed.
Function: Select the checkbox to apply the Function Filter to the report. All the Groups, Roles, and Functions mapped to the selected user are displayed.
Note: You can select the Group, Role, and Function checkboxes to filter your reports. Selecting any one of the checkboxes disables selection for the remaining checkboxes. For example, selecting Group disables the Role and Function checkboxes.
3. Click Search to generate the report and display the result in the section following the Search
and Filter Section or click Reset to clear all values from the Search and Filter Section and enter
new criteria to search.
The following table describes the columns in the report:
Field Description
Group Name Displays the Group Name that the User is mapped to.
Additional generic features available on the User Access Report window are as follows:
• To export the report, click the Export button and select either PDF or Excel.
• Select the Up or Down icons from the header for a required column to sort the records in ascending or descending order. For more information, see Resizing and Sorting Reports.
• Enter a number in the Go to Page field or select a page from the displayed numbers in the footer to navigate to a specific record page, or use the First, Previous, Next, and Last buttons to navigate between the list of records displayed across multiple pages. However, after you use the navigation features in the footer, the sorting feature in the header does not apply.
• The default value for this field is 5 records per page.
Table 167: Fields in the Audit Trail window and their Descriptions
User Name: Click the User Name field to display a drop-down list of User Names. Select All to display the report for all users in the system, or select a specific User Name to display the report for the selected User Name.
From Date: Select the start date for the report from the Date editor.
To Date: Select the end date for the report from the Date editor.
Action Detail: Enter a few characters to search for a user name and select the required name.
3. Click Search to generate the report and display the result in the section following the Search
and Filter section, or click Reset to clear all values from the Search and Filter section and enter
new criteria to search.
The following table describes the columns in the report:
Status: Displays the status of the action. The values are successful or failure.
Operation Time: Displays the date and time for the action performed.
Workstation: Displays the IP address of the machine from which the action was performed.
4. To export the report, click the button and select either PDF, or Excel.
4. Select and click Resize to view the options for Resize. Select Resize Width.
5. Similarly, to Sort Columns, right-click to view the Resize and Sort Column option.
6. Select and click Sort Columns to view the options: Sort Column Ascending and Sort Column
Descending. Select the required sorting system.
7. You can also sort the columns in ascending or descending order by clicking on the column
headers.
13 Object Administration
Object Administration is an integral part of the Infrastructure system and facilitates system
administrators to define the security framework with the capacity to restrict access to the data and
metadata in the warehouse, based on a flexible, fine-grained access control mechanism. These
activities are mainly done at the initial stage and then on need basis.
The document deals with the information related to the workflow of Infrastructure Administration
process with related procedures to assist, configure, and manage the administrative tasks effectively.
You (System Administrator/System Authorizer) need to have SYSATH, SYSADM, and METAAUTH
function roles mapped to access the Object Administration framework within the Infrastructure
system.
Object Administration consists of the following sections. Click the links to view the sections in detail.
• Object Security
• Object Migration
• Translation Tools
• Utilities
Alternatively, the Information Domain drop-down list is also available at the top of the Navigation List.
Click on the Hamburger icon to access the Navigation List. The following illustration shows the
Information Domain drop-down on the Navigation List:
• Objects contained in a Public folder will be displayed in Summary window and in object selection
lists to all users, irrespective of user group mapping. No mapping is required.
• Objects contained in a Shared folder will be displayed in Summary window and in object
selection lists, to users belonging to the user groups, which are mapped to the corresponding
folder. The mapping is done from the User Group Folder Role Map window.
• Objects contained in a Private folder will be displayed only to the associated owner (an
individual user).
Consumption within Higher Objects
• A user can consume objects associated to Public Folders in another higher object provided the
Read Only role is mapped to the user group in that folder. This mapping is done through User
Group Role Map window. For objects in shared folders also, the Read Only role should be
mapped. This mapping is done through the User Group Folder Role Map window.
For example, consider a Run definition in which a Classification Rule is used. Suppose the
classification rule, say X is created in a Public folder called Y and the user belongs to user group
UG. Then, for the user to use the X rule in the Run definition, the user group UG should be mapped to the "Rule Read Only" role. But if the X rule is created in a Shared folder Z, the user group UG should be mapped to the folder Z and to the "Rule Read Only" role.
Folder Selector Behavior
The folders displayed in the Folder Selector window launched from the Object definition window are:
• All Public and Shared folders which are mapped to the user group and on which the user group
has Write role. Mappings should be done for Public folders through the User Group Role Map
window and User Group Domain Map window. Mappings should be done for Shared folders
through User Group Folder Role Map window.
• All Private folders for which you are the owner.
Advanced
Write
Authorize
Advanced
Phantom
Summary
View
Trace
Compare
Publish
Edit
Copy
Remove
MAKE_LATEST
Export
Archive
Restore
Advanced
For Administrative type of roles, additional roles are seeded from Security Management Systems
(SMS) module.
APPROVE: Privilege to authorize an object by approving it after any action has been performed.
REJECT: Privilege to authorize an object by rejecting it after any action has been performed.
LATEST: Privilege to make any authorized version of the definition the latest version.
You (System Administrator) should have SYSADM function role mapped to your user role to access
Metadata Segment Mapping window. By default this window displays the Information Domain Name
to which you are connected along with the metadata details of Measure.
To map a metadata, select the metadata from the Available Metadata pane and click .
The metadata is added to the Selected Metadata pane. You can press Ctrl key for multiple
selections.
To unmap a metadata, select the metadata from the Selected Metadata pane and click .
Select the Information Domain from the drop-down list, click Object Administration, and select Batch Execution Rights to open the User Group-Batch Execution Map window.
The User Group-Batch Execution Map window displays the list of defined Batches for the selected
Information Domain along with the other details such as Batch Name and Batch Description. You can
filter the list of defined batches that are created in Batch Maintenance, Enterprise Modeling, or in
Rules Run Framework. By default the list displays the batches defined in the Batch Maintenance
window.
To map User Group to the required Batch in the User Group-Batch Execution Map window:
1. Select the Information Domain from the drop-down list. By default, the window displays the
Information Domain to which you are connected.
2. Select the User Group to which you want to map the Batches, from the drop-down list.
The list consists of all the User Groups mapped to the selected Information Domain. The
window is refreshed and the list of defined batches is populated.
You can also search for a specific user group by clicking Search and specifying the User Group
Name in the Search for Group window. Click OK.
3. Select Batch Maintenance (default), Enterprise Modeling, or Run Rules Framework and filter
the list of batches. You can also select ALL to list all the defined batches for the selected
Information Domain.
4. Map User Group to Batch(s) by doing the following:
To map batch(s) to the selected User Group, select Batch Map checkbox.
To map all the batches to the selected User Group, click CheckAll.
To restore the previous state of objects, you can export the objects as snapshots; a snapshot captures the state of the objects at the time of its creation and can be used for your migration. For more information, see Generating a Snapshot.
13.5.1.1 Prerequisites
• Folders (segments) and user groups that are designated for the import should be present in
the target.
• The source and target environments should have the same installed languages. OFSAA supports 18 languages in total. During export operations for a particular language, only the specified objects in that language are exported.
• OFSAA users should have access to folders (Infodom segment mapping) in target as well as
source. This access is required to get the objects in its state as available in the source, to
perform actions such as view and edit.
• Tables accessible to users in source should also exist in target.
For example, if you want to migrate a Data Element Filter based on "Table A" and "Table B" in
the source, those two tables should exist in the target.
NOTE Before you migrate F2T, migrate the respective Data Source
files to the Target Environment or create them in the Target
Environment.
NOTE If you have used the Master Table approach for loading Dimension data and set it up to generate surrogate keys for Members, this results in different IDs between the source and target, so it may cause errors if you have objects which depend on these IDs.
All objects that generate a new ID after migrating to a different information domain, and all components which are registered through the Component Registration window and used in the Run Rule Framework (RRF), must be manually entered in the AAI_OBJ_REF_UPDATE table in the Configuration Schema. The attributes present in the table are:
• V_OBJECT_TYPE: EPM Object Type.
• V_RRF_OBJECT_TYPE: RRF Object Type. The ID can be referred from the pr2_component_master table.
• V_ICC_OBJECT_TYPE: ICC Object Type. The ID can be referred from the component_master table.
• F_IS_FILTER: Indicates whether the object is to be migrated as a filter or not.
• N_BATCH_PARAMETER_ORDER: The order of the parameter in the task (if used in a batch).
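As an illustration, an entry for such an object could be added to the AAI_OBJ_REF_UPDATE table as shown below. The column names are the ones listed above; the values are placeholders only and must be replaced with the actual type codes looked up from the pr2_component_master and component_master tables.

    -- Sketch only: the values are placeholders. Look up the RRF object type in
    -- pr2_component_master and the ICC object type in component_master.
    INSERT INTO AAI_OBJ_REF_UPDATE
           (V_OBJECT_TYPE, V_RRF_OBJECT_TYPE, V_ICC_OBJECT_TYPE,
            F_IS_FILTER, N_BATCH_PARAMETER_ORDER)
    VALUES ('MY_EPM_OBJECT_TYPE', 'MY_RRF_TYPE', 'MY_ICC_TYPE', 'N', 1);
    COMMIT;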
The Object Migration Export Summary window displays the list of pre-defined Export Definitions with
their Outline Id and Dump Name. By clicking the Column header names, you can sort the column
names in ascending or descending order. You can add, view, edit, copy, export, delete, and generate
snapshot for the Export Definition. You can search for a specific Export Definition based on the
Outline Id or Dump Name.
An Export definition can be created based on one of the following approaches:
• Creating Outline Definition
• Creating Snapshot Definition
You can view individual Export definition details at any given point.
To view an existing Export definition, perform the following steps:
1. From the Object Migration Export Summary window, click the Menu Button and select View.
The Export Objects window is displayed. The Export Objects window displays the details of the
selected Export definition like Outline Id, Dump Name and the objects selected for exporting.
You can update the existing Export definition details except the Outline Id.
You can add more objects for exporting or removing the existing objects.
To modify the Export definition, perform the following steps:
1. From the Object Migration Export Summary window, click the Menu Button and Edit. The
Export Objects window is displayed.
2. Update the required details. For more information, see Creating Export Definition. You can also
exclude or include dependencies which are previously excluded to your export definition. For
more information, see Viewing and Excluding Dependencies in Outline Definition
3. Click Save and update the changes.
This option allows you to quickly create a new Export definition based on an existing Export definition.
You need to provide a new Outline Id and can modify other required details.
To copy an existing Export definition, perform the following steps:
1. From the Object Migration Export Summary window click the Menu Button and select Copy.
The Export Objects window is displayed.
2. Enter a unique Outline Id to identify the status of the migration process.
3. Update other details if required.
For more information, see Creating Export Definition.
4. Click Save.
1. From the Object Migration Export Summary window, select the Export definition that you want
to delete and click Delete.
A confirmation message is displayed.
2. Click Yes. The definition gets deleted.
Exporting Objects allows you to export a set of objects to migrate across Information Domains within
the same setup or across different setups.
To create an export outline definition, perform the following steps:
1. Click the Add button in the Object Migration Export Summary window.
The Outline Definition window is displayed.
The selected outlines are displayed in a hierarchy in the Outline pane. In this example, the Object Types selected are Aliases and Business Hierarchies.
6. Click Save.
The Export definition is available in the Object Migration Export Summary window.
7. Click the Menu Button and select Export to execute.
8. A confirmation message is displayed. Click Ok to trigger the export process.
The dump file will be created in /ftpshare/ObjectMigration/metadata/archive folder.
You can view the logs from /ftpshare/ObjectMigration/logs/ folder.
When creating an outline, you can specify its dependencies by using the dependency option. This
enables you to review the dependent objects with the outline.
To view the dependency, perform the following steps:
1. Click the Add button in the Object Migration Export Summary window.
The Outline Definition window is displayed.
You can exclude or include self and child objects from this window.
7. Right click the object and choose whether to include or exclude from the following options:
Exclude Children
Include Self and Children
Exclude Self and Children
8. Based on your selection, the objects are excluded or included. When the rule is migrated, the excluded or included objects are considered accordingly for the execution.
9. Click Object Selection link to go back to the previous window.
10. Select Save to save the outline.
11. Click the Menu Button and select Export to trigger the export.
After exporting the objects you can perform the following tasks:
Viewing an Export Definition
Editing an Export Definition
Generating a snapshot enables you to save the details of the objects at a given point, and you can restore the snapshot whenever required.
To generate a snapshot, perform the following steps:
1. Click the Add button in the Object Migration Export Summary window. The Outline Definition
window is displayed.
To create a snapshot for the export definition, perform the following steps:
1. From the Object Migration Export Summary window, select the Export definition for which you want to create a snapshot and click Generate Snapshot.
The saved snapshot is available in the Object Migration Export Summary window.
3. Click the Menu Button. You can select Export to export the snapshot or View Log to see the
latest changes.
The saved snapshot is available in the Object Migration Export Summary window. You can click the
Menu Button and select Export to export the snapshot. The Archive is created in FTPSHARE and you
can view the log files for details.
EXAMPLE PATH: /scratch/ofsaa/ftpshare/ObjectMigration/metadata/archive.
The Object Migration Import Summary window displays the list of pre-defined Import Definitions with
their Outline Id and Dump Name. By clicking the Column header names, you can sort the column
names in ascending or descending order. You can add, view, edit, copy, and delete Import Definition.
You can search for a specific Import Definition based on the Outline Id and Dump Name.
2. Select the dump file from the drop-down list. It displays the dump files in the
/ftpshare/ObjectMigration/metadata/restore folder. The objects in the dump file will
be displayed in the Available Objects pane.
3. Select the required Folder from the drop-down list. This is the default target folder if object
specific Folder is not provided. However, if both Folders are not specified, then source folder
available in the exported dump file will be considered as target folder.
4. Select Retain Ids as Yes or No from the drop-down list to retain the IDs of the source AMHM objects after migration.
If it is turned ON, the different scenarios and their behaviors are as follows:
• Object and ID do not exist in Target: the object is created in the target environment with the same ID as that in the source.
• Object exists in Target with a different ID: the object is migrated and the ID in the target is retained.
• ID already exists in Target with a different object: the object is migrated to the target environment and a new ID is generated.
• Same object and ID exist in Target: the behavior depends on the OVERWRITE flag.
5. Turn ON the Fail On Error toggle button to stop the import process if there is any error. If it is
set OFF, the import process will continue with the next object even if there is an error.
6. Turn ON the Import All toggle button to import all objects in the dump file to the target
environment.
7. Turn ON the Overwrite toggle button to overwrite any existing metadata. If it is turned OFF, it
will not overwrite the object and continue migrating the next object.
8. Click Save.
The Import definition will be available in the Object Migration Import Summary window.
9. Select the definition and click Import to execute.
10. A confirmation message is displayed. Click Ok to trigger the import process.
You can view the logs from /ftpshare/ObjectMigration/logs folder.
6. Turn ON the Fail On Error toggle button to stop the import process if there is any error. If it is
set OFF, the import process will continue with the next object even if there is an error.
7. Turn ON the Import All toggle button to import all objects in the dump file to the target
environment.
8. Turn ON the Overwrite toggle button to overwrite any existing metadata.
If it is turned OFF, it will not overwrite the object and continue migrating the next object.
9. Click Save.
The Import definition will be available in the Object Migration Import Summary window.
10. Select the definition and click Import to execute.
11. A confirmation message is displayed. Click Ok to trigger the import process.
You can view the logs from /ftpshare/ObjectMigration/logs folder.
You can view individual Import definition details at any given point.
To view an existing Import definition, perform the following steps:
1. From the Object Migration Import Summary window, click the Menu Button and select View.
The Import Objects window is displayed.
2. The Import Objects window displays the details of the selected Import definition like Outline ID,
Dump Name and the objects selected for importing.
You can update the existing Import definition details except the Outline ID.
You can add more objects for importing or removing the existing objects.
To modify the Import definition, perform the following steps:
1. From the Object Migration Import Summary window, click the Menu Button and select Edit.
The Import Objects window is displayed.
2. Update the required details.
For more information, see Creating Import Definition.
3. Click Save and update the changes.
This option allows you to quickly create a new Import definition based on an existing Import definition.
You need to provide a new Outline Id and can modify other required details.
To copy an existing Import definition, perform the following steps:
1. From the Object Migration Import Summary window, click the Menu Button and select Copy.
The Import Objects window is displayed.
2. Enter a unique Outline ID to identify the status of the migration process.
3. Update other details if required. For more information, see Creating Import Definition.
4. Click Save.
The following table lists the objects that are supported for implicit dependency and the dependent
objects:
DATA TRANSFORMATION NA
ALIAS NA
DATASET
BUSINESS MEASURE
DERIVED ENTITY
BUSINESS HIERARCHY
BUSINESS PROCESSOR
ALIAS
BUSINESS MEASURE
DERIVED ENTITY
DERIVED ENTITY
BUSINESS HIERARCHY
BUSINESS MEASURE
ALIAS
DATASET
DERIVED ENTITY
DATASET
BUSINESS PROCESSOR
DATASET
BUSINESS DIMENSION
ORACLE CUBE NA
MAPPER Hierarchies
FORMS TAB NA
DATASET
MEASURE
HIERARCHY
BUSINESS PROCESSOR
RULE
DATA ELEMENT FILTER
GROUP FILTER
ATTRIBUTE FILTER
HIERARCHY FILTER
EXTRACT DATA
LOAD DATA
TRANSFORM DATA
RULE
PROCESS PROCESS
CUBE
VARIABLE SHOCK
MODEL
LOAD DATA
TRANSFORM DATA
RULE
PROCESS
RUN
CUBE
VARIABLE SHOCK
MODEL
GROUP FILTER
ATTRIBUTE FILTER
HIERARCHY FILTER
MEMBERS
DIMENSION
ATTRIBUTES
BUSINESS HIERARCHY
FILTER ATTRIBUTES
FILTER
EXPRESSION EXPRESSION
SANDBOX 2 NA
BUSINESS HIERARCHY
BUSINESS MEASURE
VARIABLE
BUSINESS PROCESSOR
DATASET
TECHNIQUE NA
VARIABLE
BUSINESS HIERARCHY
TECHNIQUE
MODEL
VARIABLE
DATASET
BUSINESS HIERARCHY
DataElement Filter
RUN
STRESS
SCENARIO
CATALOG PUBLISH NA
USER PROFILE
ROLE FUNCTION
FUNCTION NA
PROFILE NA
PMF PROCESS NA
The following table describes the Object Name and Object SubType ID.
Object Name  Object SubType ID
DataElement Filter  4
Hierarchy Filter  8
Group Filter  21
Attribute Filter  25
• Financial Services Applications infrastructure objects such as Dimension, Hierarchy, Filter, and
Expression Rule.
• You can also migrate objects which are specific to applications such as Asset Liability
Management, Funds Transfer Pricing, or Profitability Management, if you have installed those
applications.
NOTE Apart from this method, you can migrate objects through
Command Line Utility to Migrate Objects or Offline Object
Migration (UI Based) process based on whether the objects you
want to migrate are supported in that approach.
NOTE If you have used the Master Table approach for loading
dimension data and set it up to generate surrogate keys for
members, this results in different IDs between the Source and
Target, so it may cause errors if you try to migrate objects which depend on these IDs.
• Migration of Infrastructure UAM Objects happens over a secure Java Socket based
communication channel. To facilitate effective communication between the Source and Target
systems and also to display the UAM objects from the source, you need to import the SSL
certificate of Source in to the Target. For information on importing SSL certificate, see How to
Import SSL Certificate for Object Migration (Doc ID 1623116.1).
• For Object migration across setups, migration process should always be triggered from the
target setup. You need to login to the target setup and select the required information domain.
Object Migration works more like an IMPORT into the Target. Thus, in case of migrating objects
within the same setup across Information Domains, you need to have logged into the Target
Information Domain in order to migrate the objects.
• Before migrating a DQ Group, ensure the DQ Rules present in that DQ Group are unmapped
from all other groups in the target. That is, if a DQ Rule is mapped to one or more DQ Groups in
the target, then it has to be unmapped from all the groups before migration.
• The following object types will not be migrated with their parent objects even though they are
registered as dependencies:
Currencies registered as dependents of Interest Rate Codes (IRCs).
Dimension Members registered as dependents.
Ensure that these dependencies exist in the target environment prior to the migration of parent
object.
You (AAI System Administrator) need to have FU_MIG_HP function role mapped to access the Object
Migration framework within Infrastructure.
The Object Migration Summary window displays the list of pre-defined Object Migration rules with the
other details such as Name, Folder, Source Infodom, Access Type, Modification Date, Last Execution
Date, Modified By, and Status. You can use the Search option to search for a required Object Migration
rule based on the Name or Folder in which it exists. The pagination option helps you to view the list of
existing Object Migration rules within the system.
In the Object Migration Summary window you can do the following:
• Defining Source Configuration
• Creating Object Migration Definition
• Viewing Object Migration Definition
• Modifying Object Migration Definition
• Copying Migration Rules
• Migrating Stored Object Rules
• Viewing Migration Execution Log
1. Click Configuration from the Object Migration tool bar. The Source Configuration window
is displayed with the pre-configured database details.
You can also click View Configuration to view the pre-configured database details.
2. Click adjacent to the Name field. The window is refreshed and enables you to enter the
required details.
3. Enter a Name for the source connection and add a brief Description.
4. Enter the Source Database details as tabulated:
Table 171: Fields in the Source Configuration window and their Descriptions
Source Infodom: Enter the source Information Domain on which the database exists.
Table 172: Fields in the Object Migration Details window and their Descriptions
Folder: Select the required folder from the drop-down list. This folder refers to the folder associated with the Object Migration rule.
Name: Enter a name for the Object Migration definition. Ensure that there are no special characters or extra spaces in the name specified.
Source: Select the required source configuration from the drop-down list. The list displays the available source configurations that are created from the Configuration window.
Overwrite Object: Select this checkbox to overwrite the target data, if source objects exist in the target setup.
Source Segment/Folder: All the registered objects for the selected source segment/folder are displayed in the Source Infodom table. Note: If you leave Source Folder blank, the Source Infodom table displays all objects in all the folders to which you have access in the source environment.
Object-type specific selections, such as Filter Type: For some object types, there are additional selections. For example, if you select the object type as Filters, you can select the required Filter Type from the drop-down list. The Source Infodom table displays all objects belonging to the selected Filter Type. If you leave Filter Type blank, all filters are displayed.
Source Infodom Table: All available objects are displayed based on your selection of object type and (if applicable) source segment/folder.
• Select the checkbox corresponding to the required object and click to migrate the object to the target folder. You can also double-click to select the required object.
• Click to select all the listed objects for migration.
• You can use the Search and pagination options to find the required object. Click the Search button and enter the name or description in the Search window. Use the Reset button to clear the search criteria.
• Use the button to find an object displayed on the current page.
Target Infodom Table: All objects which you have selected for migration are displayed.
• Select the checkbox corresponding to the required object and click to remove the object from migration. You can also double-click to remove the required object.
3. The Selected Objects grid shows all objects you have explicitly selected, for all object types.
4. Click the button in the Selected Objects tool bar to populate the complete object details, such as the Target Modification Date (if the object exists in the target Infodom) and the Operation (Add/Update) that can be performed during migration.
5. The Dependent Objects grid shows all objects which are migrated automatically because a selected (parent) object depends on them.
6. Click the button in the Dependent Objects tool bar to display the dependencies of the selected objects.
To view the dependencies of a specific object, click on the object Name in either the Selected
Objects grid or the Dependent Objects grid. The parent / child dependencies are displayed in
the Parent / Child Dependency Information window.
You can also toggle the view of Parent / Child dependency information by selecting Parent or
Child in the Dependency Information grid.
7. The Audit Trail section displays details about the creation and modification of the Object Migration rule after it is saved. You can add comments from the User Comments tab.
8. Click Migrate to save and migrate the selected source objects to the target setup, or click Save to save the Object Migration definition for future migration. You can later run the saved object migration rule. For more information, see the Migrate Stored Object Definition section.
Once the migration starts, the source objects are migrated to the target setup, and the migration details such as status, start time, and end time are recorded. You can click View Log in the Object Migration Summary window to view the details.
2. Click the View button in the Object Migration tool bar. The View - Object Migration window is displayed.
3. Click the button in the Selected Objects tool bar to refresh the properties.
4. Click the button in the Dependent Objects tool bar to display the dependencies of the selected object.
5. To view all dependencies of an object, click the object Name. The parent / child dependencies
are displayed in the Parent / Child Dependency Information window.
2. Click Edit in the Object Migration tool bar. The Edit - Object Migration window is displayed.
3. Edit the required details. For more information, see Creating Object Migration Definition.
In the Object Migration Summary window, you can also click the Delete button to delete the Object Migration Definition details.
2. Click Copy in the Object Migration tool bar. The Copy button is disabled if you have selected multiple migration rules.
3. Edit the Migration Rule Definition as required. You can modify the details such as Folder, Name,
Description, Access Type, Overwrite option, and also view the dependencies of the selected
objects. For more information, see Create Object Migration Definition.
4. Click Migrate to migrate the selected source objects to the target setup or click Save to save the
Object Migration definition for future migration.
The Config Schema Download window facilitates you to download data from configuration schema tables in Microsoft Excel 2003/2007 format, with the option to filter the data during download. The Config Schema Download window has restricted access, and you must have the Config Excel Advanced user role mapped to your user group to download configuration schema data.
To download Config Schema Data:
1. Select the table from the drop-down list. The list consists of those database objects (tables)
which are mapped to Configuration Schema based on a specific configuration.
2. Select the Format to download from the drop-down list. You can select either Microsoft Excel 2003 or 2007.
3. (Optional) If you want to download only the required data instead of the complete table data, specify a filter condition in the Filter (where clause) field.
For example, if you want to download Group Code details from the table “cssms_group_mast”,
you can specify the filter condition as:
select * from cssms_group_mast where v_group_code in ('AUTH')
4. Select Download.
The File Download dialog box is displayed, providing options to Open or Save a copy of the file in the selected Excel format.
3. In the Select the Sheet field, click the button. The Sheet Selector pop-up window is displayed. Select the required sheet from the drop-down list and click OK.
4. In the Upload Type options, select one of the following:
Incremental - In this type of upload, the data in the Excel sheet is inserted/appended to the target database object. The upload operation is successful only when all the data in the selected Excel sheet is uploaded. In case of any error, the uploaded data is rolled back.
Complete - In this type of upload, the data present in the selected database object is overwritten with the data in the selected Excel sheet. In case of an error, the data in the selected database object is reverted to its original state.
5. In the Source Date Format field, specify the date format used in the data that you are uploading. An insert query is formed based on the specified date format.
6. Select Upload. If you have selected the Complete upload type, you need to confirm overwriting the data in the confirmation dialog box.
An information dialog box is displayed with the status of the upload. You can click View Log to view the log file for errors and the upload status. The log file contains the following information:
• Database object (table) to which the data is uploaded.
• Name of the Excel file from which the data is uploaded.
• Number of records uploaded successfully.
• Number of records that failed during upload and the reason for failure.
• Upload status (Success/Fail).
13.7 Utilities
Utilities refer to a set of additional tools that help you fine-tune a defined process or maximize and ensure the security of a database, based on your need. The Utilities within the Administration framework of the Infrastructure system facilitate you to maintain the data in the Oracle database using various administrative tools. You can define user access permissions, batch securities, upload attributes, find metadata differences, and migrate source objects to the target database.
You (System Administrator) need to have the SYSADM function role mapped to access the Utilities section within the Infrastructure system. You can access the Utilities section within the Administration framework under the tree structure of the LHS menu.
To access various utilities, go to the Object Administration tab and click Utilities.
Administration Utilities consists of the following sections. Click on the links to view the sections in
detail.
• Metadata Authorization
• Metadata Difference
• Save Metadata
• Write-Protected Batch
• Component Registration
A list of the metadata versions is displayed along with the other details such as Code, Short
Description, Action Performed, and Performed By details for the selected metadata definition.
3. Select the checkbox adjacent to the required version of the selected metadata and do one of the
following:
Click Authorize to accept the metadata changes of the selected version.
Click Reject to ignore the metadata changes and delete the selected version.
The window is refreshed on every action and the updates are displayed in the respective tab of
the Metadata for Authorization window.
You (System Administrator) need to have SYSADM function role mapped to access the Metadata
Resave window. The Metadata Resave window displays the list of Available Metadata for Hierarchy
(default) for the selected Information Domain.
Select the required metadata from the Available Metadata list and click the button. You can press the Ctrl key for multiple selections.
You can also deselect a metadata by selecting it from the Selected Metadata list and clicking the button, or deselect all the selected metadata by clicking the button.
2. Click Save and update the metadata changes. Status of operation is displayed.
The Write-Protected Batch window displays the list of defined Batches for the selected Information Domain along with other details such as Batch Name, Batch Description, and Write-Protection status. By default, the Batch list is sorted in ascending order of the Batch Name, and the sort order can be changed by clicking the sort buttons.
To change the Editable State of Batch in the Write-Protected Batch window, do the following:
• To change the Batch state to "Non Editable", select the Write-Protected Batch checkbox of the required Batch in the list and click Save. The Batch details are restricted from being edited in the Batch Maintenance/Scheduler window.
• To change the Batch state to "Editable", deselect the Write-Protected Batch checkbox of the required Batch in the list and click Save. The Batch details can be modified as required in the Batch Maintenance/Scheduler window.
• You can also click Check All to write-protect (restrict editing of) all the Batches in the list, or click Uncheck All to remove the restriction and allow editing of all the Batches.
2. Select the required metadata by expanding the required node. Click OK.
You can also click button to clear the metadata and version selections.
The Patch Information window dynamically displays a list of applied patches and installed applications along with the Patch or Application Name, the Information Domain on which the patch/application has been installed, and Additional Information (if any). These records are fetched from the corresponding tables in the database and are sorted in ascending order of Applied Date by default.
You can search for a specific patch/application installation based on Patch/Application Name or
Information Domain.
3. Select the documents whose ownership you want to transfer from Available Documents by clicking the button. The documents are moved to the Selected Documents pane. You can click the button to select all documents.
4. Click Save.
13.8 References
This section of the document consists of information related to intermediate actions that need to be performed while completing a task. The procedures are common to all the sections and are referenced wherever required. You can see the following sections based on your need.
Sales Officer 6
• Sales Manager Auto Loans
Sales Officer 7
Sales Officer 8
Products
• Personal Loans
• Mortgages
• Credit Cards
• Auto Loans
Each product is marketed by a separate team, which is headed by a Sales Manager who reports to the Sales Head. Each Sales Manager in turn has two Sales Officers who are responsible for the sales and profitability of the product.
The Sales Head has decided that the Sales Officer of each product will not have access to the
information of other products. However, each Sales Manager will have access to Sales figures of the
other products.
Using the Oracle Infrastructure Security Hierarchy feature, the Administrator can provide information security at the hierarchy level by defining security options for each hierarchy node. Thus, the Bank can control access to information at a node level without increasing the overheads.
This is how it is done in Oracle Infrastructure:
• First, the users are created in Oracle Infrastructure and then, a business hierarchy (as defined
above) is created.
• Now, the bank can restrict access of certain information to certain people in the Hierarchy
Security configuration.
• In this window, the administrator can control security by mapping the users to various nodes in
hierarchy.
For example, the administrator maps Sales Officer 1 and Sales Officer 2 to only the Personal
Loans Node in the Product hierarchy. This restricts Sales Officer 1 and 2 to only viewing and
maintaining their particular node in the hierarchy.
By default, all the users mapped to a domain can access all the hierarchy levels to which they are mapped. This function allows the administrator to restrict or exclude one or more users from accessing restricted nodes.
Table 173: Details of the Role Code, Name, and their Descriptions
DEFQMAN DEFQ Manager Data Entry Forms and Query Manager Role
Batch Cancellation
Execute Batch
Batch Processing
Data Centre Manager Operator Console
Create Batch
View log
Delete Batch
DeFi Excel
Excel Admin
DEFQ Manager Defq User
Excel User
Defq Administrator
Fusion Expressions Admin Fusion Add Expressions Fusion Expressions Home Page
Configuration
Metadata Segment Map
Database Details
Infrastructure Operator Console
Database Server
Administrator Infrastructure Administrator
Hierarchy Security
Infrastructure Administrator Window
Information Domain
Modify Dimension
Add Dataset
Modify Hierarchy
Add Dimension
Modify Measure
Add Hierarchy
Modify Oracle Cube
Add Measure
View Alias
Oracle Cube Administrator Add Oracle Cube
View Dataset
Authorize Oracle Cube
View Dimension
Business Analyst User Window
View Hierarchy
Delete Oracle Cube
View Measure
Modify Dataset
View Oracle Cube
Administration Window
Infrastructure Administrator Window
Profile Maintenance Window
System Authorizer
System Administrator Window
System Authorizer
User Authorization Window
14.1.1 Prerequisites
• You must have access and execution rights in the $FIC_HOME/utility/Migration/ directory in
both the source and target environment.
• Folders (segments) and user groups that are designated for the import should be present in the
target.
• The source and target environments should have the same installed locales.
• OFSAA users in the source should be the same in the target (at least for users associated with the objects being migrated).
• OFSAA users should have access to the folders in the target as well as the source.
• The underlying tables of the objects being migrated should exist in the target.
For example, if you want to migrate a Data Element Filter based on "Table A" and "Table B" in
the source, those two tables should exist in the target.
• For AMHM Dimensions and Hierarchies:
The key processing Dimensions should be the same in both the source and target
environments.
For Member migration, the Dimension type should have the same attributes in both source
and target environments.
Numeric Dimension Member IDs should be the same in both the source and target
environments, to ensure the integrity of any Member-based objects.
NOTE If you have used the Master Table approach for loading
Dimension data and set it up to generate surrogate keys for
Members, this results in different IDs between the source and
target, so it may cause errors if you have objects which depend
on these IDs.
• All objects that generate a new ID after migrating to a different Information Domain, and all components that are registered through the Component Registration window and will be used in the RRF, must be manually entered in the AAI_OBJ_REF_UPDATE table in the Configuration Schema. The implicit migration of dependent objects is not supported; they should be migrated explicitly. The attributes present in the table are:
V_OBJECT_TYPE - EPM Object Type
V_RRF_OBJECT_TYPE - RRF Object Type. The ID can be referred from the pr2_component_master table.
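For illustration only, a manual entry could be made with a statement along the following lines. This is a minimal sketch that assumes the two attributes listed above are sufficient for the entry; the values shown are placeholders, not actual type IDs:
insert into AAI_OBJ_REF_UPDATE (V_OBJECT_TYPE, V_RRF_OBJECT_TYPE)
values ('<EPM object type>', '<RRF object type from pr2_component_master>');
commit;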
NOTE The values in the properties file are updated by the installer. If
you want to run this utility from another location, the values
should be specified accordingly.
Table 175: Names in the Object Migration XML and their Descriptions
Name Description
3. Update the OBJECTMIGRATION.xml file as explained below based on whether you want to
import or export objects:
USERID Specify the user ID of the OFSAAI user who will be running the
migration utility. Ensure the user is mapped to the specific
source Information Domain / Segment.
The user id should be provided in capital letters.
Note: The User ID or Service accounts are “SMS Auth Only” in
case of SSO and LDAP configured setups.
FILE Specify the name of the dump file which will be created under
$FIC_HOME/utility/Migration/metadata/archive folder as a
.DMP file.
MIGRATION_CODE Enter the unique migration code to identify the status of the
migration process.
For example: 8860
OBJECT Code Specify the object Code which should be a unique identifier of
the definition according to the Type of the object in the
Information Domain. Code should be either system generated
or user defined unique code. See the Objects Supported for
Command Line Migration section to know for a particular
object whether it is user defined or system generated.
Note: Object Code is case sensitive.
You can specify the Code value as wildcard “*” if you are
migrating all objects of that Type.
For example, to export all Rules from RRF:
<OBJECTS>
<OBJECT Code="*" Type="112" />
</OBJECTS>
To export multiple objects of a particular object type, multiple
entries with each object code should be made in the
OBJECTMIGRATION.xml file.
For example, if you want to export three different rules, the
entries should be made as given below:
<OBJECTS>
<OBJECT Code="Rule Code_1" Type="112" />
<OBJECT Code="Rule Code_2" Type="112" />
<OBJECT Code="Rule Code_3" Type="112" />
</OBJECTS>
To export ETL objects, the format is the Data Mapping Code followed by Type="122".
For example,
<OBJECT Code="FCTPRODUCT" Type="122" />
Note: Only the latest version will be archived and it will be restored as a new version.
To export Enterprise Modeling Objects which support versioning, the version of the object should be a part of the Code attribute.
<OBJECTS>
<OBJECT Code="ModelID_Version" Type="1305" />
</OBJECTS>
SubType SubType is available for Filters and AMHM hierarchy only. This
is a mandatory field.
For filters, SubType indicates the type of the filter. For
hierarchies, this indicates the Dimension ID.
See the table for filter SubTypes.
Example: For Group Filter,
<OBJECTS>
<OBJECT Code="200265" Type="1" SubType="21"/>
</OBJECTS>
1. After you have updated the files with required information in the source environment, navigate
to $FIC_HOME/utility/Migration/bin path and execute ObjectMigration.sh. The
dump file will be created.
2. Once executed, you can view the related log files from the
$FIC_HOME/utility/Migration/logs location.
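For example, assuming a standard installation, the export can be run and verified from the console as follows (log file names depend on your setup):
cd $FIC_HOME/utility/Migration/bin
./ObjectMigration.sh
# The .DMP file is created under $FIC_HOME/utility/Migration/metadata/archive
# and the logs are written to $FIC_HOME/utility/Migration/logs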
USERID Specify the user ID of the OFSAAI user who will be running
the migration utility. Ensure that the user is mapped to the
specific target Information Domain / Segment.
The user id should be provided in capital letters.
Note: The User ID or Service accounts are “SMS Auth Only” in
case of SSO and LDAP configured setups.
FOLDER Specify the Code of the folder /segment to which you need to
import objects.
This field is optional. The folder value should be provided in
capital letters.
Note: This is the default target folder if object specific
TargetFolder is not provided. However, if both FOLDER and
TargetFolder are not specified, then source folder available
in the exported dump file will be considered as target folder.
For behavior in this release, see Limitations section.
IMPORTALL Y indicates that all exported objects in the .DMP file (dump)
will be imported (regardless of any specific OBJECT entries in
the OBJECTMIGRATION.XML file).
Example:
<IMPORTALL TARGETFOLDER="BASEG">Y</IMPORTALL>
N indicates that only objects explicitly specified in the
OBJECTMIGRATION.XML file will be imported (provided they
are already exported and available in the dump file).
Note: When migrating Sandbox, IMPORTALL should be N.
OBJECT Code Specify the object Code which should be a unique identifier
of the definition according to the Type of the object in the
Information Domain. Code should be either system
generated or user defined unique code. See the Objects
Supported for Command Line Migration section to know for a
particular object whether it is user defined or system
generated.
Note: Object Code is case sensitive.
You can specify the Code value as wildcard “*” if you are
importing all objects of that Type.
For example:
<OBJECTS>
<OBJECT Code="*" Type="112" />
</OBJECTS>
To import multiple objects of a particular metadata type,
multiple entries with each metadata code should be made in
the OBJECTMIGRATION.XML file.
For example, if you want to import three different rules, the
entries should be made as given below:
<OBJECTS>
<OBJECT Code="Rule Code_1" Type="112" />
<OBJECT Code="Rule Code_2" Type="112" />
<OBJECT Code="Rule Code_3" Type="112" />
</OBJECTS>
Note: Specify only those Codes that are present in the
exported dump file.
To import Enterprise Modeling Objects which support versioning, the version of the object should be a part of the Code attribute.
<OBJECTS>
<OBJECT Code="ModelID_Version" Type="1305" />
</OBJECTS>
Type Specify the Type ID of the required metadata objects to be
imported. Refer to the Objects Supported for Command Line
Migration section.
Note: You need to specify only those Types, which are
present in the exported dump file.
1. Once you have updated the files with the required information in the target environment:
Create the metadata/restore folder under the $FIC_HOME/utility/Migration directory (if not present).
Copy the exported .DMP file that needs to be imported to the $FIC_HOME/utility/Migration/metadata/restore folder.
Navigate to the $FIC_HOME/utility/Migration/bin path and execute ObjectMigration.sh.
2. Once executed, you can view the related log files from the
$FIC_HOME/utility/Migration/logs location.
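For example, assuming the exported dump file is named TEST_001.DMP (a hypothetical name), the console sequence would look like this:
mkdir -p $FIC_HOME/utility/Migration/metadata/restore
cp /tmp/TEST_001.DMP $FIC_HOME/utility/Migration/metadata/restore/
cd $FIC_HOME/utility/Migration/bin
./ObjectMigration.sh
# Check $FIC_HOME/utility/Migration/logs for the import status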
USERID: Specify the user ID of the OFSAAI user who will be running the migration utility. Ensure the user is mapped to the specific source Information Domain / Segment. The user ID should be provided in capital letters.
Note: The User ID or Service accounts are "SMS Auth Only" in case of SSO and LDAP configured setups.
Table 179: Column Names in the export file and their Descriptions
Object Code Specify the object Code which should be a unique identifier of the definition
based on the Object Type. It should be either system generated or user
defined unique code. See the Objects Supported for Command Line
Migration section to know for a particular object whether the code is user
defined or system generated.
You can specify the object Code value as wildcard “*” if you are migrating all
objects of that Object Type.
Object Type Specify the Type ID of the required metadata objects to be exported. Refer to
the Objects Supported for Command Line Migration section.
Object Sub Type SubType is available for Filters and AMHM hierarchy only. This is a
mandatory field.
For filters, SubType indicates the type of the filter. For hierarchies, this
indicates the Dimension ID.
See the table for filter SubTypes.
Sandbox Infodom Specify the Sandbox Information Domain name to export Sandbox.
With Models Specify Y if you want to export all models present in the Sandbox Infodom
along with the Sandbox.
Specify N if you want to export only the Sandbox.
Include Dependency Specify Y if you want to export all dependent objects along with the base
objects.
Specify N if you want to export only the mentioned object.
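A minimal sketch of export_input.csv rows, assuming the columns appear in the order listed in Table 179 (Object Code, Object Type, Object Sub Type, Sandbox Infodom, With Models, Include Dependency) and using hypothetical object codes; the first row exports a Rule (Type 112) along with its dependencies, and the second a Group Filter (Type 1, SubType 21) without dependencies:
RULE0001,112,,,,Y
200265,1,21,,,N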
1. After entering the required details of the objects you want to export in the export_input.csv file, navigate to the $FIC_HOME/utility/Migration/bin path and execute ObjectMigration.sh. The dump file is created, and it includes an import_input.csv with a list of all objects (including dependent ones) that are being exported.
2. Once executed, you can view the related log files from the
$FIC_HOME/utility/Migration/logs location.
Table 180: Column Names in the export file and their Descriptions
Object Code Specify the object Code which should be a unique identifier of the definition
based on the Object Type. It should be either system generated or user defined
unique code. See the Objects Supported for Command Line Migration section
to know for a particular object whether the code is user defined or system
generated.
You can specify the Object Code value as wildcard “*” if you are importing all
objects of that Object Type.
Note: Specify only those Codes that are present in the exported dump file.
Object Type Specify the Type ID of the required metadata objects to be imported. See the
Objects Supported for Command Line Migration section for Object Type IDs.
Object SubType SubType is available for Filters and AMHM hierarchy only. This is a mandatory
field.
For filters, SubType indicates the type of the filter. For hierarchies, this
indicates the Dimension ID.
See the table for filter SubTypes.
Sandbox Infodom Specify the Sandbox Information Domain name to import Sandbox.
With Models Specify Y if you want to import all models present in the Sandbox Infodom
along with the Sandbox.
Specify N if you want to import only the Sandbox.
Include Dependency Specify Y if you want to import all dependent objects along with the base
objects.
Specify N if you want to import only the mentioned object.
Is Base Object This attribute is for information and is not read while processing the input. This
will be set as Y if the exported object is a base object and will be N for all the
exported dependent objects.
Object Group and Object Group Target Folder: Specify a unique ID for the Object Group and the folder to which you want to import all the objects in that Object Group.
If Object Group is not specified, by default it takes the Object Group ID of the preceding entry with an Object Group. If the Object Group ID for the first entry is not explicitly entered, it is assigned the value '1'.
If the Object Group ID is specified and Object Group Target Folder is kept blank, the objects of that Object Group will be imported to the folder mentioned in the FOLDER tag in the migration.properties file. If that is also not mentioned, they will be imported to the source folder mentioned in the dump file.
Note: An object with an Object Group ID different from that of the preceding object will go to a new group. Hence, enter all the objects which you want to import to the same folder successively.
For example, consider the following Object Group assignments in the import_input.csv file:
• mig_group_001 and mig_group_002 belong to Group 1 and they will be imported to folder EMFLD.
• mig_group_003 and mig_group_004 belong to Group 2 and they will be imported to folder IPEFLD.
• mig_group_005 will be imported to the default folder set under the <FOLDER> tag.
• mig_group_006 will be imported to the default folder set under the <FOLDER> tag even though its Object Group ID is the same as that of mig_group_001. If you want mig_group_006 to be imported to the same folder (EMFLD), then either you have to explicitly give the Object Group Target Folder along with the Object Group, or the mig_group_006 entry should be inserted before a change in the Object Group ID, that is, in the previous example, before the entry for mig_group_003.
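A minimal sketch of the corresponding import_input.csv rows, assuming the column order listed in Table 180 (Object Code, Object Type, Object SubType, Sandbox Infodom, With Models, Include Dependency, Is Base Object, Object Group, Object Group Target Folder); the object type, Include Dependency, and Is Base Object values are placeholders:
mig_group_001,112,,,,Y,Y,1,EMFLD
mig_group_002,112,,,,Y,Y,1,
mig_group_003,112,,,,Y,Y,2,IPEFLD
mig_group_004,112,,,,Y,Y,2,
mig_group_005,112,,,,Y,Y,3,
mig_group_006,112,,,,Y,Y,1,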
14.1.4 Limitations
• For AMHM objects, irrespective of the values specified in the TargetFolder or FOLDER tags, the objects are migrated to the source folder available in the exported dump file. Hence, ensure that a folder with the same name as in the dump file is present in the target environment.
• Ensure the specified Folder is present in the target environment during the IMPORT operation. Currently, validation is not done.
1 Only Data Transformation objects, that is, Post Load Changes definitions based on Stored Procedures, are supported for migration.
2 Object migration support for Slowly Changing Dimensions definitions is available from the OFSAAI 8.1.2.2.0 release.
3 You can specify the name of the sandbox Infodom which you want to migrate for the SANDBOXINFODOM attribute and Y for the WITHMODELS attribute to migrate the models along with the sandbox.
The following table provides the details of the Object Names and their Types.
DataElement Filter 4
Hierarchy Filter 8
Group Filter 21
Attribute Filter 25
ALIAS 54 NA NA
DATASET 104
ALIAS 54
BUSINESS MEASURE 101
DERIVED ENTITY 128
ALIAS 54
DATASET 104
DERIVED ENTITY 128
DATASET 104
DATASET 104
DATASET 104
MEASURE 101
HIERARCHY 103
GROUP FILTER 21
ATTRIBUTE FILTER 25
HIERARCHY FILTER 8
RULE 112
CUBE 106
MODEL 1305
RULE 112
PROCESS 111
RUN 110
CUBE 106
MODEL 1305
GROUP FILTER 21
ATTRIBUTE FILTER 25
HIERARCHY FILTER 8
MEMBERS NA
DIMENSION 12
ATTRIBUTES NA
FILTER 1 ATTRIBUTES NA
FILTER 1
EXPRESSION 14 EXPRESSION 14
SANDBOX 2 1300 NA NA
DATASET 104
TECHNIQUE 1302 NA NA
VARIABLE 1301
TECHNIQUE 1302
MODEL 1305
VARIABLE 1301
DATASET 104
DataElement Filter 4
RUN 110
STRESS 1306
SCENARIO 1304
FUNCTION 2003 NA NA
PROFILE 2004 NA NA
Questionnaire NA NA
Configuration Attributes 8001
Questionnaire Configuration
8001
Questionnaire Definitions 8003 Attributes
14.1.7.1 Prerequisites
To ensure successful migration of all mappings, you must import the SMS objects in the following
order:
• Functions
• Roles
• User Group
• User
For example, if you want to import a User-User Group mapping, then you must migrate the User Group first, followed by the User.
For more information on migrating objects, see the Migrating Objects section.
MisDate: Refers to the date with which the data for the execution would be filtered.
BuildFlag: Build Flag refers to the pre-compiled rules, which are executed with the query stored in the database. A Build Flag status set to "No" indicates that the query statement is formed dynamically by retrieving the technical metadata details. If the Build Flag status is set to "Yes", the relevant metadata details required to form the rule query are re-compiled in the database.
For example,
ksh RuleExecution.sh RRFATOM_exec_rule_20120904_1 RULE_EXECUTION Task1
20120906 EDW RRFATOM A.B.C.D 1344397138549 N
'$RUNID=,$PHID=,$EXEID=,$RUNSK='
3. You can access the location $FIC_HOME/utility/RuleExecution/logs to view the related log files. Also, the component-specific logs can be accessed in the location fic_home/ftpshare/logs.
14.2.2 Command Line Utility for Fire Run Service / Manage Run Execution
The Manage Run Execution utility can be used to execute Run definitions through RESTful Web Services calls. To achieve this, a RESTful Service, a Client, and a Shell script are available.
MISDATE: Refers to the date with which the data for the execution would be filtered.
14.3.1 Prerequisites
• All the required XML files, such as the TFM XML, ETL Repository XML, Definition XML, Properties XML, and Mapping XML, must be present in the standard paths (relative to the ftpshare folder).
• The table AAI_ETL_SOURCE must be present in the Config Schema, with all appropriate information.
• Ensure the DMTUpgradeUtility_806.sh file is present in the $FIC_HOME/utility/DMT/Migration/bin folder.
• Ensure the aai-dmt-migration.jar file is present in $FIC_HOME/utility/DMT/Migration/lib. (This jar and other dependent OFSAA jars are available in the aforementioned path. The DMTUpgradeUtility_806.sh file contains the list of such jars.)
• Ensure the Clusters.XML file is present in the $FIC_HOME/conf directory.
• Ensure the ETLLoader.properties file is present in the $FIC_HOME/ficdb/conf directory.
To run the utility directly from the console:
1. Navigate to $FIC_HOME/utility/DMT/Migration/bin folder.
2. Execute ./DMTUpgradeUtility_806.sh with the following arguments:
METADATA TYPE: Specify the metadata type that you want to migrate.
• ALL - to migrate all metadata types.
• Enter the specific metadata type that you want to migrate. The available metadata types are DMT_SRC, DMT_PLC, DMT_DM (to migrate F2T, T2T, and T2F), CLUSTERINFO (to migrate Cluster information), and ETLPROPINFO (to migrate ETLLoader.properties).
Note: The DMT_SRC Metadata Type is supported only for Migration Type set as UPGRADE and ONLY_DEFINITION. Only Data Sources based on Table and WebLog are supported for migration.
INFODOM NAME: Specify the information domain name. This argument is applicable only for MIGRATION TYPE set as ONLY_DEFINITION and ONLY_DEFINITION_AS_VERSION.
• ALL - to migrate metadata from all information domains.
• Enter the specific information domain name if you want to migrate metadata of a particular information domain only.
DEFINITION NAME: Specify the definition name that you want to migrate. This argument is applicable only for MIGRATION TYPE set as ONLY_DEFINITION and ONLY_DEFINITION_AS_VERSION.
• ALL - to migrate all definitions.
• Enter the specific definition name that you want to migrate.
• For the DMT_SRC metadata type, specify as <Source Name 1>~<Infodom 1>,<Source Name 2>~<Infodom 2>,<Source Name 3>~<Infodom 3>. That is, a list of source and corresponding Infodom combinations separated by commas.
• For the DMT_DM metadata type, specify as <Application Name>~<Source Name>~<Definition Name>.
• For the DMT_PLC metadata type, specify the definition name.
In this scenario, the specified metadata type will be migrated to the corresponding tables by
incrementing the version if the definition already exists in the target environment. If
<METADATA_TYPE> is set as ALL, all metadata types will be migrated.
For example,
./DMTUpgradeUtility_806.sh UPGRADE_AS_VERSION DMT_PLC
Note that INFODOM NAME and DEFINITION NAME will be implicitly set to ALL, irrespective of what
the user sets.
If metadata type is not set, it is implicitly set as ALL. For example, if you execute the following
command, all metadata will be migrated:
./DMTUpgradeUtility_806.sh UPGRADE_AS_VERSION
MIGRATION TYPE set as ONLY_DEFINITION
./DMTUpgradeUtility_806.sh ONLY_DEFINITION <Metadata type> <information
domain name> <Definition name>
This mode is used to migrate XML data of a particular definition to the corresponding tables. In this
mode, it is mandatory to set METADATA TYPE, INFODOM NAME and DEFINITION NAME arguments.
Otherwise, the utility execution will fail.
For example,
./DMTUpgradeUtility_806.sh ONLY_DEFINITION DMT_DM OFSAAINFO <Application
Name>~<Source Name>~<Definition Name>
./DMTUpgradeUtility_806.sh ONLY_DEFINITION DMT_SRC <Source Name 1>~<Infodom 1>,<Source Name 2>~<Infodom 2>,<Source Name 3>~<Infodom 3>
NOTE The Metadata Type DMT_SRC is supported only for table based
sources in ONLY_DEFINITION mode.
For Metadata Type DMT_DM, <information domain name>
should be a valid Infodom name, but the definition will not be
migrated to the specified Infodom name. It will be migrated to
all its mapped Information Domains, which are listed in the
ETLrepository.xml file.
In case of rerun of the migration utility, if a metadata definition is already present in the target
environment, that definition will be skipped.
MIGRATION TYPE set as ONLY_DEFINITION_AS_VERSION
./DMTUpgradeUtility_806.sh ONLY_DEFINITION_AS_VERSION <Metadata type>
<information domain name> <Definition name>
This mode is used to migrate XML data of a particular definition to the corresponding tables by
incrementing the version if the definition already exists in the target environment. In this mode, it is
mandatory to set METADATA TYPE, INFODOM NAME and DEFINITION NAME arguments. Otherwise,
the utility execution will fail.
For example,
./DMTUpgradeUtility_806.sh ONLY_DEFINITION_AS_VERSION DMT_DM OFSAAINFO
F2Tdefinition1
For Metadata Type DMT_DM, <information domain name> should be a valid Infodom name, but the
definition is not migrated to the specified Infodom name. It will be migrated to all its mapped
Information Domains, which are listed in the ETLrepository.xml file.
3. The following properties have been changed and will not be migrated from the
ETLLoader.properties file into the AAI_DMT_DB_CLUSTER_PROPERTY table. The user must
manually update the AAI_DMT_DB_CLUSTER_PROPERTY table with the new values, or use the
DMT Configurations window to update these values. The values must go into source or target
clusters as required.
SQOOPSERVER_NAME -> SSH_HOST_NAME
SQOOPSERVER_SSH_PORT -> SSH_PORT
SQOOPSERVER_SSH_USERID -> SSH_USERID
SQOOPSERVER_SSH_PASSWORD -> SSH_PASSWORD
4. In case of PLC Migration, ensure that the function defined for the Stored Procedure in the <Infodom name>_TFM.XML is the same as the actual function in the Atomic Schema. In case of a mismatch, in the Edit mode of the PLC definition, the actual function in the Atomic Schema is replaced by the function in the <Infodom name>_TFM.XML. If the SQL in the Transformation has compilation errors, modification of the PLC definition will fail.
14.3.4 Logs
The following logs will be created in $FIC_HOME/utility/DMT/Migration/log folder:
• DMTMigrationUtility.log- This is a debug log. All parsing related information will be
available in this log file.
• DMTMigrationUtilityReport.log - This log file gives the status of all metadata that have
been migrated.
For errors during metadata save, see <Deployed Path>/webroot/logs/OFSAA.log.
14.3.5 Troubleshooting
In case of unsuccessful migration, refer to the following logs for further debugging:
1. Make a note of the failed T2Ts, if any, from the report log (DMTMigrationUtilityReport.log). If the migration failed due to seeded XML errors, the errors are logged in the detailed migration log (DMTMigrationUtility.log). Search this log with the Definition code to find the exact error.
2. If this does not give sufficient information, see $ftpshare/logs/Migration/DMT/DMTMigrationService.log for further details. Search this log with the Definition code to find the exact error.
NOTE For FAQs and use cases related to DMT Metadata Migration
Utility, see FAQ section in OFSAA DMT Metadata Migration
Guide.
Prerequisites
• Ensure the following files are present in $FIC_HOME/utility/DMT/encryption/bin folder.
dmtfileencryption.sh
aai-dmt-encryption.jar
log4j-core*.jar
log4j-api*.jar
• Since the utility uses AES 256 bit encryption, it is mandatory to apply policy files. Perform the
following instructions to apply policy files:
a. Download the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy
Files from Oracle. Be sure to download the correct policy file updates for your version of
Java (Java 7 or 8).
b. Uncompress and extract the downloaded file. The download includes a Readme.txt and two
.jar files with the same names as the existing policy files.
c. Locate the two existing policy files inside the folder <java-jre-home>/lib/security/.
local_policy.jar
US_export_policy.jar
d. Replace the existing policy files with the unlimited strength policy files you extracted.
To run the utility directly from the console:
1. Navigate to $FIC_HOME/utility/DMT/encryption/bin folder.
2. Execute ./dmtfileencryption.sh with the following arguments:
Modes of Operation
Based on the value specified for the argument MODE, the utility can be operated in different modes:
MODE set as genkey
./dmtfileencryption.sh genkey <KEYFILE>
In this mode, the utility takes as input the absolute path to which the key has to be written. It creates a 256-bit AES key and writes it to the location given in the <KEYFILE> attribute.
MODE set as encrypt_file
./dmtfileencryption.sh encrypt_file <INPUTFILE> <OUTPUTFILE> <KEYFILE>
In this mode, the utility takes the input file path, output file path, and key file path as inputs. Using the 256-bit AES key in the given key path, the input file is encrypted and written to the given output file path.
MODE set as decrypt_file
./dmtfileencryption.sh decrypt_file <INPUTFILE> <OUTPUTFILE> <KEYFILE>
In this mode, the utility takes the input file path, output file path, and key file path as inputs. Using the 256-bit AES key in the given key path, the input file is decrypted and written to the given output file path.
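For example, the three modes could be run in sequence as follows; the paths shown are placeholders:
cd $FIC_HOME/utility/DMT/encryption/bin
./dmtfileencryption.sh genkey /scratch/ofsaa/keys/dmt_aes.key
./dmtfileencryption.sh encrypt_file /scratch/ofsaa/in/stage_data.csv /scratch/ofsaa/out/stage_data.csv.enc /scratch/ofsaa/keys/dmt_aes.key
./dmtfileencryption.sh decrypt_file /scratch/ofsaa/out/stage_data.csv.enc /scratch/ofsaa/out/stage_data.csv /scratch/ofsaa/keys/dmt_aes.key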
Logs
The DMTFileEncryption.log file will be created in $FIC_HOME/utility/DMT/encryption/log
folder.
5. The publish execution specific log information is present in the MDBPublish.log file available
at the <DEPLOYED LOCATION>/<Context>.ear/<Context>.war/logs folder.
To run the utility through the Operations module:
1. Navigate to the Operations module and define a batch.
2. Add a task by selecting the component as RUN EXECUTABLE.
3. Enter Metadata Value as mentioned in the example.
For Example:
Component ID: RUN EXECUTABLE
Metadata Value (Executable) like:
MDBPublishExecution.sh,LANG611INFO
(where LANG611INFO is the Infodom)
Batch = Y
4. You can access the location $FIC_DB_HOME\log\MDBObjAppMap.log to view the related log
files.
NOTE The User ID or Service accounts are “SMS Auth Only” in case of
SSO and LDAP configured setups.
HIERARCHY Code- Specify the hierarchy codes separated by tilde “~” or caret “^” to resave
only those hierarchies. Specify the hierarchy codes separated by exclamation mark “!” to
exclude those hierarchies from resaving.
Asynchronous Mode- Specify whether you want to save the hierarchy in synchronous
manner or not. No indicates saving of hierarchies will happen only after the population of
the REV_BIHIER and REV_LOCALE_HIER tables in the atomic schema. This is an optional
parameter and if it is not mentioned, it will be in asynchronous mode.
./RUNIT.sh INFODOM USERID HIERARCHY_CODE1^HIERARCHY_Code2 OPTIONAL
PARAMETER
Example 1:
./RUNIT.sh OFSAAINFO AAAIUSER HR01^HR02 NO
Or
./RUNIT.sh OFSAAINFO AAAIUSER HR01~HR02 NO
This will resave the hierarchies HR01 and HR02 in the OFSAAINFO information domain.
Example 2:
./RUNIT.sh OFSAAINFO AAAIUSER HIE001!HIE002 NO
This will resave all the hierarchies in the OFSAAINFO information domain except the hierarchies
HIE001 and HIE002.
NOTE The User ID or Service accounts are “SMS Auth Only” in case of
SSO and LDAP configured setups.
Metadata Service Type – 856 for Derived Entity and 5 for Essbase Cube
Derived Entity Code for resaving Derived Entities- Specify the derived entity codes
separated by tilde “~”
Or
Essbase Cube Code for resaving Essbase cubes- Specify the Essbase Cube code.
Runtime filter- In case of derived entity, specify the runtime filter to refresh only a selected
set of records.
For example,
For resaving Derived Entities:
./MetadataReSave.sh,INFODOM,USERID,856,<Derived Entity code1>~<Derived
Entity code2>
For resaving Derived Entities with Runtime Filters:
./MetadataReSave.sh OFSAAAIINFO AAAIUSER 856 DE006 3^4 -f
"DIM_ACCOUNT.f_Latest_Record_Indicator = 'Y'"
For resaving Essbase Cube:
./MetadataReSave.sh,INFODOM,USERID,5,<Essbase Code>
NOTE ~ is not supported for Essbase Cubes. Only one Essbase Cube
can be resaved at a time.
14.8.1 Command Line Utility for Resave, Refresh and Delete Partitions
A command line utility called RefreshByPartition.sh is available to resave, refresh and delete
partitions.
To run the utility directly from the console:
1. Navigate to $FIC_DB_HOME/bin of OFSAAI FIC DB tier.
2. Execute RefreshByPartition.sh with proper parameters:
./RefreshByPartition.sh <DSNNAME> <USERNAME> <METADATA SERVICE TYPE>
[<METADATACODE>] <ADD_or_REFRESH_PARTITIONS(SEPARATED BY "^")>
<DELETE_PARTITION(SEPARATED BY "^")>
<DSNNAME> - Information Domain name
<USERNAME> - User Name of the logged in user
<METADATA SERVICE TYPE> - 856 for Derived Entity
[<METADATACODE>]- Derived Entity Code for which you want to refresh, add or delete
partitions
<ADD_or_REFRESH_PARTITIONS> - Specify the Partitions which need to be added or refreshed, separated by ^
<DELETE_PARTITION> - Specify the Partitions which need to be deleted, separated by ^
For example:
./RefreshByPartition.sh TESTCHEF TESTUSER 856 DE003 1^2^3^4^5^6 2^4
Consider that partitions 1, 2, 3, and 4 already exist. In this case, 1 and 3 will be refreshed, 5 and 6 will be added, and 2 and 4 will be deleted.
V_PARAM_NAME: The partition parameter type. For example, addp, delp, and filter.
V_PARAM_VALUE: The value for the partition parameter type. For example, in the preceding illustration, the value for addp is 7^8, the value for delp is 3^4, and the value for filter is DIM_STANDARD_ACCT_HEAD_LATEST_RECORD_INDICATOR="Y".
V_PRESCRIPTS: The query for prescripts to modify connection session attributes. This gets executed before firing the derived entity queries.
V_POSTSCRIPTS: The query for postscripts to roll back connection session attributes. This gets executed after completion of the derived entity queries.
NOTE Define the Derived Entity ICC Batch with the Batch Parameter
as Y to capture the execution log that identifies with the batch
Id.
To view the logs for a particular execution, filter using the following columns:
• V_BATCH_RUN_ID
• V_TASK_ID
• V_METADATA_CODE
NOTE This migration assumes the same user group exists in OFSAA.
4. To migrate only user-user group mapping from LDAP server to OFSAA, execute the following
command:
ldapmigration.sh <user> <password> LDAPTOSMS usergroupmap <ldap_server>
<group_search_filter> <group_base>
NOTE This migration assumes the same user group exists in OFSAA.
5. To migrate user groups from OFSAA to LDAP server, execute the following command:
ldapmigration.sh <user> <password> SMSTOLDAP group <ldap_server>
<group_search_filter>
where
<user>- Specify SYSADMN as the user name.
<password>- Specify SYSADMN password.
<ldap_server>- Specify the LDAP server name. For example, ORCL1.in.oracle.com.
<user_search_filter>- Specify filter condition for user search.
<user_base>- Specify user context base.
<group_search_filter>- Specify filter condition for user group search.
<group_base>- Specify group context base.
For example,
ldapmigration.sh SYSADMN password1 SMSTOLDAP group ORCL1.in.oracle.com
OFSAAGRP
ldapmigration.sh SYSADMN password1 LDAPTOSMS user ORCL1.in.oracle.com
objectclass=organizationalPerson cn=Users,dc=oracle,dc=com
dateent.jar
NOTE Ensure that you are provided with the execute permission.
The following are the descriptions for the arguments in the upload.sh file:
<infodom> - Refers to the DSN name, that is, the information domain to which the model upload is to be done.
<entire file path> - Refers to the entire file path of the erwin XML or Database XML.
For example, $FTP_SHARE/$INFODOM/erwin/erwinXML/PFT_model.xml. Set this as
Null for DB Catalog and Data Model Descriptor options.
<username> - Refers to the username of the OFSAA Application.
NOTE The User ID or Service accounts are “SMS Auth Only” in case of
SSO and LDAP configured setups.
DDL Logs Flag- Set this as TRUE to print execution audit logs for Scripts. The logs can be
found at ftpshare/<infodom>/executelogs/<infodom>_DDLLOG_<last data model
version>_<MM.DD.YYYY>-<HH.MM.SS>.log.
Refresh Params – Set this as TRUE to use Database session parameters during model
upload process, else set this as FALSE.
Object Registration Mode – Set it as F for full Object Registration or I for incremental
object registration.
The various parameters to be passed for different modes are shown in the following matrix:
Table 189: Details of the Java settings for both client and server machines
Model Upload Size of Data Model XML File X_ARGS_APP ENV Variable in OFSAAI APP Layer
Options
36 MB "-Xms2048m -Xmx2048m
NOTE Ensure that you are provided with the execute permission.
The log file in the ftpshare folder is empty; the logs are printed only on the console.
…/erwin/erwinXML X mdl_name
Verify the log files located at $FIC_HOME/utility/SliceJsonGenerateUtility/logs
folder.
NOTE In the users.csv File, enter the Work Email ID for each user. The
Work Email ID is required to reset the password for each user in
the IDCS.
15.1.2 Sample
<OBJECTMIGRATION>
<USERID>user_id</USERID><!--User ID-->
<LOCALE>en_US</LOCALE><!--Locale Information-->
<INFODOM>infodom_name</INFODOM><!--Information Domain-->
<FOLDER>$FOLDER$</FOLDER><!-- Folder/Segment -->
<MODE>EXPORT</MODE><!--EXPORT/IMPORT-->
<FILE>TEST_001</FILE>
<IMPORTALL TargetFolder="$FOLDER$"></IMPORTALL><!-- Applicable only for import -->
<FAILONERROR>Y</FAILONERROR>
<OVERWRITE>Y</OVERWRITE>
<RETAIN_IDS>N</RETAIN_IDS>
<MIGRATION_CODE>migration_code</MIGRATION_CODE>
<OBJECTS TargetFolder="$FOLDER$">
<OBJECT Code="DQ0017" Type="120" INCLUDEDEPENDENCY="Y" /><!—object code and
object type of the migrated metadata object-->
</OBJECTS>
</OBJECTMIGRATION>
15.2.2 Sample
<OBJECTMIGRATION>
<USERID>user_ID</USERID><!--User ID-->
<LOCALE>en_US</LOCALE><!--Locale Information-->
<INFODOM>infodom_name</INFODOM><!--Information Domain-->
<FOLDER>$FOLDER$</FOLDER><!-- Folder/Segment -->
<MODE>IMPORT</MODE><!--EXPORT/IMPORT-->
<FILE>TEST_001</FILE>
<IMPORTALL TargetFolder="$FOLDER$">Y</IMPORTALL>
<FAILONERROR>Y</FAILONERROR>
<OVERWRITE>Y</OVERWRITE>
<RETAIN_IDS>N</RETAIN_IDS>
<MIGRATION_CODE>TESTAPIIMP02</MIGRATION_CODE>
<OBJECTS TargetFolder="$FOLDER$">
<OBJECT Code="DQ0017" Type="120" INCLUDEDEPENDENCY="Y" />
</OBJECTS><!—object code and object type of the migrated metadata object--
>
</OBJECTMIGRATION>>
16 References
This section of the document consists of information related to intermediate actions that need to be performed while completing a task. The procedures are common to all the sections and are referenced wherever required. You can refer to the following sections based on your need.
16.1 Calendar
The Calendar icon in the user interface helps you to specify a date in the DD/MM/YYYY format by selecting it from the pop-up calendar. You can select the specific month and year using the drop-down lists. When you click the required date, the details are auto-updated in the date field.
performance implications, while defining task precedence in a batch apart from the logical or
functional reasons that primarily define the relative order in which they may be executed.
For example, consider a batch comprising the tasks shown in the following figure. The arrows show the precedence involved. The way these tasks are selected for execution is as follows:
• Pick up all the tasks that have START as their parent. It essentially means that these tasks (Task1, Task2, and Task6) can be run independently.
• Subsequently, pick all tasks for execution (at that instance of time) which have successful parent tasks.
• A Batch is marked as successful only if all the executable tasks are successful.
16.3.1 Architecture
The ES executes a component named "External Scheduler Interface Component" (ESIC) and passes the suitable parameters. For more information about these parameters, see ESIC Command Line Parameters and Job Types. The ESIC in turn passes these requests to OFSAAI, fetches the Exit status, and interprets it as per the Exit Status Specifications.
For more details on the ESIC exit status, see the Exit Status Specifications section. For other miscellaneous information on ESIC, see the Additional Information on ESIC section.
During the definition of a batch using the Batch Definition window of the Operations module, the Batch is named EXTBATCH and the Information Domain in which this Batch is defined is named INFODOM. Hence, INFODOM_EXTBATCH becomes the Batch ID.
Consider a scenario where the following tasks are run in this Batch:
• The first task, 'Task1', loads data in the warehouse table FCT_CUSTOMER.
• The second task, 'Task2', loads data in the warehouse table DIM_GEOGRAPHY.
• The third task, 'Task3', is a Data Transformation that uses both the tables mentioned above. Hence, it can run only if both the above tasks, Task1 and Task2, are complete.
• If either Task1 or Task2 fails, a new task, namely Task4, can be executed with the Data Transformation which uses the data of the previous load.
• The final task, namely Task5, is a Cube building task. This takes several hours as it builds a Cube with many dimensions and hierarchies and holds a large number of combinations.
The parameters for the Tasks are chosen from the drop-down choices provided. OFSAAI provides the
choices through its Data Model Management.
Since Task3 or Task5 is executed based on the conditional success/failure of the previous tasks, the conditionality needs to be simulated in the ES. If the External Scheduler wants to control the order/conditionality for tasks, it needs to be defined in such a way that the tasks have the same precedence. Here, it would be ideal to define it as follows. The arrows in the following figure show the precedence involved.
The export of such a Batch from OFSAAI would look like the following. For more information, see
OFSAAI Standard XML.
<BATCH BATCHID="INFODOM_EXTBATCH" NOOFTASKS="5" SYSTEMLOCALE="+5:30 GMT"
INFODOMAIN="INFODOM" REVUSER="OPERADMIN" DEFTYPE="DEF">
<RUNINFO REVUID="" EXTUID="" BATCHSTATUS="" INFODATE="" LAG=""/>
<TASK TASKID="Task1" COMPONENTID="LOAD DATA" TASKSTATUS="N" FILTER="N">
<PRECEDENCE>
<ONSUCCESSOF>
<TASKID/>
</ONSUCCESSOF>
<ONFAILUREOF>
<TASKID/>
</ONFAILUREOF>
</PRECEDENCE>
</TASK>
<TASK TASKID="Task2" COMPONENTID="CUBE CREATE" TASKSTATUS="N" FILTER="N">
<PRECEDENCE>
<ONSUCCESSOF>
<TASKID/>
</ONSUCCESSOF>
<ONFAILUREOF>
<TASKID/>
</ONFAILUREOF>
</PRECEDENCE>
</TASK>
<TASK TASKID="Task3" COMPONENTID="RUN EXECUTABLE" TASKSTATUS="N" FILTER="N">
<PRECEDENCE>
<ONSUCCESSOF>
<TASKID/>
</ONSUCCESSOF>
<ONFAILUREOF>
<TASKID/>
</ONFAILUREOF>
</PRECEDENCE>
</TASK>
<TASK TASKID="Task4" COMPONENTID="EXTRACT DATA" TASKSTATUS="N" FILTER="N">
<PRECEDENCE>
<ONSUCCESSOF>
<TASKID/>
</ONSUCCESSOF>
<ONFAILUREOF>
<TASKID/>
</ONFAILUREOF>
</PRECEDENCE>
</TASK>
<TASK TASKID="Task5" COMPONENTID=" TRANSFORM DATA" TASKSTATUS="N"
FILTER="N">
<PRECEDENCE>
<ONSUCCESSOF>
<TASKID/>
</ONSUCCESSOF>
<ONFAILUREOF>
<TASKID/>
</ONFAILUREOF>
</PRECEDENCE>
</TASK>
</BATCH>
Valid Values for Task Status are:
N - Not Started
O - On Going
F - Failure
S - Success
Valid Values for Batch Status are:
N - Not Started
O - On Going
R - For Restart
C - Complete
H - Hold
K - Exclude/Skip
Valid Values for Filter are:
N - No Filter
When the definition of a Batch is exported and imported into the ES, the Task Status, the Batch Status, and the Filter are irrelevant; they would apply only if you export a specific run of a Batch, which is not currently supported by OFSAAI. They are included as a part of the XML for completeness.
After importing it in the ES, the Administrators can decide the order in which the tasks must be
executed and alter the order of execution without violating the precedence set in OFSAAI. For
example, the Administrator might configure it as in the following figure.
The invocation of ESIC by the ES and the command line parameters passed for each task for the
above configuration is as follows. For more information about command line parameters see ESIC
Command Line Parameters and Job Types.
The ES needs to provide the 'Ext Unique ID'. In this case, it is MAESTRO_INFODOM_EXTBATCH_20031001_1.
To Initialize the Batch Run:
esic -JI -Urevuser -Ppassword -RMAESTRO_INFODOM_EXTBATCH_20031001_1 -IINFODOM -BEXTBATCH -D20031001 -F/tmp/INFODOM
Task 1:
esic -JXT -Urevuser -Ppassword -RMAESTRO_INFODOM_EXTBATCH_20031001_1 -IINFODOM -WC -TTask1
Task 2:
esic -JXT -Urevuser -Ppassword -RMAESTRO_INFODOM_EXTBATCH_20031001_1 -IINFODOM -WC -TTask2
Task 3:
esic -JXT -Urevuser -Ppassword -RMAESTRO_INFODOM_EXTBATCH_20031001_1 -IINFODOM -WC -TTask3
Task 4:
esic -JXT -Urevuser -Ppassword -RMAESTRO_INFODOM_EXTBATCH_20031001_1 -IINFODOM -WC -TTask4
Task 5:
esic -JXT -Urevuser -Ppassword -RMAESTRO_INFODOM_EXTBATCH_20031001_1 -IINFODOM -WC -TTask5
De-initialize:
esic -JD -Urevuser -Ppassword -RMAESTRO_INFODOM_EXTBATCH_20031001_1 -IINFODOM -BINFODOM_EXTBATCH -D20031001
Keep the following scenarios in mind while executing an ES Batch:
• Every Task executed in the ES must have an equivalent task defined in a Batch within the Operations module, except for specific tasks such as Initialization, De-initialization, and Status Query / Alter Tasks.
• If the ES requests to alter the status of a task that has already been requested for execution, an error value specific to such a case is returned. The same holds good for a Batch Run as well.
• Task execution must follow the precedence as defined in OFSAAI. Else, the task execution results in failure.
• Re-executing a task of a Batch run that was already successfully executed results in failure.
• Execution of a Batch whose definition does not exist or has been deleted results in failure. An error value specific to such a case is returned.
• Execution of a task before the initialization of the Batch results in failure.
• Simultaneous execution of the same task of a Batch Run results in failure. The same holds good for a Batch Run as well.
Component Description
Infodom The Information Domain for which the batch is being run.
Run This indicates the number of times the Batch has been executed.
This value is incremented if the Batch is re run for the same MISDATE.
16.3.7 Advantages of ES
The following are the advantages of the ES component:
• ES is capable of importing a Batch definition, which was previously exported in OFSAAI
Standard XML format. This eliminates the necessity to manually re-define the batch as per the
OFSAAI format.
• ES is capable of passing a unique id for a Batch Run to Operations module through an
initialization mechanism. For more information, see Batch Execution Mechanism.
• Every Batch run can be uniquely identified in both ES and Operations module, when tasks are
executed under the scope of a particular Batch Run.
• ES is capable of executing and passing the desired parameters to a Batch. Further, it can fetch the Exit status and interpret it as per the Exit Status Specifications.
<TASKID/>
</ONFAILUREOF>
</PRECEDENCE>
</TASK>
<TASK TASKID="Task2" COMPONENTID="RUN EXECUTABLE" TASKSTATUS="O" FILTER="H">
<PRECEDENCE>
<ONSUCCESSOF>
<TASKID></TASKID>
</ONSUCCESSOF>
<ONFAILUREOF>
<TASKID></TASKID>
</ONFAILUREOF>
</PRECEDENCE>
</TASK>
<TASK TASKID="Task3" COMPONENTID="EXTRACT DATA" TASKSTATUS="O" FILTER="N">
<PRECEDENCE>
<ONSUCCESSOF>
<TASKID>TASK1</TASKID>
</ONSUCCESSOF>
<ONFAILUREOF>
<TASKID>Task2</TASKID>
</ONFAILUREOF>
</PRECEDENCE>
</TASK>
</BATCH>
The valid values for FILTER are:
H Hold
R Released
E Excluded/Skipped
I Included
0 Success
-1 Failure
5-8 Reserved
16.3.10.1 Prerequisites
• JAVA_HOME (Required) must point to the JAVA bin installation directory.
• Ensure the NAWK command is available under PATH.
Contact the system administrator if the NAWK command does not exist.
Example: # yum install nawk
• ES_HOME (Required) must point to the ES Home folder.
• Copy the ES folder and ensure that the following jars are present in the ES/lib folder:
FICServer.jar
AESCryptor.jar
aai-client.jar
• Update ES/conf/<Infodom>.ini file and specify the proper values.
MISDATE=Information Date in format mm-dd-yyyy (For example: MISDATE=01-31-2010)
USERNAME=OFSAAI Login user (For example: USERNAME=BASELUSER)
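Using the sample values above, a minimal ES/conf/<Infodom>.ini would therefore contain lines such as the following; your installation may require additional entries:
MISDATE=01-31-2010
USERNAME=BASELUSER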
User ID Enter the User ID used for initializing the Batch execution.
Password Enter the password for initializing the Batch execution. This password is validated against the V_PASSWORD column in the CSSMS_USR_PROFILE table. An encrypted password is expected; if the password is given as clear text, a warning message is displayed, but validation proceeds.
Ext Unique ID Enter a unique ID for a Batch execution. It is the responsibility of the External Scheduler or calling program to supply this unique ID to ESIC. Its mapping to the OFSAAI Batch execution ID is stored in the EXT_BATCH_RUN_ID_MAPPING table.
Info Dom Enter the Information Domain against which the Batch is executed.
Temp Directory Name This can be any value chosen by the user.
16.3.11.5 S – Finalize the Batch execution – primarily mark the Batch run as
complete
-JSB -U<User ID> -P<Password> -R<Ext Unique ID> -I<Info Dom> -V<Batch Status>
Valid Values for Batch Status are:
C - Complete
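For example, to mark the Batch Run initialized earlier in this chapter as complete, the call could look as follows; the user, password, Run ID, and Information Domain are the same illustrative values used in the earlier examples:
esic -JSB -Urevuser -Ppassword -RMAESTRO_INFODOM_EXTBATCH_20031001_1 -IINFODOM -VC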
<File Name> specifies the complete name of the file to be created; any existing file with the same name is overwritten.
<Message Format String> specifies the information that needs to be logged.
The format string can contain parameters that are replaced with actual values from the logs.
Valid values for the message parameters are msgid, brid, taskid, component, tstatus, severity, tstamp, and sysmsg.
Each parameter, when passed in a message format string, should be enclosed within {}.
Example:
A typical message format string would look like:
{msgid}\t{brid}\t{taskid}\t{component}\t{tstatus}\t{severity}\t{tstamp}\t{sysmsg}
If no message format string is supplied, then the log generated will be in the above format, with each
value separated by a tab.
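For instance, to log only the timestamp, task ID, and task status for each entry, a shorter format string built from the valid parameters listed above could be supplied:
{tstamp}\t{taskid}\t{tstatus}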
• If the wait mode is selected as C, the command waits for the completion of the task/batch
execution and returns the values.
NOTE <External Unique ID> and <Task ID> can be used wherever
applicable.
17 Preferences
The Preferences section enables you to set your OFSAA Home Page and the Date Format in which all Date fields are displayed, throughout the application wherever OJET screens are used. This is the configuration to set the Date Format at the user level.
To set the user preferences:
1. Click the logged in user name and select Preferences from the drop-down menu. The
Preferences window is displayed.
2. Select the application which you want to display as your Home Page from the Set My Home
Page drop-down list.
NOTE Whenever you install a new application, the related value for
that application is found in the drop-down list.
3. Select the required Date Format in which the Date fields in all OJET screens in your application should be displayed. The options are dd/MM/yyyy and MM/dd/yyyy.
4. Click Save to save your preference.
18 Appendix A
Business Administrator User mapped to this group will have access to all the menu items and actions
for advanced operations of metadata objects.
Business Authorizer User mapped to this group will have access to all the menu items and actions
for authorization of changes to metadata objects.
Business Owner User mapped to this group will have access to all the menu items and actions
for read and write of metadata objects.
Business User User mapped to this group will have access to all the menu items and actions
for access and read of metadata objects.
Guest User mapped to this group will have access to certain menu items with only
access privileges.
Identity Administrator User mapped to this group will have access to all the menu items for
managing User entitlements, User Group Entitlements and Access
Management configurations.
Identity Authorizer User mapped to this group will have access to all the menu items for
authorizing User entitlements, User Group Entitlements and Access
Management configurations.
Object Administrator User mapped to this group will have access to all menu items for managing
object migration and metadata traceability using metadata browser.
System Administrator User mapped to this group will have access to all menu items for managing
the setup configurations.
WorkFlow Delegation Admin User mapped to this group will have access to workflow delegation.
DATASECURITYADMIN Data Security Admin Data security admin role for executing redaction policies
DEFQMAN DEFQ Manager Data Entry Forms and Query Manager Role
DMACCESS Data Mapping UI User Group mapped will have access to Link and
Access Summary
DMAUTH Data Mapping User Group mapped will have access to authorize the
Authorize Data Mapping
DMREAD Data Mapping Read User Group mapped will have access to View Definition.
Only
DMWRITE Data Mapping Write User Group mapped will have access to add, edit, copy
and delete Data Mapping.
PLCACCESS PLC Access User Group mapped will have access to Link and
Summary
PLCAUTH PLC Authorize User Group mapped will have access to authorize the
PLC
PLCREAD PLC Read Only User Group mapped will have access to View Definition.
PLCWRITE PLC Write User Group mapped will have access to add, edit, copy
and delete PLC.
SCDACCESS SCD Access User Group mapped will have access to SCD Link and
Summary
SCDAUTH SCD Authorize User Group mapped will have access to authorize the
SCD
SCDREAD SCD Read Only User Group mapped will have access to View SCD
SCDWRITE SCD Write User Group mapped will have access to add, edit, copy
and delete SCD.
SRCACCESS Data Source Access User Group mapped will have access to Link and
Summary
SRCAUTH Data Source Authorize User Group mapped will have access to authorize the
Data Source
SRCREAD Data Source Read User Group mapped will have access to View Definition.
Only
SRCWRITE Data Source Write User Group mapped will have access to add, edit, copy
and delete Data Source.
UDFACCESS UDF Access User Group mapped will have access to UDF Link and
Summary
UDFAUTH UDF Authorize User Group mapped will have access to authorize the
UDF
UDFREAD UDF Read Only User Group mapped will have access to View UDF.
UDFWRITE UDF Write User Group mapped will have access to add, edit, copy
and delete UDF.
XLCNFADVNC Config excel advanced Configuration schema excel upload and download
access
ADAPTERS Run Adapters The user mapped to this function will have rights to run
reveleus adapters
ADDMRE Add Manage Run The user mapped to this function can add the request
for run execution
ADDPROCESS Add Process tree The user mapped to this function can add the process
tree
ADDRULE Add Rule The user mapped to this function can add the rules
ADDRUN Add Run The user mapped to this function can add the run
ADD_F_KBD Add Flexible KBD The user mapped to this function can add Flexible KBD
ADD_RESTR Add Restructure The user mapped to this function can add Restructure
ADD_WF Add Workflow and The user mapped to this function can Create New
Process Definitions Workflow and Process definitions
ADMINSCR Administration Screen The user mapped to this function can access the
Administration Screen
ADVDRLTHR Access to Advanced drill The User mapped to this function will have access to
thru Advanced Drill thru
ALDADD Add Cube The user mapped to this function can add cubes
ALDATH Authorize Cube The user mapped to this function can authorize cubes
ALDDEL Delete Cube The user mapped to this function will have rights to
delete cubes
ALDMOD Modify Cube The user mapped to this function can modify cubes
ALDVIW View Cube The user mapped to this function can view cubes
ALSADD Add Alias The user mapped to this function can add Alias
ALSATH Authorize Alias The user mapped to this function can authorize Alias
ALSDEL Delete Alias The user mapped to this function will have rights to
delete Alias
ALSMOD Modify Alias The user mapped to this function can modify Alias
ALSVIW View Alias The user mapped to this function can view Alias
APPSRVR Application Server The user mapped to this function can access the
Screen Application Server Screen
ARCPROCES Archive Process The user mapped to this function can archive the
process tree
ARCRULE Archive Rule The user mapped to this function can archive the Rule
ARCRUN Archive Run The user mapped to this function can archive the Run
ATHPROCESS Authorize Process Tree The user mapped to this function can authorize Process
Tree
ATHRULE Authorize Rule The user mapped to this function can authorize the rule
ATHRUN Authorize Run The user mapped to this function can authorize run
ATH_F_KBD Authorize Flexible KBD The user mapped to this function can authorize Flexible
KBD
AUDTR Audit Trail Report This function displays Report for audit summary
AUD_TRL Audit Trail Report The user mapped to this function can access the Audit
Screen Trail Report Screen
AUTH_MAP Authorize Map(s) The user mapped to this function can AUTHORIZE Map
definitions
AUTH_SCR Metadata Authorize The user mapped to this function can see Authorization
Screen Screen
AUTH_WF Authorize Access to The user mapped to this function can Authorize the
Workflow and Process Workflow and Process Definition
BATCHMAINT Batch Maintenance Link The user mapped to this function can access Batch
Maintenance Link
BATCHEXEC Batch Execution Link The user mapped to this function can access Batch
Execution Link
BATCHMON Batch Monitor Link The user mapped to this function can access Batch
Monitor Link
BATCHSCHLD Batch Scheduler Link The user mapped to this function can access Batch
Scheduler Link
BATCHVLOG View Log Link The user mapped to this function can access View Log
Link
BATCHCNCL Batch Cancel Link The user mapped to this function can access Batch
Cancel Link
BATCHREP Batch Processing The user mapped to this function can access Batch
Report Link Processing Report Link
BATPRO Batch Processing The user mapped to this function will have rights to
process batch
BPROCADD Add Business Processor The user mapped to this function can add business
processors
BPROCATH Authorize Business The user mapped to this function can authorize business
Processor processors
BPROCDEL Delete Business The user mapped to this function can delete business
Processor processors
BPROCMOD Modify Business The user mapped to this function can modify business
Processor processors
BPROCVIW View Business The user mapped to this function can view business
Processor processors
CATIGNACC Ignore Catalog Access This function gives access to ignore a Catalog access.
CATIGNLCK Ignore Catalog Lock This function gives access to ignore a Catalog lock.
CATLAT Latest Catalog This function gives access to make a Catalog latest.
CATLINK Catalog Link This Function gives user access to the LHS link.
CATSUM Catalog Summary This function gives Summary Page access to the
mapped user.
CFEDEF Cash Flow Equation The user mapped to this function can view/add the
Definition Cash Flow Equation definitions
CFG Configuration The user mapped to this function will have access to
configuration details
CMPPROCESS Compare Process The user mapped to this function can compare the
process tree
CMPRULE Compare Rule The user mapped to this function can compare the rules
CMPRUN Compare Run The user mapped to this function can compare the run
CONFXLADMN Config ExcelUpload The user mapped to this function can upload data to
Config schema tables
CPYPROCESS Copy Process Tree The user mapped to this function can copy Process Tree
CPYRULE Copy Rule The user mapped to this function can copy Rule
CPYRUN Copy Run The user mapped to this function can copy Run
CRTMAPADV Create Map Advanced The user mapped to this function will have rights to the
advanced options of map maintenance
CRT_MAP Create Map The user mapped to this function can CREATE/SAVEAS
Map definitions
CWSDOCMGMT Document The user mapped to this function can use Document
Management Access Management APIS via Callable Services Framework
CWSEXTWSAS Call Remote Web The user mapped to this function can call web services
Services configured in the Callable Services Framework
CWSHIERRFR Refresh Hierarchies The user mapped to this function can refresh hierarchies
through the Callable Services Framework
CWSPR2ACCS Execute Runs - Rules The user mapped to this function can execute runs and
rules through the Callable Services Framework
CWSSMSACCS Remote SMS Access The user mapped to this function can access SMS apis
through the Callable Services Framework
CWSUMMACCS Remote UMM Access The user mapped to this function can access UMM apis
through the Callable Services Framework
CWS_STATUS Result of request - The user mapped to this function can access requests
Status of all status through the Callable Services Framework
CWS_TRAN Result of own request The user mapped to the function can access own
only requests status using Callable Services Framework
DATADD Add Dataset The user mapped to this function can add datasets
DATASECADV Data Security Advanced Function to execute the redaction policy batch
DATATH Authorize Dataset The user mapped to this function can authorize datasets
DATDEL Delete Dataset The user mapped to this function will have rights to
delete datasets.
DATMOD Modify Dataset The user mapped to this function can modify datasets.
DATVIW View Dataset The user mapped to this function can view datasets.
DBD Database Details The user mapped to this function will have access to
database details.
DBS Database Server The user mapped to this function will have access to
Database Server details.
DCLSADD Add Data Cluster This function gives access to add a Data Cluster
DCLSCOPY Copy Data Cluster This function gives access to copy a Data Cluster
DCLSEDIT Edit Data Cluster This function gives access to edit a Data Cluster
DCLSPURGE Purge Data Cluster This function gives access to purge a Data Cluster
DCLSVIEW View Data Cluster This function gives access to view a Data Cluster
DEEADD Add Derived Entities The user mapped to this function can add derived
entities.
DEEATH Authorize Derived The user mapped to this function can authorize derived
Entities entities.
DEEDEL Delete Derived Entities The user mapped to this function can delete derived
entities.
DEEMOD Modify Derived Entities The user mapped to this function can modify derived
entities.
DEEVIW View Derived Entities The user mapped to this function can view derived
entities
DEFADM Defi Administrator The user mapped to this function will have Defi
Administration rights
DEFAUTH Forms Authorization The user mapped to this function will have rights to
authorize the DEFQ forms
DEFQADM Defq Administrator The user mapped to this function will have Defi
Administration rights
DEFQUSR Defq User The user mapped to this function will have Defi user
rights
DEFUSR Defi User The user mapped to this function will have Defi user
rights
DELPROCESS Delete Process The user mapped to this function can delete the process tree
DELRULE Delete Rule The user mapped to this function can delete the rules
DELRUN Delete Run The user mapped to this function can delete the run
DEL_MAP Delete Map The user mapped to this function can DELETE Map
definitions
DEL_WF Delete Workflow and The user mapped to this function can Delete Workflow
Process Definitions and Process definitions.
DIMADD Add Dimension The user mapped to this function can add dimensions.
DIMATH Authorize Dimension The user mapped to this function can authorize
dimensions.
DIMDEL Delete Dimension The user mapped to this function will have rights to
delete dimensions.
DIMMOD Modify Dimension The user mapped to this function can modify
dimensions
DIMVIW View Dimension The user mapped to this function can view dimensions
DMADD Add Data Mapping This function gives access to add a Data Mapping
DMAUTH Authorize Data This function gives access to authorize a Data Mapping
Mapping
DMCONFEDIT Data Management This Function gives user access to add/edit a DMT
Configuration Edit Configuration Property.
DMCONFSUMM Data Management This Function gives user access to the DMT
Configuration Configuration Summary.
DMCOPY Copy Data Mapping This function gives access to copy a Data Mapping
DMDEL Delete Data Mapping This function gives access to delete a Data Mapping
DMEDIT Edit Data Mapping This function gives access to edit a Data Mapping
DMLAT Make Latest Data This function gives access to make latest a Data
Mapping Mapping
DMMFILEUPL Model Xml Upload The user mapped to this function can upload erwin
Model File for Model Upload
DMPURGE Purge Data Mapping This function gives access to purge a Data Mapping
DMSUMM Data Mapping This Function gives user access to the Data Mapping
Summary Summary and LHS Link.
DMTDFM Data File Mapping The user mapped to this function can access the Data
Screen File Mapping Screen
DMTDM Data Mapping Screen The user mapped to this function can access the Data
Mapping Screen
DMTSRC Data Sources Screen The user mapped to this function can access the Data
Sources Screen
DMTUDF UDF Screen The user mapped to this function can access the UDF
Screen
DMVIEW View Data Mapping This function gives access to view a Data Mapping
DMVIEWSQL View SQL Data Mapping This function gives access to view/validate a Data
Mapping/File Mapping SQL
DPPDEL Delete DMT This function gives access to delete a DMT Performance
Performance Params Parameters
DPPEDIT Edit DMT Performance This function gives access to edit a DMT Performance
Params Parameters
DQLADD Data Quality Add This function is for Data Quality Map applet
DQ_ADD Data Quality Add Rule The user mapped to this function can add DQ Rule
DQ_AUTH Data Quality The user mapped to this function can authorize DQ Rule
Authorisation Rule
DQ_CPY Data Quality Copy Rule The user mapped to this function can copy DQ Rule
DQ_DEL Data Quality Delete Rule The user mapped to this function can delete DQ Rule
DQ_EDT Data Quality Edit Rule The user mapped to this function can edit DQ Rule
DQ_GP_ADD Data Quality Add Rule The user mapped to this function can add DQ Rule
Group Group
DQ_GP_CPY Data Quality Copy Rule The user mapped to this function can copy DQ Rule
Group Group
DQ_GP_DEL Data Quality Delete Rule The user mapped to this function can delete DQ Rule
Group Group
DQ_GP_EDT Data Quality Edit Rule The user mapped to this function can edit DQ Rule
Group Group
DQ_GP_EXEC Data Quality Execute The user mapped to this function can execute DQ Rule
Rule Group Group
DQ_GP_VIW Data Quality View Rule The user mapped to this function can view DQ Rule
Group Group
DQ_LNK_ACC Data Quality Link The user mapped to this function can access the DQ
Access Links
DQ_QRY_VIW Data Quality View The user mapped to this function can generate the rule
Query query and view the generated query.
DQ_SUMM Data Quality Summary The user mapped to this function can access the DQ
Access Summary Pages.
DQ_VIW Data Quality View Rule The user mapped to this function can view DQ Rule.
EDIT_WF Edit Workflow and The user mapped to this function can Edit Workflow and
Process Definitions Process definitions.
ENABLEUSR Enable User Screen The user mapped to this function can access the Enable
User Screen.
EXEC_RESTR Execute Restructure The user mapped to this function can execute
Restructure Process
EXEPROCESS Execute Process The user mapped to this function can execute process
tree
EXERULE Execute Rule The user mapped to this function can execute rules
EXERUN Execute Run The user mapped to this function can execute run
EXPMD Export Metadata The user mapped to this function can Export Metadata
EXTPROCESS Export Process The user mapped to this function can export process
tree
EXTRULE Export Rule The user mapped to this function can export Rule
EXTRUN Export Run The user mapped to this function can export Run
FILTERRULE Filters in Rule The user mapped to this function can apply filters to the
rules
FRMMGR Forms Manager The user mapped to this function can use Forms
Manager
FUNCMAINT Function Maintenance The user mapped to this function can access the
Screen Function Maintenance Screen
FUNCROLE Function Role Map The user mapped to this function can access the
Screen Function Role Map Screen
FU_ATR_ADD Fusion Add Attributes The user mapped to this function can Create New
Attributes
FU_ATR_CPY Fusion Copy Attributes The user mapped to this function can Copy Attributes
FU_ATR_DD Fusion Attributes - View The user mapped to this function can View Dependent
Dependent Data Data for Attributes
FU_ATR_DEL Fusion Delete Attributes The user mapped to this function can Delete Attributes
FU_ATR_EDT Fusion Edit Attributes The user mapped to this function can Edit Attributes
FU_ATR_HP Fusion Attribute Home The user mapped to this function can view Attribute
Page Home Page
FU_ATR_VIW Fusion View Attributes The user mapped to this function can View Attributes
FU_EXP_ADD Fusion Add Expressions The user mapped to this function can Create New
Expressions
FU_EXP_CPY Fusion Copy The user mapped to this function can Copy Expressions
Expressions
FU_EXP_DD Fusion View The user mapped to this function can View Dependent
Dependency Data for Expressions
Expressions
FU_EXP_DEL Fusion Delete The user mapped to this function can Delete
Expressions Expressions
FU_EXP_EDT Fusion Edit Expressions The user mapped to this function can Edit Expressions
FU_EXP_HP Fusion Expns Home The user mapped to this function can view Expressions
Page Home Page
FU_EXP_IGN Fusion Expression The user mapped to this function can ignore the access
Ignore Access type for Expression
FU_EXP_LNK Fusion Expressions Link The user mapped to this function can view Expression
Summary Page in LHS Menu
FU_EXP_VIW Fusion View The user mapped to this function can View Expressions
Expressions
FU_FIL_ADD Fusion Add Filters The user mapped to this function can Create New Filters
FU_FIL_CPY Fusion Copy Filters The user mapped to this function can Copy Filters
FU_FIL_DD Fusion Filters - View The user mapped to this function can View Dependent
Dependent Data Data for Filters
FU_FIL_DEL Fusion Delete Filters The user mapped to this function can Delete Filters
FU_FIL_EDT Fusion Edit Filters The user mapped to this function can Edit Filters
FU_FIL_HP Fusion Filters Home The user mapped to this function can view Filters Home
Page Page
FU_FIL_IGN Fusion Filters Ignore The user mapped to this function can ignore the access
Access type for Filters
FU_FIL_LNK Fusion Filters Link The user mapped to this function can access Fusion
Filters Summary Link
FU_FIL_SQL Fusion Filters - View The user mapped to this function can view SQL for
SQL Filters
FU_FIL_VIW Fusion View Filters The user mapped to this function can View Filters
FU_GP_VIW Global Preferences View The user mapped to this function can view Global
Preferences
FU_HBR_ADD Fusion Hier Browser The user mapped to this function can add member in
Add AMHM Hierarchy Browser
FU_HBR_DEL Fusion Hier Browser The user mapped to this function can delete member in
Delete AMHM Hierarchy Browser
FU_HBR_EDT Fusion Hier Browser The user mapped to this function can edit in AMHM
Edit Hierarchy Browser
FU_HBR_SMY Fusion Hier Browser The user mapped to this function can use shared folder
Summary in AMHM Hierarchy Browser
FU_HIE_ADD Fusion Add Hierarchies The user mapped to this function can Create New
Hierarchies
FU_HIE_CPY Fusion Copy Hierarchies The user mapped to this function can Copy Hierarchies
FU_HIE_DD Fusion Hierarchies - The user mapped to this function can View Dependent
View Dependent Data Data for Hierarchies
FU_HIE_DEL Fusion Delete The user mapped to this function can Delete Hierarchies
Hierarchies
FU_HIE_EDT Fusion Edit Hierarchies The user mapped to this function can Edit Hierarchies
FU_HIE_HP Fusion Hierarchy Home The user mapped to this function can view Hierarchy
Page Home Page
FU_HIE_IGN Fusion Hierarchy Ignore The user mapped to this function can ignore the access
Access type for Hierarchies
FU_HIE_LNK Fusion Hierarchy Link The user mapped to this function can view Hierarchy
Summary Page Link in LHS Menu
FU_HIE_UMM Fusion Hierarchies to The user mapped to this function can Map Fusion
UMM Mapping Hierarchies to UMM Hierarchies
FU_HIE_VIW Fusion View Hierarchies The user mapped to this function can View Hierarchies
FU_MEM_ADD Fusion Add Members The user mapped to this function can Create New
Members
FU_MEM_CPY Fusion Copy Members The user mapped to this function can Copy Members
FU_MEM_DD Fusion Members - View The user mapped to this function can View Dependent
Dependent Data Data for Members
FU_MEM_DEL Fusion Delete Members The user mapped to this function can Delete Members
FU_MEM_EDT Fusion Edit Members The user mapped to this function can Edit Members
FU_MEM_HP Fusion Member Home The user mapped to this function can view Member
Page Home Page
FU_MEM_VIW Fusion View Members The user mapped to this function can View Members
FU_MIG_ADD Object Migration Create The user mapped to this function can Create Migration
Migration Ruleset Ruleset
FU_MIG_CFG Object Migration Source The user mapped to this function can manipulate Source
Configuration Configuration
FU_MIG_CPY Object Migration Copy The user mapped to this function can Copy Migration
Migration Ruleset Ruleset
FU_MIG_CRN Cancel Migration The user mapped to this function can Cancel migration
Execution execution
FU_MIG_DEL Object Migration Delete The user mapped to this function can Delete Migration
Migration Ruleset Ruleset
FU_MIG_EDT Object Migration Edit The user mapped to this function can Edit Migration
Migration Ruleset Ruleset
FU_MIG_HP Object Migration Home The user mapped to this function can access the Object
Page Migration link
FU_MIG_RUN Execute/Run Migration The user mapped to this function can Run the migration
Process process
FU_MIG_SUM Object Migration The user mapped to this function can view ruleset
Summary Page summary
FU_MIG_VCF Object Migration The user mapped to this function can view Source
ViewSource Configuration
Configuration
FU_MIG_VIW Object Migration View The user mapped to this function can View Migration
Migration Ruleset Ruleset
FU_SQL_ADD SQL Rule Add This function is for SQL Rule Add
FU_SQL_CPY SQL Rule Copy This function is for SQL Rule Copy
FU_SQL_DEL SQL Rule Delete This function is for SQL Rule Delete
FU_SQL_EDT SQL Rule Edit This function is for SQL Rule Edit
FU_SQL_RUN SQL Rule Run This function is for SQL Rule Run
FU_SQL_VIW SQL Rule View This function is for SQL Rule View
F_KBD_LINK Flexible KBD Link The user mapped to this function can see the Flexible
KBD Link
F_KBD_SUM Flexible KBD Summary The user mapped to this function can view summary of
Flexible KBD
GMVDEF GMV Definition The user mapped to this function can view/add the
General Market Variable definitions
HCYADD Add Hierarchy The user mapped to this function can add hierarchies
HCYATH Authorize Hierarchy The user mapped to this function can authorize
hierarchies
HCYDEL Delete Hierarchy The user mapped to this function will have rights to
delete hierarchies
HCYMOD Modify Hierarchy The user mapped to this function can modify hierarchies
HCYVIW View Hierarchy The user mapped to this function can view hierarchies
HOLMAINT Holiday Maintenance The user mapped to this function can access the Holiday
Screen Maintenance Screen
IBMADD Import Business Model The user mapped to this function can import business
models
IMPMD Import Metadata The user mapped to this function can Import Metadata
INBOXLINK Link Access to Inbox The user mapped to this function can open Inbox
IND Information Domain The user mapped to this function will have access to
information domain details
LCKPROCESS Lock Process The user mapped to this function can lock process tree
LCKRULE Lock Rule The user mapped to this function can lock rules
LCKRUN Lock Run The user mapped to this function can lock run
LCK_F_KBD Lock Flexible KBD The user mapped to this function can lock Flexible KBD
LCK_RESTR Lock Restructure The user mapped to this function can lock Restructure
LINK_WF Link Access to Workflow The user mapped to this function can See the Workflow
and Process Definitions and Process Orchestration Link
LOCDESC Locale Desc Upload The user mapped to this function can access the Locale
Screen Desc Upload Screen
MAN_WF_M Manage Workflow and The user mapped to this function can Manage Workflow
Process Monitor and Process Monitor
MDDIFF Metadata Difference The user mapped to this function can access the
Screen Metadata Difference Screen
MDLAUTH Model Authorize The user mapped to this function can Authorize Model
Maintenance
MDLCALIB Model Calibration The user mapped to this function can view/add the
Model Calibration screen
MDLCHAMP Model Make Champion The user mapped to this function can view the
Champion Challenger screen
MDLDEF Model Definition The user mapped to this function can view/add the
Model definitions
MDLDEPLOY Model Deployment The user mapped to this function can access the Model
Deployment screen
MDLEXEC Model Execution The user mapped to this function can access the Model
Execution screen
MDLOUTPUT Model Outputs The user mapped to this function can view the Model
Outputs
MDMP Metadata Segment Map The user mapped to this function will have rights to
perform metadata segment mapping
METMAP Map Metadata The user mapped to this function can Map Metadata to
Application
METPUB Metadata Publish The user mapped to this function can publish metadata
METVIW View Metadata The user mapped to this function can access metadata
browser
MLPROCESS Make Latest Process The user mapped to this function can make latest
Process
MLRULE Make Latest Rule The user mapped to this function can make latest rule
MLRUN Make Latest Run The user mapped to this function can make latest run
MODMRE Modify Manage Run The user mapped to this function can modify the
request for run execution
MODPROCESS Modify Process Tree The user mapped to this function can modify Process
Tree
MODRULE Modify Rule The user mapped to this function can modify the rules
MODRUN Modify Run The user mapped to this function can modify run
MOD_F_KBD Edit Flexible KBD The user mapped to this function can edit Flexible KBD
MOD_MAP Modify Map The user mapped to this function can SAVE Map
definitions
MOD_RESTR Edit Restructure The user mapped to this function can edit Restructure
MRELINK Manage Run Link The user mapped to this function can view the manage
run link
MRESUM Manage Run Summary The user mapped to this function can view the manage
run summary
MSRADD Add Measure The user mapped to this function can add measures
MSRATH Authorize Measure The user mapped to this function can authorize
measures
MSRDEL Delete Measure The user mapped to this function will have rights to
delete measures
MSRMOD Modify Measure The user mapped to this function can modify measures
MSRVIW View Measure The user mapped to this function can view measures
OBJMGR_EXP Export Objects The user mapped to this function can Export Objects
OBJMGR_IMP Import Objects The user mapped to this function can Import Objects
OFSAAAI FS Enterprise Modeling The user mapped to this function can access Financial
Access Code Services Enterprise Modeling Application
OFSIPE FS Inline Processing The user mapped to this function can access Financial
Engine Access Code Services Inline Processing Engine Application
OJFFLINK Access to OJET Forms The user mapped to this function can access OJET
Framework Forms Framework
OJFF_MASK Access to OJET Forms The user mapped to this function can access OJET
Framework Masking Forms Framework Masking Screen
OLAPDETS OLAP Details Screen The user mapped to this function can access the OLAP
Details Screen
OM_EX_ADD Add Export Definitions The user mapped to this function can add export
definitions
OM_EX_COPY Copy Export Definitions The user mapped to this function can copy export
definitions
OM_EX_DLTE Delete Export The user mapped to this function can delete export
Definitions definitions
OM_EX_EDIT Edit Export Definitions The user mapped to this function can edit export
definitions
OM_EX_TRGR Trigger Export The user mapped to this function can trigger export
Definitions definitions
OM_EX_VIEW View Export Definitions The user mapped to this function can view export
definitions
OM_IM_ADD Add Import Definitions The user mapped to this function can add import
definitions
OM_IM_COPY Copy Import Definitions The user mapped to this function can copy import
definitions
OM_IM_DLTE Delete Import The user mapped to this function can delete import
Definitions definitions
OM_IM_EDIT Edit Import Definitions The user mapped to this function can edit import
definitions
OM_IM_TRGR Trigger Import The user mapped to this function can trigger import
Definitions definitions
OM_IM_VIEW View Import Definitions The user mapped to this function can view import
definitions
OPRABORT Batch Abort The user mapped to this function can Abort Batch
OPRADD Create Batch The user mapped to this function will have rights to
define batches
OPRCANCEL Batch Cancellation The user mapped to this function can Cancel Batch
OPRDEL Delete Batch The user mapped to this function will have rights to
delete batches
OPREXEC Execute Batch The user mapped to this function will have rights to run,
restart and rerun batches
OPRLINK Batch Link This function gives access to the LHS Link for
Operations.
OPRMON Batch Monitor The user mapped to this function will have rights to
monitor batches
OPRSCHEDUL Schedule Batch The user mapped to this function can schedule batches
ORACBADD Add Oracle Cube The user mapped to this function can add Oracle cubes
ORACBATH Authorize Oracle Cube The user mapped to this function can authorize Oracle
cubes
ORACBDEL Delete Oracle Cube The user mapped to this function will have rights to
delete Oracle cubes
ORACBMOD Modify Oracle Cube The user mapped to this function can modify Oracle
cubes
ORACBVIW View Oracle Cube The user mapped to this function can view Oracle cubes
PATCHINFO View Patch Information The user mapped to this function can view list of all
fixes/ patches applied
PBLPROCESS Publish Process The user mapped to this function can publish the
process tree
PBLRULE Publish Rule The user mapped to this function can publish the rules
PBLRUN Publish Run The user mapped to this function can publish the run
PLCADD Add Post Load Changes This function gives access to add a PLC
PLCAUTH Authorize Post Load This function gives access to authorize a PLC
Changes
PLCCOPY Copy Post Load This function gives access to copy a PLC
Changes
PLCDEL Delete Post Load This function gives access to delete a PLC
Changes
PLCEDIT Edit Post Load Changes This function gives access to edit a PLC
PLCGENLOG Generate DT Logic This function gives access to Generate the DT Logic
PLCLAT Make Latest Post Load This function gives access to make latest a PLC
Changes
PLCPURGE Purge Post Load This function gives access to purge a PLC
Changes
PLCSUMM PLC Summary This Function gives user access to the PLC Summary.
PLCVIEW View Post Load This function gives access to view a PLC
Changes
PR2SCREEN PR2 Screens The user mapped to this function can access PR2
screens
PRGPROCESS Purge Process The user mapped to this function can purge the process
tree
PRGRULE Purge Rule The user mapped to this function can purge the rules
PRGRUN Purge Run The user mapped to this function can purge the run
PROFMAINT Profile Maintenance The user mapped to this function can access the Profile
Screen Maintenance Screen
PTIGNACC Process Ignore Access If Mapped the user will be able to add or remove access
type restrictions on process object
PTIGNLCK Process Ignore Lock If mapped the user will be able to add or remove lock on
process object
PTLINK Process Link The user mapped to this function can view the process
link
PTSUM Process Summary The user mapped to this function can view the process
summary
RESTPASS Restricted Passwords The user mapped to this function can access the
Screen Restricted Passwords Screen
RESTR_LINK Restructure Link The user mapped to this function can see the
Restructure Link
RESTR_SUM Restructure Summary The user mapped to this function can view summary of
Restructure
RLIGNACC Rule Ignore Access If Mapped the user will be able to add or remove access
type restrictions on rule object
RLIGNLCK Rule Ignore Lock If mapped the user will be able to add or remove lock on
rule object
RLLINK Rule Link The user mapped to this function can view the rule link
RLSETCFG Rules Setup The user mapped to this function can access the Rules
Configuration Screen Setup Configuration Screen
RLSUM Rule Summary The user mapped to this function can view the rule
summary
RNIGNACC Run Ignore Access If Mapped the user will be able to add or remove access
type restrictions on run object
RNIGNLCK Run Ignore Lock If mapped the user will be able to add or remove lock on
run object
RNLINK Run Link The user mapped to this function can view the run link
RNSUM Run Summary The user mapped to this function can view the run
summary
ROLEMAINT Role Maintenance The user mapped to this function can access the Role
Screen Maintenance Screen
RRFSCREEN Rules Framework The user mapped to this function can access Rules
Screens Framework screens
RSTPROCESS Restore Process The user mapped to this function can restore the
process tree
RSTRULE Restore Rule The user mapped to this function can restore the Rule
RSTRUN Restore Run The user mapped to this function can restore the Run
SANDBXAUTH Sandbox Authorize The user mapped to this function can Authorize a
Sandbox Maintenance
SANDBXCR Sandbox Creation The user mapped to this function can view/add the
Sandbox definitions
SANDBXMOD Sandbox Maintenance The user mapped to this function can view the Sandbox
Maintenance
SAVEMD Save Metadata Screen The user mapped to this function can access the Save
Metadata Screen
SCDADD Add SCD This function gives access to add a Slowly Changing
Dimension
SCDCOPY Copy SCD This function gives access to copy a Slowly Changing
Dimension
SCDDEL Delete SCD This function gives access to delete a Slowly Changing
Dimension
SCDEDIT Edit SCD This function gives access to edit a Slowly Changing
Dimension
SCDLAT Make Latest SCD This function gives access to make latest a Slowly Changing
Dimension
SCDPURGE Purge SCD This function gives access to purge a Slowly Changing
Dimension
SCDSUMM SCD Summary This Function gives user access to the Slowly Changing
Dimension Summary
SCDVIEW View SCD This function gives access to view a Slowly Changing
Dimension
SCNDEF Scenario Definition The user mapped to this function can define the
scenarios
SCROPC Operator Console The user mapped to this function will have access to the
operator console
SCRSAU System Administrator The user mapped to this function can access system
Screen administrator screens
SCR_MDB MDB Screen The user mapped to this function can access the MDB
screen
SEGMAINT Segment Maintenance The user mapped to this function can access the
Screen Segment Maintenance Screen
SRCADD Add Data Source This function gives access to add a Data Source
SRCAUTH Authorize Data Source This function gives access to authorize a Data Source
SRCCOPY Copy Data Source This function gives access to copy a Data Source
SRCDEL Delete Data Source This function gives access to delete Data Source
SRCEDIT Edit Data Source This function gives access to edit a Data Source
SRCLAT Make Latest Data This function gives access to make latest a Data Source
Source
SRCPURGE Purge Data Source This function gives access to purge a Data Source
SRCSUMM Source Summary This Function gives user access to the Data Source
Summary
SRCVIEW View Data Source This function gives access to view a Data Source
STRESSDEF Stress Definition The user mapped to this function can define the stress
SUM_WF Summary Access to The user mapped to this function can View Summary of
Workflow and Process Workflow and Process definitions
Definitions
SYSADM System Administrator The user mapped to this function will be a system
administrator
SYSATH System Authorizer The user mapped to this function will be a system
authorizer
TASKCANCEL Cancel Task The user mapped to this function can Cancel Task
TECHAUTH Authorize Technique The user mapped to this function can authorize
techniques
TECHDEF Add Technique The user mapped to this function can define techniques
TRANS_DOC Access to Transfer The User mapped to this function will have access to
Documents Ownership Transfer Documents Ownership
TRCPROCESS Trace Process The user mapped to this function can trace process tree
TRCRULE Trace Rule The user mapped to this function can trace Rule
TRCRUN Trace Run The user mapped to this function can trace Run
UACCR User Access Report This function displays Report for user access rights
UADAR User Admin Activity This function displays Report for various activities of
Report user
UAMADMNREP UAM AdminActivity The user mapped to this function can access the UAM
Reports Screen AdminActivity Reports Screen
UATTR User Attribute Report This function displays Report for various user attributes
UDFADD Add UDF This function gives access to add a User Defined
Function
UDFAUTH Authorize UDF This function gives access to authorize a User Defined
Function
UDFCOPY Copy UDF This function gives access to copy a User Defined
Function
UDFDEL Delete UDF This function gives access to delete a User Defined
Function
UDFEDIT Edit UDF This function gives access to edit a User Defined
Function
UDFLAT Make Latest UDF This function gives access to make latest a User Defined
Function
UDFPURGE Purge UDF This function gives access to purge a User Defined
Function
UDFSUMM UDF Summary This Function gives user access to the User Defined
Function Summary
UDFVIEW View UDF This function gives access to view a User Defined
Function
UGDOMMAP User Group Domain The user mapped to this function can access the User
Map Screen Group Domain Map Screen
UGFLROLMAP User Group Folder Role The user mapped to this function can access the User
Map Screen Group Folder Role Map Screen
UGMAINT User Group The user mapped to this function can access the User
Maintenance Screen Group Maintenance Screen
UGMAP User Group User Map The user mapped to this function can access the User
Screen Group User Map Screen
UGROLMAP User Group Role Map The user mapped to this function can access the User
Screen Group Role Map Screen
UPLOADSCN Upload Scenario The user mapped to this function can upload the
scenario data
USRACTREP User Activity Reports The user mapped to this function can access the User
Screen Activity Reports Screen
USRATH User Authorization The user mapped to this function can access the User
Screen Authorization Screen
USRATTUP User Attribute Upload The user mapped to this function can access the User
Screen Attribute Upload Screen
USRBATMAP User-Batch Execution The user mapped to this function can access the User-
Mapping Screen Batch Execution Mapping Screen
USRMAINT User Maintenance The user mapped to this function can access the User
Screen Maintenance Screen
USRPOPREP User Id Population The user mapped to this function can access the User Id
Reports Screen Population Reports Screen
USRPROFREP User Profile Report The user mapped to this function can access the User
Screen Profile Report Screen
USRROLREP User Role Reports The user mapped to this function can access the User
Screen Role Report Screen
USTATR User Status Report This function displays Report for deleted, disabled,
logged in, authorized and idle users
VARDEF Variable Definition The user mapped to this function can view/add the
Variable definitions
VARSHKDEF Variable Shock The user mapped to this function can define the variable
Definition shocks
VEU_MAP View Map The user mapped to this function can VIEW Map
definitions
VIEWLOG View log The user mapped to this function will have rights to view
log
VIEWMRE View Manage Run The user mapped to this function can view the request
for Run execution
VIEWPROC View Process The user mapped to this function can view the process
tree definitions
VIEWRULE View Rule The user mapped to this function can view the rules
definitions
VIEWRUN View Run The user mapped to this function can view the run
definitions
VIEW_F_KBD View Flexible KBD The user mapped to this function can view summary of
Flexible KBD
VIEW_HOME View APP Landing View the APP Landing Home Screen from Forms
Home Screen from Framework
Forms Framework
VIEW_RESTR View Restructure The user mapped to this function can view summary of
Restructure
VIEW_WF View Workflow and The user mapped to this function can View Workflow
Process Definitions and Process definitions
VIEW_WF_M View Workflow and The user mapped to this function can View Workflow
Process Monitor and Process Monitor
WEBSRVR Web Server Screen The user mapped to this function can access the Web
Server Screen
WFADMLINK Link Access to Process The user mapped to this function will have rights to
Admin open Process Admin
WFDELLINK Link Access to Process The user mapped to this function will have rights to
Delegation open Process Delegation
WF_DLG_ADM Delegation Admin The user mapped to this function will have rights to be
delegation admin
WRTPR_BAT Write-Protected Batch The user mapped to this function can access the Write-
Screen Protected Batch Screen
XLADMIN Excel Admin The user mapped to this function can define Excel
Mapping
XLUSER Excel User The user mapped to this function can Upload Excel Data
Guest HBRACC
Guest HIERACC
Guest MAPPR_ACSS
Guest MDBACCESS
Guest MIGACC
Guest MREACC
Guest ORCUB_ACSS
Guest PTACC
Guest RESTRACC
Guest RESTRSUMM
Guest RLACC
Guest RNACC
Guest WFACC
Guest WFREAD
Guest XLATMACCES
Guest ALIAS_ACSS
Guest BATCH_ACSS
Guest BPROC_ACSS
Guest BUDIM_ACSS
Guest BUHCY_ACSS
Guest BUMSR_ACSS
Guest CATACC
Guest DEFQACCESS
Guest DI_ACCESS
Guest DMMACC
Guest DOCMGMTACC
Guest DQACC
Guest DRENT_ACSS
Guest DTSET_ACSS
Guest DT_ACCESS
Guest ESCUB_ACSS
Guest EXPACC
Guest FFWACCESS
Guest FILACC
Guest FMCACCESS
Guest F_KBDACC
OFSAA Support
Raise a Service Request (SR) in My Oracle Support (MOS) for queries related to OFSAA Applications.