Siebel Automation Testing PDF
Author and Publisher
MD AZIZUDDIN AAMER
[email protected]
PREFACE
This book covers Component, Product, and Operations Testing with respect to
Siebel. It also covers the testing life cycle, including both White Box and
Black Box Testing, as well as Unit Testing (UT), Integration Testing (IT), and
User Acceptance Testing (UAT). It covers the entry and exit criteria for all
testing covered, as well as the deliverables.
Task Based UI
InkData Control
New API called Siebel Test Optimizer (formerly known as Siebel Test Express)
It also covers Recording Siebel Load Test Scripts, Siebel Correlation Library,
Validation, Text Matching Test, Server Response Test, and parameterization.
3. Premature Release Risk - Ability to determine the risk associated with releasing unsatisfactory or untested software products.
4. Business Risks - Most common risks associated with the business using the software.
5. Risk Methods - Strategies and approaches for identifying risks or problems associated with implementing and operating information technology, products, and processes; assessing their likelihood, and initiating strategies to test for those risks.
B. Managing Risks
1. Risk Magnitude - Ability to rank the severity of a risk categorically or quantitatively.
2. Risk Reduction Methods - The strategies and approaches that can be used to minimize the magnitude of a risk.
3. Contingency Planning - Plans to reduce the magnitude of a known risk should the risk event occur.
CSTE Body of Knowledge Skill Category 6
A. Pre-Planning Activities
1. Success Criteria/Acceptance Criteria - The criteria that must be validated through testing to provide user management with the information needed to make an acceptance decision.
2. Test Objectives - Objectives to be accomplished through testing.
3. Assumptions - Establishing those conditions that must exist for testing to be comprehensive and on schedule; for example, software must be available for testing on a given date, hardware configurations available for testing must include XYZ, etc.
4. Entrance Criteria/Exit Criteria - The criteria that must be met prior to moving to the next level of testing, or into production.
B. Test Planning
1. Test Plan - The deliverables to meet the test objectives; the activities to produce the test deliverables; and the schedule and resources to complete the activities.
2. Requirements/Traceability - Defines the tests needed and relates those tests to the requirements to be validated.
3. Estimating - Determines the amount of resources required to accomplish the planned activities.
4. Scheduling - Establishes milestones for completing the testing effort.
5. Staffing - Selecting the size and competency of staff needed to achieve the test plan objectives.
6. Approach - Methods, tools, and techniques used to accomplish test objectives.
7. Test Check Procedures (i.e., test quality control) - Set of procedures based on the test plan and test design, incorporating test cases that ensure that tests are performed correctly and completely.
C. Post-Planning Activities
1. Change Management - Modifies and controls the plan in relationship to actual progress and scope of the system development.
2. Versioning (change control/change management/configuration management) - Methods to control, monitor, and achieve change.
A. Design Preparation
1. Test Bed/Test Lab - Adaptation or development of the approach to be used for test design and test execution.
2. Test Coverage - Adaptation of the coverage objectives in the test plan to specific system components.
B. Design Execution
1. Specifications - Creation of test design requirements, including purpose, preparation and usage.
2. Cases - Development of test objectives, including techniques and approaches for validation of the product. Determination of the expected result for each test case.
3. Scripts - Documentation of the steps to be performed in testing, focusing on the purpose and preparation of procedures; emphasizing entrance and exit criteria.
4. Data - Development of test inputs, use of data generation tools. Determination of the data set or sub-set needed to ensure a comprehensive test of the system. The ability to determine data that suits boundary value analysis and stress testing requirements.
A. Execute Tests - Perform the activities necessary to execute tests in accordance with the test plan and test design (including setting up tests, preparing database(s), obtaining technical support, and scheduling resources).
B. Compare Actual versus Expected Results - Determine if the actual results met expectations (note: comparisons may be automated).
C. Test Log - Logging tests in a desired form. This includes incidents not caused by testing but that still prevent testing from proceeding.
D. Record Discrepancies - Documenting defects as they happen, including supporting evidence.
A. Defect Tracking
1. Defect Recording - Defect recording is used to describe and quantify deviations from requirements.
2. Defect Reporting - Report the status of defects, including severity and location.
3. Defect Tracking - Monitoring defects from the time of recording until satisfactory resolution has been determined.
B. Testing Defect Correction
1. Validation - Evaluating changed code and associated documentation at the end of the change process to ensure compliance with software requirements.
2. Regression Testing - Testing the whole product to ensure that unchanged functionality performs as it did prior to implementing a change.
3. Verification - Reviewing requirements, design, and associated documentation to ensure they are updated correctly as a result of a defect correction.
A. Concepts of Acceptance Testing - Acceptance testing is a formal testing process conducted under the direction of the software users to determine if the operational software system meets their needs, and is usable by their staff.
B. Roles and Responsibilities - The software testers need to work with users in developing an effective acceptance plan, and to ensure the plan is properly integrated into the overall test plan.
C. Acceptance Test Process - The acceptance test process should incorporate these phases:
1. Define the acceptance test criteria
2. Develop an acceptance test plan
3. Execute the acceptance test plan
Decision Rules: Decision Step Details
Process Properties
A Word about Process Properties
Developers can also develop or modify workflows using Siebel Tools connected to the
development database by locking the project in the master repository. This way, they do
not need to make sure that all the list-of-values are made available to the local database.
Event Logs
Migrate to Production
Using Siebel State Model
Business Challenge and Solution
Siebel State Model
Siebel Business Rules Using HaleyAuthority
About HaleyAuthority
Advantages of HaleyAuthority
Haley Architecture
Start With T_
Used to hold temporary values and status during processing steps.
Data Mapping
Run EIM
Verify Results
Process Flow Between the Siebel Database and Other Databases
Sub-Organization Visibility
Access Group
Why Another Visibility Control Mechanism?
What are Access Groups?
How Access Group Addresses Complex Requirements
Behavior of Community and Catalog
Overview of EAI
Need for Integration
Basic Integration Tasks
Identify the Data to Integrate in Each Application
Map and Transform the Data from Each Application
Transport the Data Between Applications
Traditional Application Integration
Features of an Integrated Environment
Integration Approaches
Siebel Integration Strategies
Workflow for EAI
Elements of Workflow for EAI
EAI Dispatch Service
Virtual Business Components
eBusiness Connectors
Enterprise Integration Manager (EIM)
Object Interfaces
Other EAI Strategies
Comparing EAI Strategies
Matching EAI Strategies to Integration Approaches
1. Synchronize Siebel Data with External Data
Siebel Buttons
Siebel Picklists
Confirmation Dialogs
Siebel Menus
Search Button
Parameterization
Object Identification
Siebel Object Identification Preferences
Object Test
Table Test
Recording a Siebel Load Test Script
Test Framework
Within the development life cycle, each standard test stage has specific testing
tasks within the testing life cycle.
These tasks are illustrated relative to the Testing Framework:
Component Test
Assembly Test
Product Test
User Acceptance Test
Performance Test
Test Stages:
Technical Architecture Team
Here are the testing stages for the Technical Architecture team:
Functional Testing
The different test stages can be categorized as two types: functional & technical testing.
Here are the different functional tests:
Component Test
Assembly Test
Product Test
- Application product test
- Integration product test
User Acceptance Test
Technical Testing
Here are the different technical tests:
Performance Test - This test is carried out to ensure that a release is capable of
operating at the load levels specified in the business performance requirements
and any agreed-on Service Level Agreements (SLAs).
Component test ensures that the logic implemented in the module scope satisfies
the module's requirements
The component test stage spans a number of individual component tests that
prove the low-level functionality of all end system components
The component test also validates that a module's code reflects the appropriate
detailed logic set forth in the design specification
A component test condition denotes a unique path through the module's logic.
The scope of the test condition encompasses logical branches, limits, etc.
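As a sketch of what a component test condition means in practice, the following plain-JavaScript routine (illustrative only, not taken from a Siebel module) contains three unique logical paths; component testing covers each path plus the limit values:

```javascript
// Illustrative module logic with three unique paths through it.
function discountRate(orderTotal) {
  if (orderTotal < 100) return 0;       // path 1: below the lower limit
  if (orderTotal <= 1000) return 0.05;  // path 2: within the limits
  return 0.10;                          // path 3: above the upper limit
}

// One component test condition per path, plus the limit values themselves.
const conditions = [
  { input: 50,   expected: 0    },  // path 1
  { input: 100,  expected: 0.05 },  // lower limit of path 2
  { input: 1000, expected: 0.05 },  // upper limit of path 2
  { input: 1001, expected: 0.10 }   // path 3
];
const failures = conditions.filter(c => discountRate(c.input) !== c.expected);
```

Each entry in `conditions` is one test condition; after execution, `failures` should be empty.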
Inputs:
Deliverables:
Test Plan
Checklists:
Component Test Entry Criteria
Component Test Exit Criteria
Assembly Test:
Assembly test ensures that related components function properly when assembled
Testing the application component interfaces helps verify that correct data is
passed between components
At the completion of assembly testing, all component interfaces in the application
are executed and proven to work according to specifications
It is imperative that the assembled components be fully tested before they are
migrated to product test
Deliverables:
Test Approach
Test Conditions & Expected Results
Test Cycle Control Sheet
Inputs:
Test Plan
Common Test Data
Deliverables:
Test Closure Memo
Common Test Data
Test Plan
Clearly define the boundary between component testing and assembly testing
Focus this test stage on the interfaces between components
Pay attention to the efficiency of the assembly test
Plan for regression testing
Ensure adequate assembly testing
Consider the impact of object development
Align the assembly testing schedule with the delivery of assemblies
Creating assembly test plans for object-oriented applications requires more effort
Prioritize the testing efforts
Plan for migration
Refine test plans after running test scripts once
Foster collaboration between testers and developers
Set up the common test data for testing multiple applications
Product Test:
Product testing ensures that the following requirements have been met:
Fit/Gap analysis
Application inventories
Integration design
Business process design
Requirements
Testing Strategy
Metrics
Deliverables:
Test Approach
Test Conditions & Expected Results
Test Cycle Control Sheet
Checklists:
Deliverables:
Checklists:
User acceptance test ensures that the solution meets the original functional and
business requirements and is acceptable to the client.
UAT may span the same areas as application and integration product testing, but
design it with help from the business sponsors and end users.
Deliverables:
Test Approach
Test Conditions & Expected Results
Test Cycle Control Sheet
Inputs:
Test Plan
Common Test Data
Deliverables:
Test Closure Memo
Test Plan
Checklists:
User-Accepted Application Checklist
Performance Test:
To identify and fix system performance issues before the system goes live. This
generally includes load test, stress test, stability test, throughput test, and ongoing
performance monitoring.
Inputs:
Fit/Gap Analysis
User Scenario
Application Inventories
Deliverables:
Test Approach
Test Conditions & Expected Results
Test Cycle Control Sheet
Test Plan
Common Test Data
Deliverables:
Checklists:
Key Considerations
Plan technical architecture performance tests
Link technical infrastructure sizing activities with performance test planning
Focus on quality requirements
Model the test environments after the production environment
Conduct technical architecture performance test prior to application performance
test
Refine test plans after running the first test script set
Use performance testing tools
Return performance test modifications to product test for functional verification
Prepare to handle performance problems with conversion programs
Trace back to design defects when fixing performance problems
Obtain live data input files for the performance test
Tune the application rather than the infrastructure
Budget extra time to resolve defects
Deliverables:
Test Plan
Migration Procedures
Migration Verification Scripts
Migration Request Form
Checklists:
Inputs:
Metrics
Service Introduction Plan
Service Level Agreement
Test Plan
Training Materials
Performance Support Materials
Deliverables:
Test Closure Memo
Checklists:
Application Operability Checklist
Operations Test Entry Criteria
Operations Test Exit Criteria
Service Level Test Entry Criteria
Service Level Test Exit Criteria
Key Considerations:
1. Risk Analysis
A. Risk Identification
1. Software Risks - Knowledge of the most common risks associated with
software development, and the platform you are working on.
2. Testing Risks - Knowledge of the most common risks associated with
software testing for the platform you are working on, tools being used, and
test methods being applied.
3. Premature Release Risk - Ability to determine the risk associated with
releasing unsatisfactory or untested software products.
4. Business Risks - Most common risks associated with the business using
the software.
5. Risk Methods - Strategies and approaches for identifying risks or
problems associated with implementing and operating information
technology, products, and processes; assessing their likelihood, and
initiating strategies to test for those risks.
B. Managing Risks
1. Risk Magnitude - Ability to rank the severity of a risk categorically or
quantitatively.
2. Risk Reduction Methods - The strategies and approaches that can be
used to minimize the magnitude of a risk.
3. Contingency Planning - Plans to reduce the magnitude of a known risk
should the risk event occur.
2. Test Planning Process
A. Pre-Planning Activities
1. Success Criteria/Acceptance Criteria - The criteria that must be
validated through testing to provide user management with the
information needed to make an acceptance decision.
2. Test Objectives - Objectives to be accomplished through testing.
3. Assumptions - Establishing those conditions that must exist for testing to
be comprehensive and on schedule; for example, software must be
available for testing on a given date, hardware configurations available for
testing must include XYZ, etc.
4. Entrance Criteria/Exit Criteria - The criteria that must be met prior to
moving to the next level of testing, or into production.
B. Test Planning
1. Test Plan - The deliverables to meet the test objectives; the activities to
produce the test deliverables; and the schedule and resources to complete
the activities.
2. Requirements/Traceability - Defines the tests needed and relates those
tests to the requirements to be validated.
3. Estimating - Determines the amount of resources required to accomplish
the planned activities.
4. Scheduling - Establishes milestones for completing the testing effort.
4. Performing Tests
A. Execute Tests - Perform the activities necessary to execute tests in accordance
with the test plan and test design (including setting up tests, preparing data
base(s), obtaining technical support, and scheduling resources).
B. Compare Actual versus Expected Results - Determine if the actual results met
expectations (note: comparisons may be automated).
C. Test Log - Logging tests in a desired form. This includes incidents not caused
by testing but that still prevent testing from proceeding.
D. Record Discrepancies - Documenting defects as they happen including
supporting evidence.
5. Defect Tracking and Correction
A. Defect Tracking
1. Defect Recording - Defect recording is used to describe and quantify
deviations from requirements.
2. Defect Reporting - Report the status of defects; including severity and
location.
3. Defect Tracking - Monitoring defects from the time of recording until
satisfactory resolution has been determined.
B. Testing Defect Correction
1. Validation - Evaluating changed code and associated documentation at
the end of the change process to ensure compliance with software
requirements.
2. Regression Testing - Testing the whole product to ensure that unchanged
functionality performs as it did prior to implementing a change.
3. Verification - Reviewing requirements, design, and associated
documentation to ensure they are updated correctly as a result of a defect
correction.
6. Acceptance Testing
A. Concepts of Acceptance Testing - Acceptance testing is a formal testing process
conducted under the direction of the software users to determine if the
operational software system meets their needs, and is usable by their staff.
B. Roles and Responsibilities - The software testers need to work with users in
developing an effective acceptance plan, and to ensure the plan is properly
integrated into the overall test plan.
C. Acceptance Test Process - The acceptance test process should incorporate these
phases:
1. Define the acceptance test criteria
2. Develop an acceptance test plan
3. Execute the acceptance test plan
7. Status of Testing
Metrics specific to testing include data collected regarding testing, defect tracking,
and software performance. Use quantitative measures and metrics to manage the
planning, execution, and reporting of software testing, with focus on whether goals
are being reached.
A. Test Completion Criteria
1. Code Coverage - Purpose, methods, and test coverage tools used for
monitoring the execution of software and reporting on the degree of
coverage at the statement, branch, or path level.
2. Requirement Coverage - Monitoring and reporting on the number of
requirements exercised, and/or tested to be correctly implemented.
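Requirement coverage as described above can be computed from a traceability mapping; a minimal sketch in plain JavaScript (the data shape is an assumption for illustration, not a Siebel structure):

```javascript
// traceability maps each requirement ID to the tests that exercise it.
function requirementCoverage(requirementIds, traceability) {
  const covered = requirementIds.filter(id => (traceability[id] || []).length > 0);
  return covered.length / requirementIds.length;
}

const reqs = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"];
const trace = { "REQ-1": ["TC-01"], "REQ-2": ["TC-02", "TC-03"], "REQ-3": [] };
const coverage = requirementCoverage(reqs, trace); // 2 of 4 requirements covered
```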
B. Test Metrics
8. Test Reporting
1. Graph-Based Testing: Identifying the objects and the relationships between them,
and then testing whether the relationships behave as expected. Graphs are used to
prepare the test cases, in which objects are represented as nodes and relationships as
links.
2. Equivalence Class Testing: The set of input domains is partitioned into different
classes, and test data is selected from each class. The equivalence classes represent the
valid and invalid states for input conditions.
3. Boundary Value Testing: Carried out by selecting test cases that exercise bounding
values. For example, if a test case accepts values in the range (a to d), then test the
behaviour at a and d.
4. Comparison Testing: Also called back-to-back testing. In this method, different
software teams build a product using the same specification but different technologies
and methodologies. All the versions are then tested and their output is compared. It is
not a foolproof testing method, since even if all versions give the same results, they
may all be incorrect.
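Equivalence class and boundary value selection (techniques 2 and 3 above) can be expressed as simple test-data generators; a sketch in plain JavaScript, assuming a numeric field valid over an inclusive range:

```javascript
// Boundary value testing: the bounds themselves plus values just inside and
// just outside each bound.
function boundaryValues(min, max, step = 1) {
  return [min - step, min, min + step, max - step, max, max + step];
}

// Equivalence class testing: one representative value per partition
// (one valid class, two invalid classes).
function classRepresentatives(min, max) {
  return {
    belowRange: min - 10,                  // invalid: under the minimum
    inRange: Math.round((min + max) / 2),  // valid: inside the range
    aboveRange: max + 10                   // invalid: over the maximum
  };
}
```

For a field valid from 1 to 10, `boundaryValues(1, 10)` yields the six inputs [0, 1, 2, 9, 10, 11].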
(Figure: inputs causing anomalous behaviour applied to the system under test.)
Analyze Phase
(Figure: test life cycle - Analyze, Design, Implement, Execute - with the Analyze phase highlighted.)
Test basis:
Design Phase
(Figure: test life cycle - Analyze, Design, Implement, Execute - with the Design phase highlighted.)
Test Strategy: The distribution of the test effort and coverage aimed at finding the
most important defects as soon as possible
Test Case#/Title
Scenario/Description:
Prerequisites
Access Path
Test Case Author/date
Test Case Actor(s)/Role
Environment
Step #/Description
Expected Results
Actual Results
Pass/Fail
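The template fields above map naturally onto a record structure; a hypothetical example in plain JavaScript (all names and values are illustrative):

```javascript
// One physical test case following the template fields listed above.
const testCase = {
  id: "TC-001",
  title: "Create service request",
  scenario: "A call center agent logs a new service request",
  prerequisites: ["Agent user exists", "Test account loaded"],
  accessPath: "Site Map > Service > Service Requests",
  author: "tester / 2024-01-01",
  actors: ["Call Center Agent"],
  environment: "Test",
  steps: [
    { description: "Click New on the Service Requests list",
      expected: "An empty SR record appears", actual: null, pass: null },
    { description: "Enter a summary and save the record",
      expected: "The SR is saved with a generated SR number", actual: null, pass: null }
  ]
};

// A test case passes only if every step passed.
function verdict(tc) {
  return tc.steps.every(s => s.pass === true) ? "Pass" : "Fail";
}
```

Until actual results are recorded against each step, `verdict(testCase)` reports "Fail".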
Implement Phase
(Figure: test life cycle - Analyze, Design, Implement, Execute - with the Implement phase highlighted.)
Logical Test case: A series of situations to be tested from start to finish for
running the test object.
Physical Test case: Detailed description of the logical test case containing a
starting situation, actions and result checks.
Execute Phase
(Figure: test life cycle - Analyze, Design, Implement, Execute - with the Execute phase highlighted.)
Test Execution
Defect management: Logging, tracking, retesting
Test result reporting: Pass/Fail report
Siebel Testing
Two main areas of focus:
Functionality
Does the application function properly?
Performance
Does the application perform properly under load/stress?
Functional Testing is initiated following compilation and check-in of the unit-tested work package.
Comprehensive test of configurations, modifications or customizations made to a
component.
Includes UI testing in case of any modifications to the GUI.
It does not focus on end-to-end system functionality.
Follows Black box testing techniques
Functionality will be tested end to end and Interfaces will be tested implicitly
through functionality
Local Data Structure: Ensures that the data stored temporarily maintains its
integrity during all steps in an algorithm's execution
Statement testing
FT, IT, SIT, and RT will use Black Box Testing techniques
Refers to conducting tests by knowing the specified function that a product has
been designed to perform
Boundary Value Analysis: Test the boundary value itself and a significant value
on either side of the boundary
State Transition: Test the states the system can be in, the transitions between those
states, the actions that cause the transitions, and the actions that may result from
the transitions.
Equivalence Partitioning: Identify the partition of values and select representative
values from within this partition to test
Thread testing: test the business logic in the same way a user or an operator might
interact with the system during its normal use
Error Handling: determines the ability of the system to properly process incorrect
transactions.
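State transition testing, for example, can start from an explicit transition table; a sketch using a hypothetical service request status model (the states and allowed transitions here are illustrative, not the actual Siebel State Model configuration):

```javascript
// Hypothetical status model: each state maps to the states it may move to.
const transitions = {
  "Open":        ["Assigned", "Closed"],
  "Assigned":    ["In Progress", "Open"],
  "In Progress": ["Resolved"],
  "Resolved":    ["Closed", "In Progress"],
  "Closed":      []
};

// A transition is valid only if the table defines it; test cases cover every
// defined transition and representative undefined ones.
function canTransition(from, to) {
  return (transitions[from] || []).includes(to);
}
```

Every pair in the table is a positive test; pairs such as Closed to Open are negative tests that the application must reject.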
Testware
Time Pressure
Lack of Planning
Insufficient resources
Unclear specifications & documentation
Lack of management and control
Conflicting interests
Requirements: UCBT
Processes: Using RUP
Roadmap: Using Siebel eRoadmap
Architecture: Testing SOA
Risks: using T-MAP
Skills: Developing foundational skills
Primary actor: the actor which initiates a use case to satisfy a goal
Secondary actor: an actor which collaborates to support the completion
of the Use Case.
UCBT
Using RUP
What is RUP?
RUP Processes
RUP processes are organized as follows:
Siebel eRoadmap
(Figure: eRoadmap phases - Define goals, Discover detailed business requirements, Design solution, Perform configuration, Validate application, Deploy application, Monitor progress.)
Testing SOA
The overall goal of testing remains the same
However, some of the risks have to do with
Loose Coupling of services
Heterogeneous technologies within a solution
Lack of total control over all elements of a solution
Example: Third Party services
Lifecycle
Lifecycle Phases
Planning and Control (P&C)
Preparation (P)
Specification (S)
Execution (E)
Completion (C)
Techniques
Organization
Organization Areas
Infrastructure
Includes all facilities and resources required for structured testing, namely:
Test Environment
Test Tools
Office Environment
An installed solution designed for companies with fewer than 100 users
Accounts
Contacts
Opportunities
Orders
Service requests
Activities
Assets
Accounts
Are businesses external to your company
Contacts
Are people with whom you do business
Have the following characteristics
A name
A job title
An email address and phone number
Opportunities
Are potential revenue-generating events
Have the following characteristics
A possible association with an account
An identified potential revenue
A probability of completion
A close date
Orders
Are products or services purchased by your customers
Have the following characteristics
An order number
A status and priority
An associated account
Service Requests
Are requests from customers for information or assistance with a problem related
to products or services purchased from your company
Have the following characteristics
A status
A severity level
A priority level
Activities
Are specific tasks or events to be completed
Have the following characteristics
A start date and due date
A priority level
Assigned employees
Assets
Are instances of purchased products
Have the following characteristics
An asset number
A product and part number
A status level
Allows your sales force to manage accounts, sales opportunities, and contacts
Siebel Sales
Opportunities view
Allows companies to develop, manage, and deliver dynamic product catalogs across all
customer channels
eSales Catalog screen
Advisor
Browse products
Quick Add to shopping cart
Partner Portal
Opportunities screen
(Figure: multiple implementation phases - Plan, followed by repeated Define, Discover, Design, Configure, Validate, Deploy cycles, ending with Sustain.)
Siebel Data
Is organized and stored in normalized tables in a relational database
Table
(Figure: table S_PROD_INT with columns ROW_ID, NAME, PART_NUM, UOM_CD; columns store single values only.)
Primary Key
Is a column that uniquely identifies each row in a table
(Figure: in S_PROD_INT, ROW_ID is the primary key (PK); other columns are NAME, PART_NUM, UOM_CD.)
ROW_ID
Is a column in every table
Contains a Siebel application-generated identifier that is unique across all
tables and mobile users
Is the means by which Siebel applications maintain referential integrity
Database referential integrity constraints are not used
Is managed by Siebel applications and must not be modified by users
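Because referential integrity is maintained by the application rather than by database constraints, a data-quality test can verify it explicitly; a minimal sketch (table shapes are simplified for illustration):

```javascript
// Return child rows whose foreign key value matches no ROW_ID in the parent
// rows - for example, assets whose PROD_ID points at no product.
function orphanRows(childRows, fkColumn, parentRows) {
  const parentIds = new Set(parentRows.map(r => r.ROW_ID));
  return childRows.filter(r => !parentIds.has(r[fkColumn]));
}

const products = [{ ROW_ID: "1-A" }, { ROW_ID: "1-B" }];
const assets = [
  { ROW_ID: "2-X", PROD_ID: "1-A" },
  { ROW_ID: "2-Y", PROD_ID: "1-Z" }  // dangling reference
];
const orphans = orphanRows(assets, "PROD_ID", products);
```

Any row returned in `orphans` indicates a referential integrity defect to record.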
Tables
Approximately 3,000 tables in the database
(Figure: the three table types - Data table S_PROD_INT (ROW_ID, NAME, PART_NUM, UOM_CD), Interface table EIM_PROD_INT (ROW_ID, NAME, PART_NUM, UOM_CD), and Repository table S_TABLE (ROW_ID, NAME, DESC_TEXT, ALIAS, TYPE).)
Data Tables
Store the user data
Business data
Administrative data
Seed data
(Figure: example data tables - S_PROD_INT (Internal Product: ROW_ID, NAME, PART_NUM, UOM_CD), S_CONTACT (Contact: ROW_ID, LAST_NAME, FST_NAME, MID_NAME), S_SRV_REQ (Service Request: ROW_ID, SR_NUM, DESC_TEXT, OWNER_EMP_ID, RESOLUTION_CD), and S_OPTY (Opportunity: ROW_ID, BDGT_AMT, NAME, PROG_NAME, STG_NAME).)
Interface Tables
Are a staging area for importing and exporting data
Are used only by the Enterprise Integration Manager server component
Are named with prefix EIM_
Repository Tables
Contain the object definitions that specify one or more Siebel applications
Client application configuration
UI, business, and object definitions
Mappings used for importing and exporting data
Rules for transferring data to mobile clients
Are updated using Siebel Tools
Columns
Each table has multiple columns to store user and system data
Defined by the Column child object definitions
Columns determine the data that can be stored in that table
Column Properties
Important properties of columns
Properties of existing tables and columns should not be edited
Understanding these properties is important
Determines the size and type of data that can be stored in a column
Limits proposed modifications to a standard application
Identifies type and size of data
System Columns
Exist for all tables to store system data
Are maintained by Siebel applications and tasks
Can be viewed by right-clicking the record or from Help > About Record
User Key
Specifies columns that must contain a unique set of values
Prevents users from entering duplicate records
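A user key can likewise be verified in test data; a sketch that flags rows violating a user key composed of several columns (the column names are examples):

```javascript
// Report rows whose user key values duplicate an earlier row's.
function userKeyViolations(rows, keyColumns) {
  const seen = new Set();
  const dupes = [];
  for (const row of rows) {
    const key = keyColumns.map(c => row[c]).join("|");
    if (seen.has(key)) dupes.push(row);
    else seen.add(key);
  }
  return dupes;
}
```

For a user key of ["NAME", "PART_NUM"], two rows sharing both values are reported as a violation.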
Index
Is a separate data structure that stores a data value for a column and a pointer to
the corresponding row
Are used to retrieve and sort data rapidly
Can be created or upgraded through Siebel Tools
Should be inspected to assess performance issues for query and sort operations
Sequence affects the sort order in business components
(Figure: S_PROD_LN (Product Line: ROW_ID, NAME, DESC_TEXT) has an M:M relationship with S_PROD_INT (ROW_ID, NAME, PART_NUM, UOM_CD), which has a 1:M relationship with S_ASSET (Asset: ROW_ID, ASSET_NUM, MFGD_DT, SERIAL_NUM).)
1:M Relationships
Are captured using foreign key table columns in the table on the many side of the
relationship
(Figure: the S_ASSET table's PROD_ID column is a foreign key storing the ROW_ID of the related S_PROD_INT row.)
M:M Relationships
Are captured using foreign key table columns in a third table called the
intersection table
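Resolving an M:M relationship therefore means joining through the intersection table; a sketch using the product line example above (the intersection column names PROD_LN_ID and PROD_ID are assumptions for illustration):

```javascript
// Walk from a product line to its products via the intersection rows.
function productsForLine(lineId, intersectionRows, productRows) {
  const productIds = intersectionRows
    .filter(r => r.PROD_LN_ID === lineId)
    .map(r => r.PROD_ID);
  return productRows.filter(p => productIds.includes(p.ROW_ID));
}

const intersection = [
  { PROD_LN_ID: "L1", PROD_ID: "P1" },
  { PROD_LN_ID: "L1", PROD_ID: "P2" },
  { PROD_LN_ID: "L2", PROD_ID: "P2" }  // P2 belongs to two lines: M:M
];
const products = [{ ROW_ID: "P1", NAME: "A" }, { ROW_ID: "P2", NAME: "B" }];
```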
Extension Tables
Provide additional columns for business components referencing the base table
A base table and its extension table can be considered as a single logical table
(Figure: base table S_PROD_INT (ROW_ID, NAME, PART_NUM, UOM_CD) and its extension table S_PROD_INT_X (ROW_ID, PAR_ROW_ID, ATTRIB_39); the ATTRIB_39 column stores the Stock Level field.)
Is used:
To provide flexibility for both Siebel engineering and customer use
To support multiple business components referencing the S_PARTY table
(discussed in next module)
1:M Extension Tables
Allow you to track entities that do not exist in the standard Siebel applications
(Figure: S_CONTACT (ROW_ID, FST_NAME, LAST_NAME, EMAIL_ADDR) and its extension table S_CONTACT_XM (ROW_ID, PAR_ROW_ID, TYPE, NAME, ATTRIB_01).)
The PAR_ROW_ID column stores the foreign key to ROW_ID in the main table
The TYPE column distinguishes between different types of data in the table
Overview of Scripting
Siebel Scripting Terms
Siebel Scripts
Are procedures that enable configuration of Siebel applications to extend standard
behavior
Are added using the Script Editor or by importing text files
Use a common syntax and commands
Are written in one of the following
Siebel Visual Basic (SVB)
Similar to Visual Basic for Applications (VBA)
Used only in Windows platforms
Siebel eScript
Similar to JavaScript
Case-sensitive, including method names
Used in Windows and UNIX platforms
Are executed by event handlers when specified events occur
An event handler is the Siebel code that executes in response to the event
Example: When the user steps off a record being edited (the event),
the application responds by committing the record to the database
(the event handler)
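An event handler of that kind can be sketched in eScript style. The following is standard JavaScript approximating Siebel eScript, with stubs standing in for the runtime objects that the Object Manager would normally provide; the event name, field name, and constant values are illustrative:

```javascript
// Stubs emulating the Siebel runtime so this sketch runs standalone.
const ContinueOperation = 1, CancelOperation = 2;
function TheApplication() {
  return { RaiseErrorText: function (msg) { throw new Error(msg); } };
}

// eScript-style PreWriteRecord handler: veto the record commit when a
// required field is empty. In real server script this runs on the business
// component itself, using this.GetFieldValue(...) rather than an explicit
// busComp argument.
function BusComp_PreWriteRecord(busComp) {
  if (busComp.GetFieldValue("Description") === "") {
    TheApplication().RaiseErrorText("Description is required.");
  }
  return (ContinueOperation);
}
```

RaiseErrorText aborts the operation and shows the message to the user; here the stub simply throws.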
Browser Scripts
Execute in and are interpreted by the Web browser
Are written in eScript (JavaScript)
Interact with the Document Object Model (DOM)
Interact with the Siebel Object Model in the browser via the Browser Interaction
Manager
Enable developers to script the behavior of:
Siebel events
Browser events exposed by the DOM
Server Scripts
Execute within the Object Manager
Are written in eScript or Siebel Visual Basic
Enable developers to script the behavior of:
Business components via business component scripts
Business services via business service scripts
Applications via application scripts
Applets via applet Web scripts
Use event handlers for the various events exposed by the scripting model
Workflow Process
Workflow Step
Is an operation that begins,
performs its function, and ends
Is dragged and dropped into a
workflow from the Palette
Events: Workflow processes can be invoked from events in the Siebel application or from external systems. Events can pass context from the caller (the user session, for example) to a workflow using a row ID.
Rules: Flow control is based on rules. Rules can be based on business component data or on local variables known as process properties. Rule expressions are defined using Siebel Query Language.
Actions: An action can perform database record operations or invoke business services. An action can also cause a pause in the workflow.
Data is created or updated as the process executes. There are basically three types of data a workflow process operates on: business component data, process properties, and Siebel Common Object data. You can think of process properties as local variables that are active during the process. The variables are used as inputs and outputs to the various steps in a process.
There are basically three ways to invoke a workflow process: through workflow policies, run-time events, and Siebel Tools object events.
Workflow policies allow you to define policies, or rules that can act as triggers to execute a
workflow process. The basic construct of a policy is a rule. If all the conditions of a rule are true,
then an action occurs. Typical usages of a workflow policy are EIM batch, EAI inserts and
updates, manual changes from the user interface, assignment manager assignments and Siebel
remote synchronization.
When deciding whether to implement a workflow policy versus a workflow process, there are some additional things you may want to consider. Data coming into the Siebel application via the data layer, such as EIM and MQ channels, and data that cannot be captured via the business layer are typically good candidates for a workflow policy. Some features not supported by workflow processes, such as eMail Consolidation, Duration, and Quantity, are also candidates for workflow policies. However, workflow processes provide a better platform for development and deployment: they support complex comparison logic and flow management (if, then, else, or case), leverage business layer logic, can invoke business services, and offer pause, stop, and error-handling capability.
Workflow Policy
Monitors the server database
Invokes a workflow process
when a condition occurs
Runs the server components
Workflow Process Manager
(WfProcMgr) and Workflow
Process Batch Manager
(WfProcBatchMgr)
To invoke a workflow with
steps that call EAI adapters
from a workflow policy, create
a workflow policy action based on
the Run Integration Process program
Workflow Process and Runtime events ensure most events are captured at the business layer. However, there are business scenarios where the Workflow Policy Manager would be the best alternative. Workflow Policy Manager ensures business logic is captured at the data layer of the Siebel architecture. Examples of such scenarios are bulk data uploads via EIM, or Data Quality cleansing performed in the data layer.
When using Workflow Policy, the data layer business policy enforcement is
done via database triggers.
When a particular policy is violated, underlying triggers capture database events
into a Workflow Policy Manager's queuing table (S_ESCL_REQ).
Workflow Policy Manager component (Workflow Monitor Agent) polls this table
and processes requests by taking appropriate actions defined. In some cases,
actions might be to invoke Workflow Process Manager.
Workflow Policy Manager provides additional scalability by using an additional
component called Workflow Action Agent that can be executed on a different
application server within the Siebel Enterprise.
There are several different ways to implement business rules in a workflow process. The following chart shows the major ways and compares when to use them.
Decision steps exit with multiple branches. For each branch a conditional statement is evaluated.
A conditional statement compares any two of the following:
process properties,
business component fields or
literal values.
The Compose Condition Criteria dialog box is shown in Figure 7. This example shows a branch in
a workflow where the branch would be followed if the Severity field from a Service Request
matched the value 1-Critical.
Actions
There are several ways to effect actions in a workflow; in other words, data is taken as input, a transformation takes place, and data is produced as output. Table 4 below shows the major ways to cause a transformation, with some guidance on deciding which to use.
service, the business service method, input arguments (pass in Process Property, BusComp
data, or literal value) and output arguments.
DataTransfer
GetActiveViewProp
QueueMethod
TryMockMethod.
2. FINS Validator
Description: Validates data based on predefined rules. It is configured through Application Administration rather than script, and supports custom messages.
Available Methods:
Validate
3. FINS Dynamic UI Service
Description: Allows creating and rendering of read-only views with a single read-only applet
based on user input. Administered through admin views and not script.
Available Methods:
AddRow
DeleteRow
SetViewName
ScheduleReport
SyncOne
Siebel Operation steps allow you to perform database operations of Insert, Update and Query.
These steps are performed on business components. Once you have defined the Operation step,
you can use the Fields child object to define any field values for the step. Also, for an Update you
can use the Search Specification child object to define the records you want to update.
Examples of Operation steps include creating an Activity record when a new SR is opened, or updating a comment field if an SR has been open too long.
Process properties are used as input and output arguments for the entire process. Process
properties are used to store values that a workflow process retrieves from the database or
derives before or during processing. Decision branches can use the values in a process property
and pass properties as step arguments. When a workflow process completes, the final results of
the process properties are available as output arguments. Process property values can be used
in expressions.
With every workflow there is a set of predefined process properties that are automatically
generated when you define the workflow. These are:
The following section brings out some points with Siebel 7.7 Workflow in Siebel Tools and
developing on a local database.
Workflow is a repository object and belongs to a project. In Siebel 7.7, workflow does not participate in the following behaviors that are standard for other repository objects:
SRF: workflow has its own deployment mechanism, the details of which can be found in the Business Process Designer Administration Guide
Merge: workflow does not participate in the 3-way merge; when workflow definitions are imported into the repository, they maintain the versioning provided by workflow
Object Comparison: disabled for workflow in Siebel 7.7
Archive: workflows do not participate in .sif archives; instead, workflows can be archived as XML files using the workflow export utility
Typically, developers use local database to develop workflows. When using local database,
workflow definitions need to be checked-out from the master repository.
When developing workflows in the local database, the local database must have all the referenced data objects. Data objects that are not docked, and hence not packaged as part of the database extract, need to be imported into the local database. The following objects are not docked and are referenced by workflow:
Data Maps
Message tables
To import data maps to the local database, you would use the dedicated client connected to the
local database and use the client-side import utility. Message tables can be copied over to the
local database. Alternatively, developers can define messages using the unbounded picklist. This
allows the creation of the messages but does not check the validity of the message at definition time.
Developers can also develop or modify workflows using Siebel Tools connected to the development database by locking the project in the master repository. This way, they do not need to make sure that all the list-of-values are made available to the local database.
Event Logs
More detailed information on the execution can be viewed in log files by setting event logs.
Events used for logging are as follows:
Migrate to Production
Once the workflows are tested, they are marked for deployment by clicking the Deploy button and
then checked into the master repository. Deployment of workflow is a two-step process:
1. Using Siebel Tools, workflow definitions are marked for deployment. This is done by clicking
the Deploy button in Siebel Tools.
2. Using the Siebel Client, workflows are activated from the Business Process Administration
view. The process of activation, writes the definitions from the repository tables to the runtime
tables for the workflow engine to execute.
Workflow definitions can be migrated across environments, from Development to Production for
example, using one of the following migration utilities:
1. Repository Migration Utility: this utility allows export/import of all repository objects. It is best used to migrate workflow definitions when the business is ready to roll out the release (for example, to migrate all repository objects).
The following lists some commonly encountered errors for Workflow Process Manager.
1. Problem: You activated your workflow but it is not executing
Solution: Verify that <Reload Runtime Events> was performed. To tell whether a process has been triggered, turn workflow logging (EngInv, StpExec, PrcExec) on. See the Business Process Administration Guide in Siebel Bookshelf 7.7 for procedures on how to do this.
2. Problem: You revised the workflow process and reactivated it, but somehow the previous workflow information was read.
Solution: For workflows running in the Workflow Process Manager server component, reset
parameter <Workflow Version Checking Interval>. By default it is 60 minutes.
3. Problem: When a workflow is triggered by the runtime event Display Applet, the workflow is triggered the first time but not subsequently. Why?
Solution: Since the DisplayApplet event is a UI event, and the default Web UI framework design is to use caching, the event only fires the first time a non-cached view is accessed. The workflow is triggered whenever the event fires, so it is working correctly. To make the workflow still fire in this scenario, you can explicitly set EnableViewCache to FALSE in the .cfg file.
4. Problem: If a buscomp has code on WriteRecord and the runtime event fires on Writerecord,
which occurs first? Solution: WriteRecord runtime event is in essence a Post-WriteRecord
event and will be fired AFTER the buscomp code is executed.
5. Problem: After you triggered workflow from a runtime event, you do not get the row-id of the
record on which the event occurred. Solution: Runtime event passes the row-id of the object
BO (i.e. primary BC) and not the row-id of the BC. Retrieve the row-id of the active BC using
searchspecs (e.g. Active_row-id (process property) = [Id] defined as Type = Expression and
BC = BC name)
6. Problem: Encountered the error <Cannot resume Process <x-xxxxx> for Object-id <x-xxxxx>.
Please verify that the process exists and has a waiting status.
Solution: This error typically occurs in the following scenario:
(1) A workflow instance is started and paused, waiting for a runtime event
(2) The runtime event fires. The workflow instance is resumed and run to completion.
(3) The runtime event fires for a second time. Workflow engine tries to resume the workflow
instance and fails, since the workflow instance is no longer in a Waiting state.
Deleting existing instances will not help. You should ignore the error message and proceed.
Steps (1)-(3) need to occur, in that order, in the same user session for the error message to be
reported. As a result, the error message would disappear when the application is re-started.
Solution: Verify that the workflow process exists, process status is set to Active, and the process
has not expired.
9. Problem: OMS-00107: (: 0) error code = 4300107, system error = 27869, msg1 = Could not
find 'Class' named 'Test Order Part A'
OMS-00107: (: 0) error code = 4300107, system error = 28078, msg1 = Unable to create the Business Service 'Test Order Part A'
Solution: Make sure at least one .srf file is copied to SIEBEL_INSTALL\objects\<lang> directory
Business challenge
Need to control field value changes for opportunities, service requests, and
activities
Need to allow only certain positions to change field values
Do not allow certain positions to change the status, or do not allow
the status change
Solution
Use State Model
About HaleyAuthority
Haley Systems, Inc. has been the recognized global leader in rule-based
programming, as well as a leading expert in automating managed knowledge,
since 1989
Set of tools for modeling business policies as rule statements in English, without
the need to employ programming languages
Allows you to test, implement, and deploy rule statements. The statements are executed by Haley's inference engine
Advantages of HaleyAuthority
Advantage of Rules
To remove an old policy, delete a rule. You can change a policy freely without
fear of damaging the rest of the program.
Haley Architecture
Uses client side configuration rather than repository based configuration and
compilation.
Proxy Service
Is a third party rules engine used to evaluate and execute the business rules at
runtime.
Accessed by calling the Business Rules Service business service. It serves as the
interface to the inference engine.
Invoked using
Siebel Base
Tables
Siebel Application
External Data
Tables
External Application
Interface Tables
Store data for export outside the Siebel database
Data brought together to represent one or more base tables
Staging area for data
User Keys
Based on multiple columns, user keys are used to uniquely identify a row for
EIM.
Note: ROW_ID here is not the generated ROW_ID used on base tables
Start With T_
Used to hold temporary values and status during the processing step
Data Mapping
Run EIM
Verify Results
Data Mapping
Note: Some base tables may not be mapped to a corresponding interface table. In such cases, use Siebel VB to load data. Siebel VB works on the business object layer, while EIM works on the data layer.
What is Upgrade ?
Migrating the application from one version to another
How to Upgrade
Upgrade Application
This is the usual route undertaken by most upgrades, typically when the client wants essentially the same system as before, with minor enhancements.
Why to upgrade
To take advantage of additional functionality provided by the new version.
Benefits of Upgrading
New Functionality
Simpler Development
Upgrade Considerations
There are several areas to consider as you examine your upgrade options, including application functionality, technological enhancements, operational considerations, user support and change management, and ongoing support availability.
Assessment Deliverables
Upgrade Flow
BI Publisher
This major piece of functionality redefines the way users complete tasks.
Microsoft Integration
Overview
Task UI Concepts
Task Applet
Task View
Task Chapter
Task Group
Task Playbar
During task execution, transient data is stored in temporary storage called transient business components (TBCs).
When the task is cancelled, the transient data related to the task is removed from
temporary storage.
When the task is submitted, the transient data can be committed to the Siebel database using a commit step or a Siebel Operation step.
The Column and Join properties of a TBC are auto-populated and are not editable:
All columns are forced active at run time, to avoid field activation problems.
Task Applet
Task Applet is designed to interact with transient data in fields of TBCs, rather
than standard fields in a regular BC.
Note:
-
These grid templates differ from standard Web templates in that they do
not display the applet title or title menu.
The applets in a task view can be either standard applets or task applets
Task View
The Task View is a view made up of task applets and/or standard ad hoc applets, and contains a playbar for the user to navigate forward and backward through a task.
An applet menu
The standard record controls such as New, Delete, and Query
Task Chapter
A task chapter is a list of task steps, grouped under a common display name (the
chapter name).
The task chapter allows you to define a logical grouping of task steps, and
displays the chapter name alongside with the task view names in the current task
pane.
Task Group
Represents a collection of related tasks that can be displayed as a set in the task
pane
Task Playbar
The Task Playbar is an applet containing buttons that allow the end user to control
the execution of the task
iHelp
Why?
Allows you to embed view navigation links and highlight important fields and buttons.
The iHelp pane appears in the left side of the application window
iHelp Pane lists all the iHelp items related to the current screen
The iHelp pane remains open until you close it, even if you navigate to another screen.
Choose Navigate > Site Map > iHelp Map > iHelp Map
Designing iHelp
Overview
Siebel Smart Script allows business analysts, call center managers, and Siebel
developers to create scripts to define the application workflow for interactive
customer communications.
The script determines the flow of the interaction, not the agent or customer.
Software-controlled workflow
Personalized interaction.
Dynamic updating.
Dashboard.
Script
Page
Question
Answer
Translation
Branch
Create Answers
Create Pages
Introduction
The Data Validation Manager business service can validate business component
data based on a set of rules.
The validation rules are defined using the Siebel Query Language and can be
conveniently constructed using the Personalization Business Rules Designer.
The business service centralizes the creation and management of data validation
rules without requiring extensive Siebel Tools configuration and does not require
the deployment of a new SRF.
The Data Validation Manager business service reduces the need for custom
scripts, decreases implementation costs, and increases application performance.
Search automatically for the proper rule set to execute based on active business
objects and views.
Write validation rules that operate on dynamic data supplied at run time together
with data from business component fields.
Security
Resources
Personal
Competitive
Segregate data
Sharing data
Management Process
Personal
Position
Responsibility
Organization
Access Group
Authorizations to resources
Person
Position
Role
Organization
Division
Maps to a company's physical structure
Organization Drives data visibility & company reporting process
Account
An external company
User List
Household
Categorization
Contact
Any individual person
User Contact with an application login
Employee
User who is associated with an internal position
Partner User User who is associated with an external position
Catalog
Group of people
Category
[Diagram: a Product catalog containing Hardware and Software categories with data items, plus CRM and ERM categories with their own data items]
Organizational Entities
Position
Division
Organization
Access Group
[Diagram: a position hierarchy (Director, Managers, Product Specialists) and an organization hierarchy (WW Org with NA Org and EMEA Org), with Sales and Marketing groupings]
Responsibility
Position
Role
Sub-Organization Visibility
Benefits
Access Group
Possible members
Organization, Position, User List, Household, Role
Ad-hoc
Can be hierarchical
Overview of EAI
Users want:
Seamless access to all business data
A consistent, known user interface
Reliable data
To avoid reentering data in multiple applications
Integration Approaches
[Diagram: the Siebel application and an external application, each layered into user interface, business logic, and raw data]
Object Interfaces
Workflow for EAI
EAI Dispatch Service
Virtual Business Components
eBusiness Connectors
Enterprise Integration Manager (EIM)
[Diagram: Siebel EAI encompassing Workflow for EAI, the Dispatch Service, eBusiness Connectors, VBCs, EIM, and Object Interfaces]
Business services:
Map, transform, and transport data between applications
Implement both pre-built methods and custom scripting
Workflows:
Connect business services and other elements in sequences
Run in the object manager
eBusiness Connectors
Provide end-to-end integration between Siebel Applications and other
applications like Oracle and SAP R/3
Example: Exchange orders between Siebel front-office and
SAP R/3 back-office applications
Example: Each week the application captures mainframe updates and runs
a batch job to synchronize the Siebel account data
Object Interfaces
Expose Siebel objects to programmatic access from Siebel Visual Basic scripts,
eScripts, or external applications
Enable external applications to control the Siebel application or access the Siebel
database using:
COM Servers: Automation Server, Data Server
CORBA Object Manager
Java Data Bean
Example: A button in an Excel
spreadsheet calls the Siebel
COM Data Server to update
Siebel contact data from
Excel values
Abstraction: Object Interfaces are less abstract; Workflow is more abstract.
Data volume: Object Interfaces and Workflow are suited to around 100 records; EIM to around 1,000,000 records.
Immediacy: EIM runs overnight; Workflow takes seconds; Object Interfaces are very fast.
Consider:
How you want to control the application
Do you want to initiate processing or direct the user interface?
The importance of the Siebel application user context
Which view is active?
Consider:
How often does this need to occur?
What is the volume of data?
Is this outbound only?
Is real-time or batch processing preferred?
Business Services
Are the main building blocks of Siebel workflows
Contain prebuilt Siebel methods (global procedures) written in C++
Can also contain custom scripts written in eScript or Visual Basic
Analogy: a calculator
Property Set
Business service methods receive and send data instances in property sets
Method
Can be invoked from:
A workflow process
A method from another business service
A user interface event
A Siebel object interface (COM, CORBA, Java)
A built-in script
An external program
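Property sets are hierarchical containers of a type, named properties, and child sets. The real API lives inside the Object Manager, so this minimal JavaScript stand-in only mirrors the method names (SetType, SetProperty, AddChild) to show the shape of the data a business service method receives:

```javascript
// Minimal stand-in for Siebel's property set: named properties plus child sets.
function PropertySet() {
  this.type = "";
  this.props = {};
  this.children = [];
}
PropertySet.prototype.SetType = function (t) { this.type = t; };
PropertySet.prototype.GetType = function () { return this.type; };
PropertySet.prototype.SetProperty = function (name, value) { this.props[name] = value; };
PropertySet.prototype.GetProperty = function (name) { return this.props[name]; };
PropertySet.prototype.AddChild = function (child) { this.children.push(child); };
PropertySet.prototype.GetChild = function (i) { return this.children[i]; };

// Build the kind of hierarchy a business service method receives as input:
// a parent set describing the message, child sets carrying the records.
var input = new PropertySet();
input.SetType("Account");
var row = new PropertySet();
row.SetProperty("Name", "Acme Corp");
row.SetProperty("Location", "HQ");
input.AddChild(row);
```

The same shape is used on output, which is why business services compose so easily in workflows: one method's output set is the next method's input set.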
# / Business Service / Source / Destination / Direction:
1. EAI Siebel Adapter: Property Set <-> Siebel Database (2-Way)
2. EAI Data Mapper: Property Set <-> Property Set (2-Way)
3. XML Converter: Property Set <-> XML (Stream) (2-Way)
4. XML Hierarchy Converter: Property Set <-> XML (Stream) (2-Way)
5. EAI XML Converter: Property Set <-> XML (Stream) (2-Way)
6. EAI XML Read From File: XML (File) -> Property Set (Inbound)
7. EAI XML Write To File: Property Set -> XML (File) (Outbound)
8. EAI File Transport: XML (File) <-> XML (File) (2-Way)
9. EAI MQSeries Transport: XML (Stream) <-> MQSeries Queue (2-Way)
10. EAI MSMQ Transport: XML (Stream) <-> MSMQ Queue (2-Way)
11. EAI HTTP Transport: XML (Stream) <-> HTTP Port (Outbound)
Example: The XML Converter transforms Siebel data into XML that an external
application can process
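What such a converter does can be approximated in a few lines: walk a hierarchical name/value structure and emit XML. This is a deliberate simplification of the real EAI XML Converter, which also handles escaping, encodings, and integration-object metadata; the node shape below matches the property-set idea, not the actual Siebel API.

```javascript
// Simplified sketch of property-set-to-XML conversion: the node type becomes
// the element name, properties become attributes, children nest as elements.
function toXml(node) {
  var attrs = "";
  for (var name in node.props) {
    // Spaces are not legal in attribute names, so replace them.
    attrs += " " + name.replace(/ /g, "_") + "=\"" + node.props[name] + "\"";
  }
  var inner = "";
  for (var i = 0; i < node.children.length; i++) {
    inner += toXml(node.children[i]);
  }
  return "<" + node.type + attrs + ">" + inner + "</" + node.type + ">";
}

var account = {
  type: "Account",
  props: { "Name": "Acme Corp" },
  children: [
    { type: "Contact", props: { "Last Name": "Smith" }, children: [] }
  ]
};

var xml = toXml(account);
// '<Account Name="Acme Corp"><Contact Last_Name="Smith"></Contact></Account>'
```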
Pre-built adapters include:
EAI XML Converter
XML Hierarchy Converter
XML Converter
EAI Siebel Adapter
EAI Data Mapping Engine
XML Gateway business service
Always set them to FALSE unless the field is used in script, a join, workflow, or EAI, and the field is not present in the UI.
Similarly, PDQ filters or sorts should not be based on non-indexed columns.
Search affects the WHERE clause and sort affects the ORDER BY clause of the SQL.
False: if the primary foreign key is NULL or invalid in the master record, the application performs a secondary query and sets the primary ID to NoMatchRowId or to the detail record's row ID.
True: when the application encounters a master record where the primary foreign key is NULL, NoMatchRowId, or invalid, it performs a secondary query.
Primary ID Field & Primary Join Property & Check No Match Property
Cloned objects should have their Upgrade Ancestor property set to gain the benefit of new functionality that may be applied, during a Siebel application upgrade, to the original object from which they were cloned.
Required property
Set the Required field property to TRUE for required fields instead of writing code.
BC Read Only Field & Field Read Only Field:fieldname & Parent
Read Only Field
Cloning objects
Is strongly discouraged!
Errors can occur after an upgrade because C++ code in the object's class refers to a column that does not exist in the custom object.
Field Validation
User Properties
Workflow
Personalization
Run-time Events
State Model
Have the Project team agree upon a standard way of naming variables so that the
scope and data type are identified easily. This significantly simplifies
maintenance and troubleshooting efforts.
Comment Code
Commenting code is a very good practice to explain the business purpose behind the code. At the onset of the project, project teams should agree upon a standard commenting practice to ensure consistency and simplify the maintenance effort.
Include a comment header at the top of each method with an explanation of the code and a revision history. Strictly maintain these headers so that they accurately reflect the script they describe. If you do not maintain them along with the code, they eventually become confusing, misleading, or incorrect.
One of the most common issues identified during script reviews is the
inappropriate use of object events. Placing code in the wrong event handler can
lead to altered data and may negatively impact performance.
exposes the calculated field, not the actual field, to the user. For other frequently fired events, look for a configuration alternative, and if none is available, make the script as simple as possible. One alternative for complex calculations is to create a calculated field that uses the InvokeServiceMethod function. This function allows you to call a business service through a calculated field and use the output value of the business service as the calculated field value. Note that you should not display this type of calculated field in a list applet, due to the potential performance impact.
Include Option Explicit in the <general> <declarations> section of every object containing Siebel VB code.
Option Explicit requires that you explicitly declare every variable. The compiler returns an error if the Siebel application uses a variable before declaration; thus
Commented Out
Set to Inactive
Never Called
If you want to keep a record of obsolete code before removing it, you can do an export from Siebel Tools to save a copy of the script. To export a script to a text file, open the script editing window for the object in question, then choose File > Export. The script for all methods on that object will be exported to a file of type .js if the script is written in eScript, or .sbl if it is written in Siebel VB.
Alternatively, you can create an archive file, of file type .sif, with the object containing the script. Archive files contain all property definitions for the object, whereas a .js or .sbl file contains only the script.
Error
Handling in Siebel VB
RaiseErrorText methods generate a server script exception, causing the script to stop executing at that point. Therefore, it is important to place any code that must execute before calls to these methods.
Catch Exceptions
Populates when the Siebel application encounters an error during runtime or when
the developer raises an exception using RaiseErrorText.
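The behavior can be sketched with a mock RaiseErrorText that throws, showing why cleanup code must sit before the call (or in a finally block) and how the raised text surfaces in a catch block. Only the function name mirrors Siebel; everything else is a stand-in.

```javascript
// Stand-in for TheApplication().RaiseErrorText(): it throws, so nothing
// after the call in the same block executes.
function RaiseErrorText(msg) {
  throw new Error(msg);
}

var cleanupRan = false;
var caughtMsg = "";

function validate(status) {
  try {
    if (status !== "Open") {
      RaiseErrorText("Status must be Open");
    }
    // Any code placed here never runs when the raise above fires.
  } finally {
    // Code that must always execute belongs before the raise or in finally.
    cleanupRan = true;
  }
}

try {
  validate("Closed");
} catch (e) {
  caughtMsg = e.message; // the raised text is available to the caller's catch
}
```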
In Browser Script, top is a shortcut to the top-level document. Using the top object, developers can write a browser script function once and call it from anywhere within the browser aspect of objects.
Note: Scripted objects have a server-side aspect, which can only call server script, and a browser script aspect, which can only call browser script. Thus the top object, being a browser script object, can only be referenced from browser script.
This is useful for any function that needs to interact programmatically with a client application or desktop and that also needs to be called from multiple places in the application.
Current (or UI) context deals with objects that the Siebel application created to support data currently available to the user.
New (or non-UI) context is a set of objects instantiated in script that have no tie to any objects or data that the user is currently viewing. Keeping these two straight is important because the user may see programmatic manipulations of data if you use the wrong context. For example, consider a script running in any event of the Contact business component that needs to get a reference to the Contact business component to do a comparison or look-up.
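Why the distinction matters can be shown with a mock business component: querying through the user's active instance moves the record the user sees, while a separately instantiated one does not. All names below are stand-ins, not the Siebel API.

```javascript
// Mock business component: a record list plus a pointer to the current record.
function MockBusComp(records) {
  this.records = records;
  this.currentIndex = 0;
}
MockBusComp.prototype.CurrentName = function () {
  return this.records[this.currentIndex];
};
MockBusComp.prototype.QueryTo = function (name) {
  this.currentIndex = this.records.indexOf(name); // moves the record pointer
};

var names = ["Adams", "Baker", "Clark"];

// UI context: the instance backing what the user currently sees ("Adams").
var activeBC = new MockBusComp(names);

// Wrong: looking up "Clark" through the active instance moves the user's record.
activeBC.QueryTo("Clark");
var wrongCurrent = activeBC.CurrentName(); // the UI jumped to "Clark"

// Right: a new, independent instance leaves the user's record untouched.
activeBC = new MockBusComp(names);
var lookupBC = new MockBusComp(names);
lookupBC.QueryTo("Clark");
var unchanged = activeBC.CurrentName(); // still "Adams"
```

In real eScript the equivalent of the second approach is instantiating the business object and component fresh rather than reusing the event's own instance.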
Id Field
Only use ActivateField if an ExecuteQuery statement follows it. As a standalone statement, ActivateField will not implicitly activate a non-activated field. ActivateField tells the Siebel application to include the database column in the next SQL statement it executes on the business component whose field was just activated.
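The activate-then-query contract can be mimicked with a mock in which only fields activated before ExecuteQuery are fetched. The method names mirror the Siebel calls; the behavior is deliberately simplified.

```javascript
// Mock: only fields activated before ExecuteQuery are included in the query,
// so GetFieldValue on a field that was never fetched returns nothing.
function MockBusComp(row) {
  this.row = row;          // what the database holds
  this.activated = {};     // columns requested for the next query
  this.fetched = {};       // what the last query actually brought back
}
MockBusComp.prototype.ActivateField = function (name) {
  this.activated[name] = true; // takes effect only at the next ExecuteQuery
};
MockBusComp.prototype.ExecuteQuery = function () {
  this.fetched = {};
  for (var name in this.activated) {
    this.fetched[name] = this.row[name]; // column included in the SQL
  }
};
MockBusComp.prototype.GetFieldValue = function (name) {
  return this.fetched.hasOwnProperty(name) ? this.fetched[name] : "";
};

var bc = new MockBusComp({ "Stock Level": "42" });

// ActivateField alone does nothing observable...
bc.ActivateField("Stock Level");
var beforeQuery = bc.GetFieldValue("Stock Level"); // ""

// ...it only matters because ExecuteQuery follows it.
bc.ExecuteQuery();
var afterQuery = bc.GetFieldValue("Stock Level"); // "42"
```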
View mode settings control the formation of the SQL WHERE clause that the Siebel application sends to the database, for example by using team or position visibility to limit the records available in the business component queried in the script. Setting a query to AllView visibility mode gives the user access to all records, which may differ from the view mode of the current view in the UI. For example, a user may have SalesRep visibility in the UI whereas the script gives the user All visibility. This would give the user access to records the user might not need to access or should not be able to access.
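The visibility effect can be mimicked with a filter over owned records. The mode names mirror Siebel's; the filter itself is a stand-in for the WHERE clause the application would generate.

```javascript
// Mock of view-mode visibility: SalesRepView restricts the query to the
// user's own records; AllView removes that restriction entirely.
var SalesRepView = "SalesRepView";
var AllView = "AllView";

function queryServiceRequests(records, mode, userId) {
  if (mode === SalesRepView) {
    return records.filter(function (r) { return r.owner === userId; });
  }
  return records.slice(); // AllView: every record is visible
}

var srs = [
  { id: "SR-1", owner: "SADMIN" },
  { id: "SR-2", owner: "JSMITH" },
  { id: "SR-3", owner: "SADMIN" }
];

// The UI may be showing SalesRep visibility (one record for JSMITH)...
var uiVisible = queryServiceRequests(srs, SalesRepView, "JSMITH");

// ...while a script querying in AllView silently exposes everything.
var scriptVisible = queryServiceRequests(srs, AllView, "JSMITH");
```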
In a loop be careful not to call a method more times than is necessary or the
number of method calls increases linearly with the number of loop iterations.
In the script below the number of method calls in the loop is 3n where n is
the number of iterations. Imagine where n = 100.
Since a conditional AND fails as soon as any condition does not evaluate to true, if the value is not TRUE then the entire if fails and the last condition is not evaluated. If it is TRUE, then it is automatically not an empty string.
Here is how to fix this script to be more efficient. This goes from 4n method
calls to 2n method calls where n is the number of iterations. A savings of
50%!
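The saving is easy to measure with an instrumented mock: counting calls to a stand-in GetFieldValue shows how hoisting loop-invariant calls out of the loop shrinks 3n calls to n + 2. The field names and record shape here are made up for the illustration.

```javascript
// Instrumented mock: count how many times GetFieldValue is invoked.
var callCount = 0;
function GetFieldValue(record, field) {
  callCount++;
  return record[field];
}

var records = [];
for (var i = 0; i < 100; i++) {
  records.push({ "Status": "Open", "Threshold": "5", "Count": String(i) });
}

// Inefficient: two loop-invariant calls repeated every iteration, 3n calls total.
callCount = 0;
for (var j = 0; j < records.length; j++) {
  var sStatus = GetFieldValue(records[0], "Status");       // invariant
  var sThreshold = GetFieldValue(records[0], "Threshold"); // invariant
  var sCount = GetFieldValue(records[j], "Count");         // varies per record
}
var slowCalls = callCount; // 3 * 100 = 300

// Efficient: hoist the invariant calls out of the loop, n + 2 calls total.
callCount = 0;
var sStatus2 = GetFieldValue(records[0], "Status");
var sThreshold2 = GetFieldValue(records[0], "Threshold");
for (var k = 0; k < records.length; k++) {
  var sCount2 = GetFieldValue(records[k], "Count");
}
var fastCalls = callCount; // 100 + 2 = 102
```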
Fields are queried multiple times for calculated fields, fields that have
their Force Active set to True, fields that have their Link Spec property set
to True and fields that are in the applet definitions.
The Order Entry Orders and Sync Period business components currently
have script in this event!
fieldname: Industry
fieldname: Main Fax Number
fieldname: Id
fieldname: Id
PreSetFieldValue
ChangeRecord
ShowControl
PreCanInvokeMethod
Setting a field's Required property to True should replace any script that does this. In the ValidateUpdateContactTransaction method of the Contact business component, the following script is superfluous, since the fields are already set to required and the message given does not add to the message the Siebel application already delivers by default.
if (LastName == "")
{
TheApplication().SetProfileAttr("AM05Pending", "N");
sMessageText += ((sMessageText=="") ? "" : "\n") + "\'Last Name\' is a required
field. Please enter a value for the field. ";
TheApplication().RaiseErrorText(sMessageText);
}
if (FirstName == "")
{
TheApplication().SetProfileAttr("AM05Pending", "N");
sMessageText += ((sMessageText=="") ? "" : "\n") + "\'First Name\' is a required
field. Please enter a value for the field. ";
TheApplication().RaiseErrorText(sMessageText);
}
The default message for a required field looks like the following, and since
both the Last Name and First Name fields are required out of the box,
script like this can be deleted.
function BusComp_PreNewRecord
/*=================================================================
-- Comments Section
Author:
Updated By: <name and date>
Description: Calls the DataSync_PreNewRecord() custom function.
=================================================================*/
try {
    // Local variable declarations
    var iReturn = ContinueOperation;
    var sExceptionMsg = "";
BusComp_PreSetFieldValue Example:
Code typically applies to a specific field
Perform field check before executing
The following is another example of code that executes on every invocation of the
method without necessity. It is from the ExplicWriteRecord method of the Account Entry
Applet - no buttons object.
var psIn, psOut, sStatus;
var sText = "", sCode = "", sTransNumber;
try
{
    // Template - variable declarations
    psIn  = TheApplication().NewPropertySet();
    psOut = TheApplication().NewPropertySet();
    sStatus = this.BusComp().GetFieldValue("Account Status");
    // Justin Kraus 4/5/02 Added RunAccountTransaction code
    var RunAccountTransaction = TheApplication().GetProfileAttr("RunAccountTransaction");
    if (RunAccountTransaction == "YES")
    {
        if (sStatus == "New")
            sTransNumber = "AM02";
        else
            sTransNumber = "AM03";
        this.InvokeMethod(sTransNumber, psIn, psOut); // Custom - transaction number
        sText = TheApplication().GetProfileAttr("DisplayText");
        sCode = TheApplication().GetProfileAttr("DisplayCode");
Wrapping a simple call such as GetProfileAttr in its own function can be done
wherever needed, but there is no benefit to doing so. It just introduces more
code and slows performance.
A good example is the AreValueOffersOnOrOff method in the Account Entry
Applet - Read Only object.
This method uses 44 lines to wrap a single call to GetProfileAttr.
Use ActivateMultipleFields
ActivateMultipleFields, GetMultipleFieldValues, and SetMultipleFieldValues are
new in Siebel 7 and can greatly reduce redundant lines.
// Activate the fields using the property set that holds the field names
lbc_account.ActivateMultipleFields(lPS_FieldNames);
lbc_account.ExecuteQuery(ForwardOnly);
if (lbc_account.FirstRecord())
{
    // Retrieve the values. This method acts somewhat like a business service
    // in that there is an input property set and an output property set; the
    // field values are returned in the second property set passed in.
    lbc_account.GetMultipleFieldValues(lPS_FieldNames, lPS_FieldValues);
    // Read the values from the output property set.
    ls_account_products = lPS_FieldValues.GetProperty("Account Products");
    ls_agreement_name   = lPS_FieldValues.GetProperty("Agreement Name");
    ls_project_name     = lPS_FieldValues.GetProperty("Project Name");
    ls_description      = lPS_FieldValues.GetProperty("Description");
    ls_name             = lPS_FieldValues.GetProperty("Name");
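The lPS_FieldNames property set used above is not shown in the excerpt. It can be built as follows (a sketch that runs only inside the Siebel eScript engine; the field list is taken from the excerpt):

```javascript
var lPS_FieldNames  = TheApplication().NewPropertySet();
var lPS_FieldValues = TheApplication().NewPropertySet();
// The property names are the business component fields to activate;
// the property values are unused here and left empty.
lPS_FieldNames.SetProperty("Account Products", "");
lPS_FieldNames.SetProperty("Agreement Name", "");
lPS_FieldNames.SetProperty("Project Name", "");
lPS_FieldNames.SetProperty("Description", "");
lPS_FieldNames.SetProperty("Name", "");
```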
Use the Associate method of the associate business component returned from
GetAssocBusComp.
This method automatically and correctly creates a row in the intersection
table.
Developers often do this with script when the Associate method is the way to
go.
var lBC_mvg       = this.GetMVGBusComp("Sales Rep");
var lBC_associate = lBC_mvg.GetAssocBusComp();
with (lBC_associate)
{
    ClearToQuery();
    SetSearchSpec("Id", SomeRowId);
    ExecuteQuery(ForwardOnly);
    if (FirstRecord())
        Associate(NewAfter);
}
Problem
A parent business component is queried and iterated through, and for each record a
child business component is queried.
Solution
Justify the business requirement and avoid this as much as possible. Also, when
creating a parent-child pair of business components from the same business object,
querying the parent automatically queries the child, so no separate query is necessary
for the child business component.
Example
bcOrder.ExecuteQuery(ForwardOnly);
if (bcOrder.FirstRecord())
{
    do {
        // ...
        // Query "Order Entry - Orders/Order Entry - Line Items" in the current BO
    }
    while (bcOrder.NextRecord());
}
Cache Data
Problem
Customers often execute the exact same SQL statements from various locations in
script. That generates an excessive number of scripting API calls and redundant
business component queries.
Solution
Cache a limited set of data within your script.
Gain
1) Many scripting API calls removed.
2) Redundant business component and SQL executions removed.
(!) Exceptions: when the data to be cached is too complex, too large, or too
dynamic.
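The idea can be sketched in plain JavaScript (the lookup function and its sample data are illustrative; in eScript the cache would typically live in a script-level variable or a profile attribute):

```javascript
// Illustrative lookup that stands in for an expensive business
// component query; queryCount tracks how often it actually runs.
var queryCount = 0;
function lookupAccountType(accountId) {
  queryCount++;
  var fakeDb = { "1-ABC": "Customer", "1-DEF": "Partner" }; // sample data
  return fakeDb[accountId];
}

// Cache layer: each distinct id is queried at most once.
var cache = {};
function cachedAccountType(accountId) {
  if (!(accountId in cache)) {
    cache[accountId] = lookupAccountType(accountId);
  }
  return cache[accountId];
}

cachedAccountType("1-ABC");
cachedAccountType("1-ABC"); // served from the cache, no new query
cachedAccountType("1-DEF");
console.log(queryCount); // 2
```

As the exceptions above note, this only pays off when the cached set is small, stable, and reused.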
Reuse objects
Problem
Some objects might be instantiated repeatedly in the same eScript.
Solution
Avoid excessive instantiation of objects. Try to reuse existing instances of objects
instead of creating new ones.
Exception
It is sometimes not possible to reuse objects, for instance UI-related objects where
reuse could result in a UI refresh.
Example
DON'T:
var boAcc1 = TheApplication().GetBusObject( "Account" );
var boAcc2 = TheApplication().GetBusObject( "Account" );
DO:
var boAcc1 = TheApplication().GetBusObject( "Account" );
var boAcc2 = boAcc1;
Always run EIM processes during off-peak hours if possible. This ensures that maximum
processing capacity is available for the EIM processes and also reduces the load for
connected users.
After every EIM run, check the status of the records being processed in the EIM tables using the following query.
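A representative status query (the interface table and batch number are illustrative; IF_ROW_STAT and IF_ROW_BATCH_NUM are the standard status and batch columns on EIM interface tables):

```sql
SELECT   IF_ROW_STAT, COUNT(*)
FROM     EIM_ACCOUNT            -- illustrative interface table
WHERE    IF_ROW_BATCH_NUM = 1   -- illustrative batch number
GROUP BY IF_ROW_STAT;
```

Rows with a status other than IMPORTED indicate records that need investigation before the batch is deleted.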
Set-based operations are faster than row-based operations because they process data
in sets of records rather than on a row-by-row basis. Therefore, for an initial load,
set-based processing should be selected. This can be done by setting SET BASED
LOGGING to TRUE.
Always delete batches from the interface tables upon completion of an EIM process.
Leaving old batches in the interface tables wastes space and can adversely affect
performance.
Complete testing is recommended to fine-tune your EIM process. Run a large number of
identical EIM jobs with similar data. This helps uncover incorrect data mappings and
also provides insight into the optimal sizing of the EIM batches.
Based on the business requirements, organizations should put the most heavily used
EIM tables and their corresponding indexes on different physical disks from the Siebel
base tables and indexes, because all of them are accessed simultaneously during EIM
operations.
Identify the most time-intensive SQL statements and create any additional indexes
necessary to improve the performance of these long-running statements.
During an initial EIM load, unnecessary indexes can be dropped, saving a significant
amount of time. Typically, for a target base table or parent table (such as
S_ORG_EXT) only the primary index and the unique indexes are needed, and for a
non-target base table or child table (such as S_ADDR_ORG) only the primary index,
the unique indexes, and the foreign key indexes are needed. All remaining indexes
can be dropped for the duration of the EIM import.
Excerpt from IFB using the ONLY BASE TABLES or IGNORE BASE TABLES parameter
Excerpt from IFB using the ONLY BASE COLUMNS or IGNORE BASE COLUMNS parameter
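A minimal IFB fragment using these parameters might look like the following (the section names, batch numbers, and table and column names are illustrative):

```ini
[Import Accounts]
TYPE = IMPORT
BATCH = 1
TABLE = EIM_ACCOUNT
ONLY BASE TABLES = S_ORG_EXT

[Import Addresses]
TYPE = IMPORT
BATCH = 1
TABLE = EIM_ADDR_ORG
IGNORE BASE COLUMNS = S_ADDR_ORG.COMMENTS
```

The first section restricts processing to the listed base table; the second skips specific base columns instead.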
TRIM SPACES. This option is used for the IMPORT process only. It specifies whether
the character columns in the interface tables should have trailing spaces removed
before importing. The default value is TRUE. This setting saves disk space and buffer
pool space for the tablespace data.
INSERT ROWS. This option allows inserts to be suppressed when the base table is
already fully loaded and the table is the primary table for an EIM interface table
used to load and update other tables. The command format is INSERT ROWS = <table
name>, FALSE.
UPDATE ROWS. This option allows updates to be suppressed when the base table is
already fully loaded and does not require updates such as foreign key additions, but
the table is the primary table for an EIM interface table used to load and update
other tables. The command format is UPDATE ROWS = <table name>, FALSE.
Excerpt from IFB using INSERT ROWS and UPDATE ROWS parameter
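A minimal IFB fragment using these parameters might look like the following (the section name, batch number, and table names are illustrative; per the text above, FALSE suppresses the corresponding operation on the named base table):

```ini
[Update Accounts]
TYPE = IMPORT
BATCH = 2
TABLE = EIM_ACCOUNT
INSERT ROWS = S_ORG_EXT, FALSE
UPDATE ROWS = S_ORG_EXT, TRUE
```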
SQLPROFILE. This option greatly simplifies the task of identifying the most
time-intensive SQL statements. It places the most time-intensive SQL statements in
the file specified.
Excerpt from IFB using SQLPROFILE parameter
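A minimal IFB fragment using this parameter might look like the following (the process name and output file path are illustrative):

```ini
[Siebel Interface Manager]
USER NAME = "SADMIN"
PASSWORD = "SADMIN"
PROCESS = Import Accounts
SQLPROFILE = C:\temp\eimsql.txt

[Import Accounts]
TYPE = IMPORT
BATCH = 1
TABLE = EIM_ACCOUNT
```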
The amount of information produced by an EIM task is controlled by three flags:
- Error flag: set to 1 to log details about rejected rows.
- SQL flag: set to 8 to show all SQL statements that make up the EIM task.
- Trace flag: a combination of the values 1, 2, 4, 8, and 32, each of which
enables a different category of trace information in the EIM log.
Listed below are different permutations of the above parameters that can be helpful
during tuning:
- Set the Error flag = 1, the SQL flag = 1, and the Trace flag = 1 at the start.
This setting shows errors and unused foreign keys.
- Set the Error flag = 1, the SQL flag = 8, and the Trace flag = 3.
These settings produce a log file with SQL statements, including how long each
statement took, which is very useful for optimizing SQL performance.
- Set the Error flag = 0, the SQL flag = 0, and the Trace flag = 1.
These settings produce a log file showing how long each EIM step took, which is
useful when determining the optimal batch size as well as monitoring for
deterioration.
- Set the Error flag = 1, the SQL flag = 8, and the Trace flag = 1.
These values are recommended for all normal diagnostic purposes during an EIM
process.
FILTER QUERY. This option is used for the IMPORT process only. It specifies a filter
query that runs before the import process and eliminates from further processing all
rows that fail the filter. The query expression should be a self-contained WHERE
clause expression without the WHERE keyword, and should use only unqualified column
names from the interface table or literal values, such as NAME IS NOT NULL.
An example of such a query is FILTER QUERY = (ACCNT_NUM > 1500)
USING SYNONYMS. This parameter is used for checking account synonyms during the
IMPORT process. The default setting is TRUE. When account synonyms are not needed,
set this parameter to FALSE. This saves processing time because queries that look up
account synonyms in the S_ORG_SYN table are not executed.
USE INDEX HINTS. This parameter applies to the Microsoft SQL Server and Oracle
database platforms only. The default setting is FALSE. It controls whether EIM
issues hints to the underlying database to improve performance and throughput. Test
EIM processing with both TRUE and FALSE to determine which setting provides better
performance for each of the respective EIM jobs.
BATCH SIZE. The number of rows processed in a single batch is called the batch size.
The optimal batch size varies depending on the amount of buffer cache available. To
reduce demands on resources and improve performance, smaller batch sizes should be
preferred. Keep the following points in mind when deciding on a batch size:
1. It should not be more than 100,000 rows.
2. For an initial load, it should be between 25,000 and 30,000 rows. For ongoing
loads, it should be between 2,500 and 10,000 rows with transaction logging, or
10,000 to 15,000 rows without transaction logging.
As far as possible, try to divide EIM batches into insert-only transactions and
update-only transactions. The following two IFB files demonstrate dividing EIM
batches into insert-only and update-only transactions.
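Representative IFB fragments for the two cases might look like the following (section names, batch numbers, and tables are illustrative; FALSE suppresses the corresponding operation on the named base table):

```ini
[Insert Accounts]
TYPE = IMPORT
BATCH = 1
TABLE = EIM_ACCOUNT
INSERT ROWS = S_ORG_EXT, TRUE
UPDATE ROWS = S_ORG_EXT, FALSE

[Update Accounts]
TYPE = IMPORT
BATCH = 2
TABLE = EIM_ACCOUNT
INSERT ROWS = S_ORG_EXT, FALSE
UPDATE ROWS = S_ORG_EXT, TRUE
```

The first section only inserts into S_ORG_EXT; the second only updates it.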
BATCH RANGE. Use batch ranges in the form BATCH = x-y. This makes it possible to run
with smaller batch sizes while avoiding the startup overhead on each batch. Note
that there is a limit of 1,000 batches that can be run in a single EIM process.
Excerpt from IFB using batch range
[Siebel Interface Manager]
USER NAME = "SADMIN"
PASSWORD = "SADMIN"
PROCESS = IMPORT EIM Type
[IMPORT EIM Type]
TYPE = IMPORT
BATCH = 1-10
TABLE = EIM_ACCOUNT
ONLY BASE TABLES = S_ORG_EXT
Import parents and children separately. Wherever possible, load data such as
Accounts, Addresses and Teams at the same time, using the same interface
table.
PARALLEL PROCESSING
For EIM jobs that have no interface or base tables in common, running them in
parallel can help increase the EIM throughput rate. No special setup is required for
this. For concurrent EIM processes that run against the same interface table,
however, the parallel EIM jobs must use different batch numbers, or the database
used must support row-level locking.
Running EIM tasks in parallel should be the last option for EIM optimization, as it
may cause a deadlock when multiple EIM processes access the same interface table
simultaneously. Before running tasks in parallel, check the value of the Maximum
Tasks parameter, which specifies the maximum number of tasks that can run at a time
for the EIM component.
This parameter can be found in the Server Administration screen under Server
Component Parameters, as shown below.
DISABLE TRIGGER
Database triggers can be disabled for the duration of large EIM loads; remember to
reapply them once the load completes (see the precautions below).
LOG TRANSACTIONS
This setting controls whether EIM logs transactions into the S_DOCK_TXN_LOG table
for the purpose of routing data to mobile web clients. The default value is FALSE.
If there are no mobile web clients, the default setting should remain. Even if there
are mobile web clients, set this parameter to FALSE during initial data loads: with
a large volume of data, it may take quite a long time for the Transaction Processor
and Transaction Router tasks to process the changes for each of the remote clients.
It is faster to extract the mobile clients after the data has been loaded and
assigned.
The value of this parameter can be changed from the System Preferences screen, as
shown below, or by setting the LOG TRANSACTIONS parameter in the IFB file.
VI. Precautions
Running EIM tasks in parallel should be the last option for tuning EIM, as it may
cause a deadlock when multiple EIM processes access the same interface table
simultaneously.
If you disabled database triggers while tuning EIM, reapply the triggers after
completing the EIM load.
If you dropped unnecessary indexes during the initial EIM load, remember to recreate
these indexes later in batch mode, using the parallel execution strategies available
for the respective database platform.
For mobile clients, when running large EIM processes, turn transaction logging back
on and re-extract the mobile clients after the data has been loaded and assigned.
After initial data loading is complete, if the architecture uses mobile web clients,
EIM should be run with row-by-row operations. This can be done by setting SET BASED
LOGGING to FALSE.
Remove any PRIMARY KEYS ONLY parameters from the EIM configuration file and avoid
using the UPDATE PRIMARY KEYS parameter.
Set the UPDATE STATISTICS parameter to FALSE when running parallel EIM processes on
a DB2 database.
If you plan to use multiple addresses for accounts, do not set the USING SYNONYMS
parameter to FALSE, because EIM will then not attach addresses to the appropriate
accounts.
GLOSSARY
Acceptance Testing
Formal testing conducted to enable a user, customer or other authorized entity to determine whether to accept a
system or component.
Actual Outcome
The behavior actually produced when the object is tested under specified conditions.
Ad hoc testing
Testing carried out using no recognized test case design technique
Agents
Self contained processes that run in the background on a client or server and that perform useful functions for a
specific user/owner. Agents may monitor exceptions based on criteria or execute automated tasks.
Aggregated Data
Data that results from applying a process to combine data elements. Data that is summarized.
Algorithm
A sequence of steps for solving a problem.
Alpha Testing
Simulated or actual operational testing at an in-house site not otherwise involved with the software developers
Application Portfolio
An information system containing key attributes of applications deployed in a company. Application portfolios are
used as tools to manage the business value of an application throughout its lifecycle.
Arc Testing
A test case design technique for a component in which test cases are designed to execute branch condition
outcomes
Artificial Intelligence (AI)
The science of making machines do things which would require intelligence if they were done by humans.
Asset
Component of a business process. Assets can include people, accommodation, computer systems, networks,
paper records, fax machines, etc.
Attribute
A variable that takes on values that might be numeric, text, or logical (true/false). Attributes store the factual
knowledge in a knowledge base.
Automated Testing
The use of software to perform or support test activities, e.g. test management, test design, test execution and
results checking.
Automation
There are many factors to consider when planning for software test automation. Automation changes the
complexion of testing and the test organization from design through implementation and test execution. There are
tangible and intangible elements and widely held myths about benefits and capabilities of test automation.
Availability
Ability of a component or service to perform its required function at a stated instant or over a stated period of time.
It is usually expressed as the availability ratio, i.e. the proportion of time that the service is actually available for
use by the Customers within the agreed service hours.
B
Backward chaining
The process of determining the value of a goal by looking for rules that can conclude the goal. Attributes in the
premise of such rules may be made sub goals for further search if necessary.
Balanced Scorecard
An aid to organizational performance management. It helps to focus, not only on the financial targets but also on
the internal processes, customers and learning and growth issues.
Baseline
A snapshot or a position which is recorded. Although the position may be updated later, the baseline remains
unchanged and available as a reference of the original state and as a comparison against the current position.
Basic Block
A sequence of one or more consecutive, executable statements containing no branches
Basic test set
A set of test cases derived from the code logic which ensure that 100% branch coverage is achieved
Behavior
The combination of input values and preconditions and the required response for a function of a system. The full
specification of a function would normally comprise one or more behaviors
Beta Testing
Operational testing at a site not otherwise involved with the software developers
Big-bang testing
Integration testing where no incremental testing takes place prior to all the system's components being combined
to form the system
Black Box Testing
Test case selection based on an analysis of the specification of the component without reference to its internal
workings
Blackboard
A hierarchically organized database which allows information to flow both in and out from the knowledge sources.
Bottom up testing
An approach to integration testing where the lowest level components are tested first, then used to facilitate the
testing of higher level components.
Branch
A conditional transfer of control from any statement to any other statement in a component, or an unconditional
transfer of control from any statement to any other statement in the component except the next statement, or
Branch Condition Combination Testing
A test case design technique in which test cases are designed to execute combinations of branch condition outcomes
C
CAST
Computer Aided Software Testing
Capture / Playback Tool
A test tool which records test input as it is sent to the software under test. The input cases stored can then be
used to design test cases
Case-Based Reasoning (CBR)
A problem-solving system that relies on stored representations of previously solved problems and their solutions.
Cause Effect Graph
A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used
to design test cases
Cause Effect Graphing
A test case design technique in which test cases are designed by consideration of cause effect graphs
Certainty processing
Allowing confidence levels obtained from user input and rule conclusions to be combined to increase the overall
confidence in the value assigned to an attribute.
Certification
The process of confirming that a system or component complies with its specified requirements and is acceptable
for operational use
Change Control
The procedure to ensure that all changes are controlled, including the submission, analysis, decision making,
approval, implementation and post implementation of the change.
Charter
A statement of test objectives, and possibly test ideas. Test charters are amongst other used in exploratory testing.
Classes
A category that generally describes a group of more specific items or objects.
Clause
One expression in the If (premise) or Then (consequent) part of a rule. Often consists of an attribute name followed
by a relational operator and an attribute value.
Code Coverage
An analysis method that determines which parts have not been executed and therefore may require additional
attention
Code based testing
Designing tests based on objectives derived from implementation such as tests that execute specific control flow
paths or use specific data items
Compatibility testing
Testing whether the system is compatible with other systems with which it should communicate
Component
A minimal software item for which a separate specification is available
Component Testing
The testing of individual software components
Conclusion / Consequent
The Then part of a rule, or one clause or expression in this part of the rule.
Condition
A Boolean expression containing no Boolean operators.
Condition Outcome
The evaluation of a condition to TRUE or FALSE
Confidence / Certainty factor
A measure of the confidence assigned to the value of an attribute. Often expressed as a percentage (0 to 100%) or
probability (0 to 1.0). 100% or 1.0 implies that the attribute's value is known with certainty.
Configuration Item (CI)
Component of an infrastructure - or an item, such as a Request For Change, associated with an infrastructure that is (or is to be) under the control of Configuration Management. It may vary widely in complexity, size and type,
from an entire system (including all hardware, software and documentation) to a single module or a minor
hardware component.
Configuration Management
The process of identifying and defining Configuration Items in a system.
E.g. recording and reporting the status of Configuration Items and Requests For Change, and verifying the
completeness and correctness of Configuration Items.
Conformance Criterion
Some method of judging whether or not the component's action on a particular specified input value conforms to
the specification
Conformance Testing
The process of testing that an implementation conforms to the specification on which it is based
Contingency Planning
Planning to address unwanted occurrences that may happen at a later time. Traditionally, the term has been used
to refer to planning for the recovery of IT systems rather than entire business processes.
Control Flow
An abstract representation of all possible sequences of events in a program's execution
Control Flow Graph
The diagrammatic representation of the possible alternative control flow paths through a component
Control information
Elements of a knowledge base other than the attributes and rules that control the user interface, operation of the
inference engine and general strategies employed in implementing a consultation with an expert system.
Conversion Testing
Testing of programs or procedures used to convert data from existing systems for use in replacement systems
Correctness
The degree to which software conforms to its specification
Coverage
The degree, expressed as a percentage, to which a specific coverage item has been exercised by a test case suite
Coverage Item
An entity or property used as a basis for testing coverage, e.g. branches or decision outcomes
D
Data Definition
An executable statement where a variable is assigned a value
Data Definition C-use coverage
The percentage of data definition C-use pairs in a component exercised by a test case suite
Data Definition C-use pair
A data definition and computation data use, where the data use uses the value defined in the data definition
Data Dictionary
A database about data and database structures. A catalog of all data elements, containing their names, structures,
and information about their usage.
Data Flow Testing
Testing in which test cases are designed based on variable usage within the code
Data Mining
Extraction of useful information from data sets. Data mining serves to find information that is hidden within the
available data.
Data Use
An executable statement where the value of a variable is accessed
Debugging
The process of finding and removing the causes of failures in software
Decision
A program point at which the control flow has two or more alternative routes. The choice of one from among a
number of alternatives; a statement indicating a commitment to a specific course of action.
Decision Condition
A condition within a decision
Decision Coverage
The percentage of decision outcomes exercised by a test case suite
Decision Outcome
The result of a decision which therefore determines the control flow alternative taken
Delta Release
A delta, or partial, Release is one that includes only those CIs within the Release unit that have actually changed
or are new since the last full or delta Release. For example, if the Release unit is the program, a Delta Release
contains only those modules that have changed, or are new, since the last Full Release of the program or the last
delta Release of certain modules.
Depth first search.
A search strategy that backtracks through all of the rules in a knowledge base that could lead to determining the
value of the attribute that is the current goal or sub goal.
Descriptive Model
Physical, conceptual or mathematical models that describe situations as they are or as they actually appear.
Design Based Testing
Designing tests based on objectives derived from the architectural or detail design of the software. This is excellent
for testing the worst case behavior of algorithms
Design specification
A document specifying the test conditions (coverage items) for a test item, the detailed test approach and
identifying the associated high level test cases.
Desk Checking
The testing of software by the manual simulation of its execution
Deterministic Model
Mathematical models that are constructed for a condition of assumed certainty. The models assume there is only
one possible result (which is known) for each alternative course or action.
Dirty Testing
Testing which demonstrates that the system under test does not work
Domain Expert
A person who has expertise in a specific problem domain.
Downtime
Total period that a service or component is not operational within agreed service times.
E
Error
A human action producing an incorrect result
Error Guessing
A test case design technique where the experience of the tester is used to postulate which faults might occur and
to design tests specifically to expose them
Evaluation report
A document produced at the end of the test process summarizing all testing activities and results. It also contains
an evaluation of the test process and lessons learned.
Executable statement
A statement which, when compiled, is translated into object code, which will be executed procedurally when the
program is running and may perform an action on program data.
Exhaustive testing
A test case design technique in which the test case suite comprises all combinations of input values and
preconditions.
Exit point
The last executable statement within a component.
Expected outcome
The behavior predicted by the specification of an object under specified conditions.
Expert system
A domain specific knowledge base combined with an inference engine that processes knowledge encoded in the
knowledge base to respond to a user's request for advice.
Expertise
Specialized domain knowledge, skills, tricks, shortcuts and rules-of-thumb that provide an ability to rapidly and
effectively solve problems in the problem domain.
F
Failure
A fault, if encountered, may cause a failure, which is a deviation of the software from its expected delivery or
service
Fault
A manifestation of an error in software (also known as a defect or a bug)
Firing a rule
A rule fires when the if part (premise) is proven to be true. If the rule incorporates an else component, the rule also
fires when the if part is proven to be false.
Fit for purpose testing
Validation carried out to demonstrate that the delivered system can be used to carry out the tasks for which it was
acquired.
Forward chaining
Applying a set of previously determined facts to the rules in a knowledge base to see if any of them will fire.
Full Release
All components of the Release unit that are built, tested, distributed and implemented together. See also delta
Release.
Functional specification
The document that describes in detail the characteristics of the product with regard to its intended capability.
Fuzzy variables and fuzzy logic
Variables that take on multiple values with various levels of certainty and the techniques for reasoning with such
variables.
G
Genetic Algorithms
Search procedures that use the mechanics of natural selection and natural genetics. It uses evolutionary
techniques, based on function optimization and artificial intelligence, to develop a solution.
Geographic Information Systems (GIS)
A support system which represents data using maps.
Glass box testing
Testing based on an analysis of the internal structure of the component or system.
Goal
A designated attribute: determining the values of one or more goal attributes is the objective of interacting with a
rule based expert system.
The solution that the program is trying to reach.
Goal directed
The process of determining the value of a goal by looking for rules that can conclude the goal. Attributes in the
premise of such rules may be made sub goals for further search if necessary.
Graphical User Interface (GUI)
A type of display format that enables the user to choose commands, start programs, and see lists of files and other
options by pointing to pictorial representations (icons) and lists of menu items on the screen.
H
Harness
A test environment comprised of stubs and drivers needed to conduct a test.
Heuristics
The informal, judgmental knowledge of an application area that constitutes the "rules of good judgment" in the
field.
Heuristics also encompass the knowledge of how to solve problems efficiently and effectively, how to plan steps in
solving a complex problem, how to improve performance, etc
I
ITIL
IT Infrastructure Library (ITIL) is a consistent and comprehensive documentation of best practice for IT Service
Management.
ITIL consists of a series of books giving guidance on the provision of quality IT services, and on the
accommodation and environmental facilities needed to support IT.
Inference
New knowledge inferred from existing facts.
Inference engine
Software that provides the reasoning mechanism in an expert system. In a rule based expert system, typically
implements forward chaining and backward chaining strategies.
Infrastructure
The organizational artifacts needed to perform testing, consisting of test environments, test tools, office
environment and procedures.
Inheritance
The ability of a class to pass on characteristics and data to its descendants.
Input
A variable (whether stored within a component or outside it) which is read by the component
Input Domain
The set of all possible inputs
Input Value
An instance of an input
Integration testing
Testing performed to expose faults in the interfaces and in the interaction between integrated components.
Intelligent Agent
Software that is given a particular mission, carries out that mission, and then reports back to the user.
Interface testing
Integration testing where the interfaces between system components are tested.
Isolation testing
Component testing of individual components in isolation from surrounding components, with surrounding
components being simulated by stubs.
Item
The individual element to be tested. There usually is one test object and many test items.
K
KBS
Knowledge-Based System.
Key Performance Indicator
The measurable quantities against which specific Performance Criteria can be set.
Knowledge
Knowledge refers to what one knows and understands.
Knowledge Acquisition
The gathering of expertise from a human expert for entry into an expert system.
Knowledge Representation
The notation or formalism used for coding the knowledge to be stored in a knowledge-based system.
Knowledge base
The encoded knowledge for an expert system. In a rule-based expert system, a knowledge base typically
incorporates definitions of attributes and rules along with control information.
Knowledge engineering
The process of codifying an expert's knowledge in a form that can be accessed through an expert system.
Knowledge-Based System
A domain specific knowledge base combined with an inference engine that processes knowledge encoded in the
knowledge base to respond to a user's request for advice.
Known Error
An incident or problem for which the root cause is known and for which a temporary Work-around or a permanent
alternative has been identified
L
Lifecycle
A series of states connected by allowable transitions.
Linear Programming
A mathematical model for optimal solution of resource allocation problems.
Log
A chronological record of relevant details about the execution of tests.
Logging
The process of recording information about tests executed into a test log.
M
Maintainability
The ease with which the system/software can be modified to correct faults, modified to meet new requirements,
modified to make future maintenance easier, or adapted to a changed environment.
Maintainability testing
Testing to determine whether the system/software meets the specified maintainability requirements.
Management
The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.
Metric
Measurable element of a service process or function.
Mutation analysis
A method to determine test case suite thoroughness by measuring the extent to which a test case suite can
discriminate the program from slight variants (mutants) of the program.
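The idea can be sketched by hand (real mutation tools generate the variants automatically). Here a hypothetical addition function is mutated by swapping `+` for `*`; a test suite is adequate only to the extent that it "kills" such mutants by producing a different result.

```python
def original(a, b):
    return a + b

def mutant(a, b):
    return a * b  # '+' mutated to '*' (a slight variant of the program)

# Weak suite: for (2, 2) the sums and products coincide, so the mutant survives.
suite = [((2, 2), 4)]

def kills(fn, tests):
    # A suite kills a mutant if any test distinguishes it from the original.
    return any(fn(*args) != expected for args, expected in tests)

print(kills(mutant, suite))                    # False: mutant survives
print(kills(mutant, suite + [((1, 3), 4)]))    # True: the new case kills it
```

A surviving mutant signals a gap in the suite's discriminating power, which is exactly what mutation analysis measures.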
N
N-Transitions
A sequence of N+1 transitions.
Natural Language Processing (NLP)
A Computer system to analyze, understand and generate natural human-languages.
Negative Testing
Testing which demonstrates that the system under test does not work.
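A small sketch of a negative test, using a hypothetical age-setting routine: the test passes only when the software correctly rejects invalid input.

```python
def set_age(age):
    # Hypothetical unit under test: must reject out-of-range ages.
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

def negative_test():
    try:
        set_age(-1)        # invalid input deliberately supplied
    except ValueError:
        return "pass"      # rejection is the expected behaviour
    return "fail"          # silently accepting bad data would be the defect

print(negative_test())
```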
Neural Network
A system modeled after the neurons (nerve cells) in a biological nervous system. A neural network is designed as
an interconnected system of processing elements, each with a limited number of inputs and outputs. Rather than
being programmed, these systems learn to recognize patterns.
Non Functional Requirements Testing
Testing of those requirements that do not relate to functionality, e.g. performance or usability.
Normalization
The process of reducing a complex data structure into its simplest, most stable structure. In general, the process
entails the removal of redundant attributes, keys, and relationships from a conceptual data model.
O
Object
A software structure which represents an identifiable item that has a well-defined role in a problem domain.
Object Oriented
An adjective applied to any system or language that supports the use of objects.
Objective
A reason or purpose for designing and executing a test.
Operational Testing
Testing conducted to evaluate a system or component in its operational environment
Oracle
A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test
Outcome
The actual or predicted outcome of a test
Output
A variable (whether stored within a component or outside it) which is written to by the component.
Output Domain
The set of all possible outputs.
Output Value
An instance of an output.
P
P-use
A data use in a predicate.
PRINCE2
Projects in Controlled Environments, is a project management method covering the organization, management and
control of projects. PRINCE2 is often used in the UK for all types of projects.
Page Fault
A program interruption that occurs when a page that is marked not in real memory is referred to by an active
page.
Pair programming
A software development approach whereby lines of code (production and/or test) of a component are written by
two programmers sitting at a single computer. This implicitly means ongoing real-time code reviews are performed.
Pair testing
Two testers work together to find defects. Typically, they share one computer and trade control of it while testing.
Partition Testing
A test case design technique for a component in which test cases are designed to execute representatives from
equivalence classes
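A sketch of the technique against a hypothetical grading function: one representative is chosen per equivalence class (below range, failing, passing, above range) instead of testing every possible score. The classes and boundaries here are assumed for illustration.

```python
def grade(score):
    # Hypothetical unit under test.
    if score < 0 or score > 100:
        return "invalid"
    return "pass" if score >= 50 else "fail"

# One representative value per equivalence class.
representatives = {
    "below range": (-5, "invalid"),
    "failing":     (30, "fail"),
    "passing":     (75, "pass"),
    "above range": (120, "invalid"),
}

for name, (value, expected) in representatives.items():
    assert grade(value) == expected, name
print("all partitions covered")
```

Four test cases stand in for the whole input domain, on the assumption that any member of a class behaves like its representative.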
Pass
A test is deemed to pass if its actual result matches its expected result.
Pass/fail criteria
Decision rules used to determine whether a test item (function) or feature has passed or failed a test.
Path
A sequence of events, e.g. executable statements, of a component or system from an entry point to an exit point.
Path coverage
The percentage of paths that have been exercised by a test suite.
Path sensitizing
Choosing a set of input values to force the execution of a given path.
Predicted outcome
The behavior predicted by the specification of an object under specified conditions.
Priority
The level of (business) importance assigned to an item, e.g. defect.
Problem Domain
A specific problem environment for which knowledge is captured in a knowledge base.
Problem Management
Process that minimizes the effect on Customer(s) of defects in services and within the infrastructure, human errors
and external events.
Process
A set of interrelated activities, which transform inputs into outputs.
Process cycle test
A black box test design technique in which test cases are designed to execute business procedures and
processes.
Production rule
Rules are called production rules because new information is produced when the rule fires.
Project
A project is a unique set of coordinated and controlled activities with start and finish dates, undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources.
Project test plan
A test plan that typically addresses multiple test levels.
Prototyping
A strategy in system development in which a scaled down system or portion of a system is constructed in a short
time, tested, and improved in several iterations.
Pseudo-random
A series which appears to be random but is in fact generated according to some prearranged sequence.
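This property is what makes randomized testing reproducible: seeding the generator replays the same "prearranged" series. A short sketch with Python's standard library generator:

```python
import random

def sequence(seed, n=5):
    # A seeded generator produces the same apparently random series every time.
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(n)]

print(sequence(42) == sequence(42))  # True: same seed, same series
```

In practice a failing randomized test is reported together with its seed, so the exact failing sequence can be regenerated on demand.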
Q
Quality
The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied
needs.
Quality assurance
Part of quality management focused on providing confidence that quality requirements will be fulfilled.
Quality attribute
A feature or characteristic that affects an item's quality.
Quality management
Coordinated activities to direct and control an organization with regard to quality. Direction and control with regard
to quality generally includes the establishment of the quality policy and quality objectives, quality planning, quality
control, quality assurance and quality improvement.
Query
Generically query means question. Usually it refers to a complex SQL SELECT statement.
Queuing Time
Queuing time is incurred when the device which a program wishes to use is already busy. The program therefore has to wait in a queue to obtain service from that device.
R
ROI
The return on investment (ROI) is usually computed as the benefits derived divided by the investments made. If we
are starting a fresh project, we might compute the value of testing and divide by the cost of the testing to compute
the return.
Random testing
A black box test design technique where test cases are selected, possibly using a pseudo-random generation
algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as
reliability and performance.
Re-testing
Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective
actions.
Recoverability
The capability of the software product to re-establish a specified level of performance and recover the data directly
affected in case of failure.
Regression testing
Testing of a previously tested program following modification to ensure that defects have not been introduced or
uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software
or its environment is changed.
Relational operator
Conditions such as is equal to or is less than that link an attribute name with an attribute value in a rule's premise
to form logical expressions that can be evaluated as true or false.
Release note
A document identifying test items, their configuration, current status and other delivery information delivered by
development to testing, and possibly other stakeholders, at the start of a test execution phase.
Reliability
Is the probability that software will not cause the failure of a system for a specified time under specified conditions
Repeatability
An attribute of a test indicating whether the same results are produced each time the test is executed.
Replaceability
The capability of the software product to be used in place of another specified software product for the same
purpose in the same environment.
Requirements testability
The degree to which a requirement is stated in terms that permit establishment of test designs (and subsequently
test cases) and execution of tests to determine whether the requirements have been met.
Requirements-based testing
An approach to testing in which test cases are designed based on test objectives and test conditions derived from
requirements.
e.g. tests that exercise specific functions or probe non-functional attributes such as reliability or usability.
Resource utilization
The capability of the software product to use appropriate amounts and types of resources. For example the
amounts of main and secondary memory used by the program and the sizes of required temporary or overflow
files, when the software performs its function under stated conditions.
Result
The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and
communication messages sent out.
Resumption criteria
The testing activities that must be repeated when testing is re-started after a suspension.
Review
A detailed check of the test basis to determine whether the test basis is at an adequate quality level to act as an
input document for the test process.
Reviewer
The person involved in the review who shall identify and describe anomalies in the product or project under review.
Reviewers can be chosen to represent different viewpoints and roles in the review process.
Risk
A factor that could result in future negative consequences; usually expressed as impact and likelihood.
Risk management
Systematic application of procedures and practices to the tasks of identifying, analyzing, prioritizing, and controlling
risk.
Robustness
The degree to which a component or system can function correctly in the presence of invalid inputs or stressful
environmental conditions.
Root cause
An underlying factor that caused a non-conformance and possibly should be permanently eliminated through
process improvement.
Rule
A statement of the form: if X then Y else Z. The if part is the rule premise, and the then part is the consequent. The else component of the consequent is optional. The rule fires when the if part is determined to be true.
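A toy forward-chaining sketch of rule firing, using an invented car-diagnosis knowledge base: each rule's premise is tested against working memory, and a firing rule produces a new fact, which may in turn enable further rules.

```python
# Working memory: the facts known so far (illustrative domain).
facts = {"battery_dead": True}

# Rules as (premise, consequent) pairs: "if premise then set attribute".
rules = [
    (lambda f: f.get("battery_dead"), ("needs_charge", True)),
    (lambda f: f.get("needs_charge"), ("action", "recharge battery")),
]

changed = True
while changed:                 # keep firing until no rule produces new information
    changed = False
    for premise, (attr, value) in rules:
        if premise(facts) and facts.get(attr) != value:
            facts[attr] = value    # the rule fires: new knowledge is produced
            changed = True

print(facts["action"])
```

The loop is the inference engine in miniature: rules fire, new facts appear, and chaining continues until the memory is stable.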
Rule Base
The encoded knowledge for an expert system. In a rule-based expert system, a knowledge base typically
incorporates definitions of attributes and rules along with control information.
S
Safety testing
The process of testing to determine the safety of a software product.
Scalability
The capability of the software product to be upgraded to accommodate increased loads.
Schedule
A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in
their context and in the order in which they are to be executed.
Scribe
The person who has to record each defect mentioned and any suggestions for improvement during a review
meeting, on a logging form. The scribe has to ensure that the logging form is readable and understandable.
Script
Commonly used to refer to a test procedure specification, especially an automated one.
Security
Attributes of software products that bear on its ability to prevent unauthorized access, whether accidental or
deliberate, to programs and data.
Severity
The degree of impact that a defect has on the development or operation of a component or system.
Simulation
The representation of selected behavioral characteristics of one physical or abstract system by another system.
Using a model to mimic a process.
Simulator
A device, computer program or system used during testing, which behaves or operates like a given system when
provided with a set of controlled inputs.
Smoke test
A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, without bothering with finer details. A daily build and smoke test is among industry best practices.
Stability
The capability of the software product to avoid unexpected effects from modifications in the software.
State diagram
A diagram that depicts the states that a component or system can assume, and shows the events or
circumstances that cause and/or result from a change from one state to another.
State table
A grid showing the resulting transitions for each state combined with each possible event, showing both valid and
invalid transitions.
State transition testing
A black box test design technique in which test cases are designed to execute valid and invalid state transitions.
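A sketch of the technique against an invented door/lock state machine: valid transition sequences are driven through the model, and an invalid event is confirmed to be rejected. States, events and the transition table are assumptions for illustration.

```python
# Transition table: (current state, event) -> next state.
TRANSITIONS = {
    ("closed", "open"):   "open",
    ("open", "lock"):     "locked",
    ("locked", "unlock"): "open",
}

def step(state, event):
    key = (state, event)
    if key not in TRANSITIONS:
        raise ValueError(f"invalid transition {key}")
    return TRANSITIONS[key]

# Valid test: drive a legal path through the machine.
state = "closed"
for event in ["open", "lock", "unlock"]:
    state = step(state, event)
print(state)  # open

# Invalid test: an undefined transition must be rejected.
try:
    step("closed", "lock")
except ValueError:
    print("invalid transition rejected")
```

Deriving cases from the table makes it easy to see which valid and invalid transitions the suite has exercised.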
Statement
An entity in a programming language, which is typically the smallest indivisible unit of execution.
Statement coverage
The percentage of executable statements that have been exercised by a test suite.
Statement testing
A white box test design technique in which test cases are designed to execute statements.
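The measurement can be sketched with a hand-instrumented function (real tools instrument the code automatically): each numbered statement records itself when executed, and coverage is the fraction of statements reached by the suite. The function and its numbering are illustrative.

```python
executed = set()  # which numbered statements the suite has exercised

def classify(n):
    executed.add(1)              # statement 1: always reached
    if n < 0:
        executed.add(2)          # statement 2: negative branch only
        return "negative"
    executed.add(3)              # statement 3: non-negative branch only
    return "non-negative"

classify(5)                      # a suite with a single test case...
print(round(len(executed) / 3 * 100))   # 67: one branch never executed

classify(-1)                     # ...extended to cover the other branch
print(round(len(executed) / 3 * 100))   # 100
```

The gap between 67% and 100% is exactly the information statement coverage provides: which code a suite has never run.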
Static analysis
Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts.
Statistical testing
A test design technique in which a model of the statistical distribution of the input is used to construct
representative test cases.
Strategy
A high-level document defining the test levels to be performed and the testing within those levels for a programme
(one or more projects).
Stress testing
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.
Structural coverage
Coverage measures based on the internal structure of the component.
Stub
A skeletal or special-purpose implementation of a software component, used to develop or test a component that
calls or is otherwise dependent on it. It replaces a called component.
Subgoal
An attribute which becomes a temporary intermediate goal for the inference engine. Subgoal values need to be
determined because they are used in the premise of rules that can determine higher level goals.
Suitability
The capability of the software product to provide an appropriate set of functions for specified tasks and user
objectives
Suspension criteria
The criteria used to (temporarily) stop all or a portion of the testing activities on the test items.
Symbolic Processing
Use of symbols, rather than numbers, combined with rules-of-thumb (or heuristics), in order to process information
and solve problems.
Syntax testing
A black box test design technique in which test cases are designed based upon the definition of the input domain
and/or output domain.
System
A collection of components organized to accomplish a specific function or set of functions.
System integration testing
Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data
Interchange, Internet).
T
Technical Review
A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. A
technical review is also known as a peer review.
Test Case
A specific set of test data along with expected results for a particular test condition
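A test case can be represented directly as data. The sketch below pairs a hypothetical discount rule (the unit under test, with assumed behaviour) with one test case: an identifier, the condition it probes, the specific input, and the expected result.

```python
def discount(total):
    # Hypothetical unit under test: 10% discount at or above 100.
    return 0.10 if total >= 100 else 0.0

# One test case: specific test data plus the expected result
# for a particular test condition (the 100 boundary).
test_case = {
    "id": "TC-001",
    "condition": "order total at the 100 boundary gets the discount",
    "input": 100,
    "expected": 0.10,
}

actual = discount(test_case["input"])
print("pass" if actual == test_case["expected"] else "fail")
```

Keeping cases in this form makes the expected result explicit, so pass/fail is a mechanical comparison rather than a judgment call.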
Test Maturity Model (TMM)
A five level staged framework for test process improvement, related to the Capability Maturity Model (CMM) that
describes the key elements of an effective test process.
Test Process Improvement (TPI)
A continuous framework for test process improvement that describes the key elements of an effective test process,
especially targeted at system testing and acceptance testing.
Test approach
The implementation of the test strategy for a specific project. It typically includes the decisions made that follow
based on the (test) projects goal and the risk assessment carried out, starting points regarding the test process
and the test design techniques to be applied.
Test automation
The use of software to perform or support test activities, e.g. test management, test design, test execution and
results checking.
There are many factors to consider when planning for software test automation. Automation changes the
complexion of testing and the test organization from design through implementation and test execution.
There are tangible and intangible elements and widely held myths about benefits and capabilities of test
automation.
Test case specification
A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution
preconditions) for a test item.
Test charter
A statement of test objectives, and possibly test ideas. Test charters are amongst other used in exploratory testing.
Test comparator
A test tool to perform automated test comparison.
Test comparison
The process of identifying differences between the actual results produced by the component or system under test
and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison)
or after test execution.
Test condition
An item or event of a component or system that could be verified by one or more test cases, e.g. a function,
transaction, quality attribute, or structural element.
Test data
Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the
component or system under test.
Test data preparation tool
A type of test tool that enables data to be selected from existing databases or created, generated, manipulated and
edited for use in testing.
Test design specification
A document specifying the test conditions (coverage items) for a test item, the detailed test approach and
identifying the associated high level test cases.
Test design tool
A tool that supports the test design activity by generating test inputs from a specification that may be held in a
CASE tool repository.
E.g. requirements management tool, or from specified test conditions held in the tool itself.
Test environment
An environment containing hardware, instrumentation, simulators, software tools, and other support elements
needed to conduct a test.
Test evaluation report
A document produced at the end of the test process summarizing all testing activities and results. It also contains
an evaluation of the test process and lessons learned.
Test execution
The process of running a test on the component or system under test, producing actual result(s).
Test execution phase
The period of time in a software development life cycle during which the components of a software product are
executed, and the software product is evaluated to determine whether or not requirements have been satisfied.
Test execution schedule
A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in
their context and in the order in which they are to be executed.
Test execution technique
The method used to perform the actual test execution, either manually or automated.
Test execution tool
A type of test tool that is able to execute other software using an automated test script, e.g. capture/playback.
Test harness
A test environment comprised of stubs and drivers needed to conduct a test.
Test infrastructure
The organizational artifacts needed to perform testing, consisting of test environments, test tools, office
environment and procedures.
Test item
The individual element to be tested. There usually is one test object and many test items.
Test level
A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a
project.
Examples of test levels are component test, integration test, system test and acceptance test.
Test log
A chronological record of relevant details about the execution of tests.
Test manager
The person responsible for testing and evaluating a test object; the individual who directs, controls, administers, plans and regulates the evaluation of a test object.
Test object
The component or system to be tested.
Test point analysis (TPA)
A formula based test estimation method based on function point analysis.
Test procedure specification
A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test
script.
Test process
The fundamental test process comprises planning, specification, execution, recording and checking for completion.
Test run
Execution of a test on a specific version of the test object.
Test specification
A document that consists of a test design specification, test case specification and/or test procedure specification.
Test strategy
A high-level document defining the test levels to be performed and the testing within those levels for a programme
(one or more projects).
Test suite
A set of several test cases for a component or system under test, where the post condition of one test is often used
as the precondition for the next one.
Test type
A group of test activities aimed at testing a component or system regarding one or more interrelated quality
attributes.
A test type is focused on a specific test objective, e.g. reliability test, usability test or regression test, and may take place on one or more test levels or test phases.
Testability
The capability of the software product to enable modified software to be tested.
Tester
A technically skilled professional who is involved in the testing of a component or system.
Testing
The process of exercising software to verify that it satisfies specified requirements and to detect faults.
Thread testing
A version of component integration testing where the progressive integration of components follows the
implementation of subsets of the requirements, as opposed to the integration of components by levels of a
hierarchy.
Top-down testing
An incremental approach to integration testing where the component at the top of the component hierarchy is
tested first, with lower level components being simulated by stubs.
Tested components are then used to test lower level components.
Traceability
The ability to identify related items in documentation and software, such as requirements with associated tests.
See also horizontal traceability, vertical traceability
U
Understandability
The capability of the software product to enable the user to understand whether the software is suitable, and how it
can be used for particular tasks and conditions of use.
Unit testing
The testing of individual software components.
Unreachable code
Code that cannot be reached and therefore is impossible to execute.
Unstructured Decisions
This type of decision situation is complex and no standard solutions exist for resolving the situation. Some or all of
the structural elements of the decision situation are undefined, ill-defined or unknown.
Usability
The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions.
Use case testing
A black box test design technique in which test cases are designed to execute user scenarios.
User acceptance testing
Formal testing conducted to enable a user, customer or other authorised entity to determine whether to accept a
system or component.
User test
A test whereby real-life users are involved to evaluate the usability of a component or system.
User-Friendly
An evaluative term for a system's user interface. The phrase indicates that users judge the user interface to be easy to learn, understand, and use.
V
V model
Describes how inspection and testing activities can occur in parallel with other development activities.
Validation
Correctness: determination of the correctness of the products of software development with respect to the user needs and requirements.
Verification
Completeness: the process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase.
Version Identifier
A version number; version date, or version date and time stamp.
Volume Testing
Testing where the system is subjected to large volumes of data.
W
Walkthrough
A step-by-step presentation by the author of a document in order to gather information and to establish a common
understanding of its content.
Waterline
The lowest level of detail relevant to the Customer.
What If Analysis
The capability of "asking" the software package what the effect will be of changing some of the input data or
independent variables.
White box test design technique
Documented procedure to derive and select test cases based on an analysis of the internal structure of a component or system.
White box testing
Testing based on an analysis of the internal structure of the component or system.
Workaround
Method of avoiding an incident or problem, either from a temporary fix or from a technique that means the
Customer is not reliant on a particular aspect of a service that is known to have a problem.
X
XML
Extensible Markup Language. XML is a set of rules for designing text formats that let you structure your data.
XML makes it easy for a computer to generate data, read data, and ensure that the data structure is unambiguous.
XML avoids common pitfalls in language design: it is extensible, platform-independent, and it supports
internationalization and localization.
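A small sketch of generating and re-reading such a text format with Python's standard library, illustrating the unambiguous structure the definition describes. The element and attribute names are invented for the example.

```python
import xml.etree.ElementTree as ET

# Generate a tiny XML document: a test log with one case.
root = ET.Element("testlog")
case = ET.SubElement(root, "case", id="TC-001")
case.text = "pass"

text = ET.tostring(root, encoding="unicode")
print(text)  # <testlog><case id="TC-001">pass</case></testlog>

# Parse it back: the structure round-trips without ambiguity.
parsed = ET.fromstring(text)
print(parsed.find("case").get("id"))  # TC-001
```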
Version
(Name)
Sign-Off List
(Name, Position)
Review List
In addition to the above: (Name, Position)
Distribution List
(Name, Position)
Related Documentation
(Ref, Title, Author, Version)
Open Issues
(Ref, Title, Author)
Outstanding Areas
(Title, Person responsible)
CONTENTS
1 Introduction
1.1 Document Purpose
1.2 Document Scope
1.3 Test Focus
1.4 Test Objectives
1.5 Dependencies and Assumptions
1.6 Testing Coverage & Traceability
2 Approach
2.1 Test Collateral
2.2 Data Requirements
2.3 Confidence Testing
2.4 Regression in System Test phase
2.5 Regression in System Integration Test phase
2.6 Performance Testing
3 Regression Test Scope
3.1 Location of Test Scripts
3.2 System Test Regression Summary
3.3 Core Siebel CRM 4.3 Functions/Features to be tested during ST
3.4 System Integration Test Regression Summary
3.5 Core Siebel CRM 4.3 Functions/Features to be tested during SIT
3.6 Legacy Interface Functions/Features to be tested
3.7 Performance Testing
3.8 Functions/Features not to be tested
3.9 Entry and Exit Criteria
3.10 Suspension and Resumption Criteria
4 Testing Tools and Techniques
5 Test Plan and Schedule
5.1 Resource Requirements
5.2 Test Plan
6 Risks and Contingencies
7 Appendix
7.1 Legacy Interface Explanation
7.2 Confidence Test Checklist
7.3 Performance KPIs
Introduction
Document Purpose
The purpose of this document is to provide a comprehensive description of the scope,
tasks and responsibilities of Regression Testing during System (ST) and System
Integration (SIT) phases of CRM 4.4.
Document Scope
The scope of this document is the regression testing activities to be planned and executed
by the IBM/IGSI CRM 4.4 test team. The document will describe the existing (CRM 4.3) functionality that will be proved by the regression test pack.
The document will not cover the areas of the CRM 4.3 solution that have been changed
as part of CRM 4.4 as it is concerned with validating areas of existing functionality and
not testing new areas. These areas will be covered by the ST and SIT Detailed Test Plans.
The regression plan supports both the CRM 4.4 Quality Plan and the overall GCAP Test Strategy by providing Touch and Pipe test coverage of Siebel and Legacy Interface functionality during the System Test and System Integration Test phases, thereby providing a robust platform for the detailed regression and End to End testing performed by Allied in the post-SIT test phase.
Test Focus
The regression phase for CRM 4.4 will be focussed on proving the 4.3 functionality that
has not been changed as part of the latest release. The focus of the 4.4 regression pack
will be on three groups of tests: Core 4.3 Siebel functionality; Legacy Interface tests;
Basic Performance Tests.
Test Objectives
The test objectives for the regression test phases can be described as follows:
To verify that existing functionality continues to work as specified. More specifically, to:
o Prove unchanged 4.3 functionality works and is stable
o Prove integration with legacy systems works and is stable
o Provide a robust basis for subsequent regression testing
Dependency 1 - The System Test environment will be available for limited Touch Testing targeted at key regression areas.
Dependency 2 - The Assembly Blue environment will be available for SIT with the
requisite levels of integration between Siebel and legacy, in order to run all the regression
tests.
Dependency 3 - Test data will have been set up in the system to allow the execution of the regression tests.
Approach
Test Collateral
The initial set of regression tests has been drawn from those prepared for the SIT phase of the CRM 4.3 release, which themselves were based on previous Reuters releases. This pack is supplemented by a subset of tests covering enhancements introduced in CRM 4.3, as prioritised by IGSI.
For the build of the regression pack a summary of CRM functional areas was used to confirm that
tests covered all major areas of functionality. See Sections 3.5 & 3.6 for functional areas. In total
around 45% of the enhancement tests for CRM 4.3 are being used to create the Regression test
pack for CRM 4.4. See Appendix for matrix of included/excluded tests.
In addition to the full regression set, a subset of high priority, rapidly executable confidence tests
will be identified to be run after every release to the test environments to ensure that basic
processes have not been impacted by the new code.
Eventually the regression test pack will incorporate tests which cover new areas of functionality
that have been tested during SIT to build a pack that can be used as the basis for a regression in
subsequent phases.
Data Requirements
The plan is for initial limited regression testing to be performed during the System Test
phase in the System Test environment, followed by broader regression coverage during
System Integration Test phase in the Assembly Blue environment. A recent cut of
production data will need to exist in both environments in order to run these tests. Some
tests will require new data to be created by the tester; those tests will have specific instructions within the test regarding the creation of data.
It is essential that in both environments a selection of User Ids is unlocked and available
for testing. It would be advantageous if these were dedicated test Ids with a variety of
Service, Sales and Client Training responsibilities. At least one Administrator Id and one
Id that has been added to the State Model are also required.
Confidence Testing
Experience has shown that confidence testing of each release has great benefits in terms
of identifying any problems with a drop of new code at the earliest opportunity. As well
as reducing turnaround times for defect resolution, it minimises the impact on other
environment users. Confidence Testing will consist of a checklist (see appendix) of
streamlined, rapidly executable Siebel and Integration tests developed during previous
Reuters releases. Simplified versions of tests for new functionality can be added to this as
the development cycle progresses to broaden the coverage of the confidence test pack as
required.
The Assembly Blue environment enables testing of the complete set of legacy Siebel
Interfaces. Regression testing of these during SIT will focus on basic connectivity tests
and some basic End to End scenarios. This exercise requires a good deal of forward
planning and co-ordination with third parties within the Reuters organisation and as such
is identified as a risk area / high priority for the early stages of SIT.
The principle of confidence testing each release established during ST will be continued
during SIT. By this stage it is anticipated that the Confidence Test pack will incorporate
core 4.4 enhancements as well as core regression, whilst remaining a streamlined and
rapidly executable resource. In this way a reliable overview of the system can be made
quickly at each release.
Performance Testing
The GCAP Test Strategy provides for limited Performance Testing during both ST
and SIT, which is most logically incorporated into the Regression Test Plan. It is
suggested, however, that performance testing of this kind is only of true value in the fully
integrated Assembly Blue environment with the full build present. As such, testing will
consist of timed touch tests of various operations against benchmarks, run as a single
iteration in SIT when regression testing is complete.
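A timed touch test of this kind amounts to timing a single iteration of an operation and comparing it against a benchmark target. The sketch below illustrates the pattern only; the operation name and target values are assumptions for illustration, not figures from the GCAP strategy.

```python
# Sketch of a timed "touch test": time one iteration of an operation and
# compare the elapsed time against a benchmark target (in seconds).
# The operation and target below are illustrative placeholders.
import time

def touch_test(name, operation, target_seconds):
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    status = "PASS" if elapsed <= target_seconds else "FAIL"
    print(f"{name}: {elapsed:.2f}s against target {target_seconds}s -> {status}")
    return elapsed <= target_seconds

# Hypothetical stand-in for an operation such as opening an SR list view
result = touch_test("open SR list view", lambda: time.sleep(0.01), 10)
```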
Function
Ability to create & manage an Opportunity
Ability to create a Quote
Ability to validate & approve a Quote
Ability to manage a Quote
Ability to create & populate Order
Ability to submit Order
Ability to create Account & Account
hierarchy
Ability to manage Account
Ability to create Contact
Ability to promote Prospect to Contact
Ability to create & manage SRs
Service
&
manage
Message
Script ref
Sales 1.0 Sales Opportunity
SPA 1.0
SPA 1.0
SPA 1.0
SPA 1.0
Confidence Tests
Confidence Tests
CR_FT_CR422_TC002
List Management 3.0
Closed SR 1.0
Complaint SR 1.0
Full SR 1.0 E2E
Full SR 2.0 DC Refer
GSLA SR 1.0
GSLA SR 2.0
Hot Topics 1.0
PRT_FT_PRT01_TC001
PRT_FT_PRT02_TC002
PRT_FT_PRT02_TC004
PRT_FT_PRT04_TC001
Engineer Dispatch 1.0
FS 1.0
FS 2.0
Client Training 1.0
Client Training 2.0
Client Training 3.0
Client Training 4.0
Client Training 5.0
Confidence Tests
Email Outbound Forward
Email Reply
Confidence Tests
Functional Area
Marketing
Function
Ability to create, manage and execute Campaigns
Script ref
Campaign Management 1.0
Campaign Management 2.0
Campaign Management 3.0
Campaign Management 4.0
Data Quality
Management
DQM 1.0
DQM 3.0
Confidence Tests
Product Literature 1.0
Misc
Enabling Services
Products &
Pricing
Reporting
Function
Ability to create & manage an Opportunity
Order
create
Account
&
Account
Script ref
Sales 1.0 Sales Opportunity
Sales 2.0 Cancellation Opportunity
Sales 4.0 Opportunity TAS
Sales 5.0 New Business
SPA 1.0
SPA 2.0
SPA 1.0
SPA 2.0
FT_CR962_TC001
SPA 1.0
SPA 2.0
SPA 1.0
SPA 2.0
Confidence Tests
Functional Area
Function
Ability to manage Account
Service
Client Training
Ability to create & manage a Training Class
Script ref
FT_CR677_TC001
FT_CR677_TC002
Sales 3.0 Location Prospect (4.3 Env)
Confidence Tests
CR_FT_CR422_TC002
FT_CR422_TC001
FT_CR936_TC001
List Management 3.0
Assignment Rules 1.0
Closed SR 1.0
Complaint SR 1.0
CSS 1.0
Full SR 1.0 E2E
Full SR 2.0 DC Refer
Full SR 3.0 FS Refer
Full SR 4.0 FLO Change
Full SR 5.0 Entitlements
GSLA SR 1.0
GSLA SR 2.0
GSLA SR 3.0
GSLA SR 4.0
GSLA SR 5.0
HDB 1.0
Hot Topics 1.0
Hot Topics 2.0
Hot Topics 3.0
PRT_FT_PRT01_TC001
PRT_FT_PRT01_TC002
PRT_FT_PRT01_TC003
PRT_FT_PRT01_TC004
PRT_FT_PRT01_TC005
PRT_FT_PRT01_TC006
PRT_FT_PRT01_TC007
PRT_FT_PRT01_TC008
PRT_FT_PRT01_TC009
PRT_FT_PRT01_TC0010
PRT_FT_PRT02_TC002
PRT_FT_PRT02_TC003
PRT_FT_PRT02_TC004
PRT_FT_PRT02_TC005
PRT_FT_PRT02_TC005
PRT_FT_PRT03_TC001
PRT_FT_PRT04_TC001
PRT_FT_PRT04_TC002
PRT_FT_PRT04_TC003
Light SR 1.0
eChannel_FT_ES_44_TC001
Engineer Dispatch 1.0
Engineer Dispatch 2.0
Engineer Dispatch 3.0
FS 1.0
FS 2.0
FS 3.0
FS 4.0
Misc 4.3 Tests 1.0
Client Training 1.0
Client Training 2.0
Client Training 6.0
Client Training 7.0
Client Training 8.0
Client Training 9.0
Client Training 10.0
Client Training 11.0
Client Training 3.0
Client Training 4.0
Client Training 5.0
Client Training 12.0
Client Training 13.0
FT_CR722_TC001
Functional Area
Function
Ability to create & send emails
Communications
Marketing
Data Quality
Management
Enabling Services
Misc
Products &
Pricing
Script ref
CPM Email 1.0 - Unsolicited mail
CPM Email 2.0 - Activity Reply
CPM Email 3.0 - SR Reply
CRMC Support Email 1.0 - Unsolicited mail
CRMC Support Email 2.0 - Activity Reply
CRMC Support Email 3.0 - SR Reply
Data Helpdesk Email 1.0 - Unsolicited mail
Data Helpdesk Email 2.0 - Activity Reply
Data Helpdesk Email 3.0 - SR Reply
Email Outbound Forward
Email Reply
eSupport Email 1.0 - Unsolicited mail from
unregistered account
eSupport Email 2.0 - Unsolicited mail from
registered account
eSupport Email 2.1 - Unsolicited mail from
registered account
eSupport Email 2.2 - Unsolicited mail from
registered account
eSupport Email 2.3 - Unsolicited mail from Active
eSpresso User
eSupport Email 3.0 - SR Reply
eSupport Email 4.0 - Activity reply
Resolver Group Email 1.0 - Unsolicited mail
Resolver Group Email 2.0 - Activity Reply
Resolver Group Email 3.0 - SR Reply
Misc 4.3 Tests 5.0
Campaign Management 1.0
Campaign Management 2.0
Campaign Management 3.0
Campaign Management 4.0
CT Products 1.0
FT_CR896_TC001
DQM 1.0
DQM 2.0
DQM 3.0
DQM 4.0
DQM 5.0
Correspondence 1.0
Product Literature 1.0
List Management 1.0
List Management 4.0
List Management 5.0
Audit Trail 1.0
Calendar 1.0
Business Direct 1.0 Basic Tests
Export Functionality 1.0
FT_CR717_TC001
FT_CR811_TC001
FT_CR926_TC001
Misc 4.3 Tests 4.0
Homepage 1.0
Misc 4.3 Tests 2.0
Misc 4.3 Tests 3.0
FT_CR963_TC001
FT_CR983_TC001
Global Price Lists 1.0
Pricing Administration 1.0
Pricing Administration 2.0
Product Admin 1.0
Actuate 1.0
Actuate 2.0
Actuate 3.0
Analytics Scenarios 1.0
Analytics Scenarios 2.0
Analytics Scenarios 3.0
Functional Area
Function
Script ref
Analytics Scenarios 4.0
Analytics Scenarios 5.0
Analytics Scenarios 6.0
Function
Script ref
RQ Interface 1.0
Reuters Q Interface
Customer Zone
3000 Xtra
RQ Interface 2.0
PRM Portal
Factiva
ECCO Interface
ECCO 1.0
RISC 1.0
RISC 2.0
RISC 3.0
CSS & Sales Portal 1.0
RISC
CSS / Sales Portal
Skipper
TPASS 1.0
TPASS
Venus
Venus 1.0
Espresso Interface
Oracle Projects
SAI
Skipper 1.0
Functional Area
Function
Script ref
eChannel_BC01_TC001
eChannel_BC02_TC001
eChannel_BC03_TC004
eChannel_BC07_TC001
eChannel_FT_BC_05_TC001
eChannel_FT_ES_41_TC001
eChannel_FT_ES_65_TC001
eService_ES24_TC001
eService_ES53_TC001
eService_FT_ES_8_TC001-TC003
eService_FT_ES_8_TC004
eService_FT_ES_20,55,56_TC001
eService_FT_ES_25_TC001
eService_FT_ES_53_TC002
eService
Performance Testing
The 21 KPIs to be tested during SIT are detailed in the appendix.
Criteria description
Owner
Regression Test Plan has been completed, approved and distributed to the
Solution Delivery Test Manager.
Operations team
Development Manager
Exit Criteria
The table below describes the regression component of the exit criteria for SIT.
No
Criteria description
Owner
All Regression defects are raised in Test Director and meet defect exit
criteria.
Criteria description
Owner
Resumption
No
Criteria description
Owner
Evidence that environment build is valid and confidence tests are passed
successfully
Test Plan
The detailed test execution sequence is not yet finalised.
Risk
Likelihood
Impact
Mitigation
???
High
Medium
Medium
Medium
Medium
???
Appendix
CRM 4.3 Test Scripts
Module
eChannel
eChannel
eChannel
eChannel
eChannel
eChannel
eChannel
eChannel
eChannel
eChannel
eService
eService
eService
eService
eService
eService
eService
eService
eService
Web
Training
Web
Training
Web
Training
Web
Training
Web
Training
Web
Training
Web
Training
Web
Training
Web
Training
Web
Training
Web
Training
Web
Training
Web
Training
Web
Training
Web
Training
Web
Training
Web
Priority
(High/Medium/
Low)
High
High
High
Low
Medium
High
Medium
Medium
Medium
Low
Medium
High
Medium
High
Medium
Medium
Medium
Medium
High
Included in
regression
pack Y/N
Y
Y
Y
N
Y
Y
Y
Y
N
N
Y
Y
Y
Y
Y
Y
Y
Y
N
CRM4.3_FT_WT_UC01_TC001-TC006
High
CRM4.3_FT_WT_UC01_TC008
Low
CRM4.3_FT_WT_UC02_TC001-TC005
High
Medium
CRM4.3_FT_WT_UC03_TC001-TC005
High
CRM4.3_FT_WT_UC03_TC007
Low
CRM4.3_FT_WT_UC04_TC001-TC003
Medium
CRM4.3_FT_WT_UC05_TC001-TC004
Medium
CRM4.3_FT_WT_UC06_TC001-TC004
Medium
CRM4.3_FT_WT_UC07_TC001
Medium
CRM4.3_FT_WT_UC08_TC001
Medium
CRM4.3_FT_WT_UC09_TC001
Medium
High
CRM4.3_FT_WT_UC21_TC006
Medium
CRM4.3_FT_WT_UC21_TC007
High
CRM4.3_FT_WT_UC21_TC008
CRM4.3_FT_WT_UC21_TC009
Medium
Medium
N
N
CRM4.3_FT_WT_UC02_TC008
CRM4.3_FT_WT_UC21_TC001-TC004
Comment
Low priority
Low priority
Training
Web
Training
Web
Training
Web
Training
Web
Training
Web
Training
Web
Training
PRT
PRT
PRT
PRT
PRT
PRT
PRT
PRT
PRT
PRT
PRT
PRT
PRT
PRT
PRT
PRT
PRT
PRT
IC
IC
IC
IC
IC
IC
IC
IC
IC
IC
IC
IC
IC
IC
IC
CR
CR
CR
CR
CR
CR
CR
CR
CR
CR
CR
CRM4.3_FT_WT_UC22_TC001-TC004
High
Medium
High
CRM4.3_FT_WT_UC24_TC001
Medium
CRM4.3_FT_WT_UC25_TC001
Medium
CRM4.3_FT_WT_UC29_TC001-TC002
CRM4.3_FT_PRT01_TC001
CRM4.3_FT_PRT01_TC002
CRM4.3_FT_PRT01_TC003
CRM4.3_FT_PRT01_TC004
CRM4.3_FT_PRT01_TC005
CRM4.3_FT_PRT01_TC006
CRM4.3_FT_PRT01_TC007
CRM4.3_FT_PRT01_TC008
CRM4.3_FT_PRT01_TC009
CRM4.3_FT_PRT01_TC010
CRM4.3_FT_PRT02_TC002
CRM4.3_FT_PRT02_TC003
CRM4.3_FT_PRT02_TC004
CRM4.3_FT_PRT02_TC005
CRM4.3_FT_PRT03_TC001
CRM4.3_FT_PRT04_TC001
CRM4.3_FT_PRT04_TC002
CRM4.3_FT_PRT04_TC003
CRM4.3_FT_IC030_TC001
CRM4.3_FT_IC030_TC002
CRM4.3_FT_IC030_TC003
CRM4.3_FT_IC034_TC001
CRM4.3_FT_IC034_TC002-TC004
CRM4.3_FT_IC037_TC001
CRM4.3_FT_IC038_TC001
CRM4.3_FT_IC085_TC001
CRM4.3_FT_IC085_TC002
CRM4.3_FT_IC087_TC001
CRM4.3_FT_IC088_TC001
CRM4.3_FT_IC089_TC001
CRM4.3_FT_IC089_TC002
CRM4.3_FT_IC091_TC001
CRM4.3_FT_IC35_TC001
CRM4.3_FT_ISQ 1129067_TC001
CRM4.3_FT_CR983_TC001
CRM4.3_FT_CR963_TC001
CRM4.3_FT_CR962_TC001
CRM4.3_FT_CR961_TC001
CRM4.3_FT_CR960_TC001
CRM4.3_FT_CR936_TC001
CRM4.3_FT_CR926_TC001
CRM4.3_FT_CR903_TC001
CRM4.3_FT_CR902a_TC001
CRM4.3_FT_CR896_TC001
High
High
Medium
Medium
Medium
Medium
Medium
Medium
Medium
Medium
Medium
High
Medium
High
Medium
Medium
High
Medium
Medium
High
Medium
Medium
High
Medium
High
High
Medium
Medium
Medium
Medium
High
Medium
Medium
Medium
Medium
Medium
Medium
Medium
Low
Low
Medium
Medium
Medium
Medium
Medium
N
Y
Y
Y
Y
Y
Y
Y
Y
Y
Y
Y
Y
Y
Y
Y
Y
Y
Y
N
N
N
N
N
N
N
N
N
N
N
N
N
N
N
N
Y
Y
Y
N
N
Y
Y
N
N
Y
CRM4.3_FT_WT_UC22_TC005
CRM4.3_FT_WT_UC23_TC001-TC004
Materials
Incorporated in existing Test Materials
Incorporated in existing Test Materials
Incorporated in existing Test Materials
Incorporated in existing Test Materials
Incorporated in existing Test Materials
Incorporated in existing Test Materials
Low priority
Low priority
CR
CR
CR
CR
CR
CR
CR
CR
CR
CR
CR
CR
CR
CR
CRM4.3_FT_CR829_TC001
CRM4.3_FT_CR811_TC003
CRM4.3_FT_CR811_TC002
CRM4.3_FT_CR811_TC001
CRM4.3_FT_CR722_TC001
CRM4.3_FT_CR717_TC001
CRM4.3_FT_CR677_TC002
CRM4.3_FT_CR677_TC001
CRM4.3_FT_CR519_TC001
CRM4.3_FT_CR422_TC002
CRM4.3_FT_CR422_TC001
CRM4.3_FT_CR1172_TC001
CRM4.3_FT_CR1113_TC001
CRM4.3_FT_CR172_TC001
Medium
Low
Low
Medium
Medium
Medium
Medium
Medium
Medium
High
Medium
High
High
Medium
N
N
N
Y
Y
Y
Y
Y
N
Y
Y
N
N
N
Low priority
Low priority
Description
A&N
CTI
Customer
Zone
ECCO
(IBM)
Espresso
Interface
point
with CRM 4.4
Through Siebel
Direction
of
communication
Type
of
passed
information
Through Siebel
Through Siebel
One way,
Siebel
Through
via MQ
One
eSpresso
Siebel
Siebel,
Through Siebel /
data migration
CZ
way,
Factiva
Oracle
Projects
(SPA
Orders)
Launches
Siebel
RQ
SAI
(Compass /
ISIS)
Siebel
Factiva
RISC
Reuters Integrated Service Console: a web portal from which CSRs can run
technical tasks remotely against specific assets on client sites. Launched
from Siebel, the interface shows event, hierarchy and diagnostic tools in a
location / server specific manner, ensuring that relationships between
delivery chain devices are accurately reflected.
Siebel,
Oracle
Projects
Siebel
Siebel, via MQ
Siebel
One
way
COMPASS
/
ISIS Siebel
Account Teams.
The interface compares previously extracted data with the current data and
only sends transactions for data that has been modified.
Skipper
TPASS
Venus
Siebel
Siebel
Status
Functional Area
Comments / Actions
Performance KPIs
CSS
Number
Screen
Action
Action:
Contact
Contact
Parameters:
Action:
Drill down
surname
View:
SR
Activity
on
Contact Service
List Applet
Action:
View:
Service
View
Applet:
Action:
View:
SR
Applet:
Target 1
Target 2
30
10
10
10
10
contact
Applet:
Request
Measure
Request
Detail
Action:
Sales
Search Centre Accounts Location
6
Accounts
Contacts
Parameters:
City
Account Name
View:
Applet:
Parameters:
View:
Applet:
8
Activities
Parameters:
10
10
10
Time
for
message to pop
up
30
50
10
20
10
20
View:
All Opportunities
Organizations
Applet:
RCRM Opportunity
Applet - Primary
Opportunities
Across
List
Parameters:
View:
Action:
View:
RCRM
View
Solutions
10
11
12
13
Order
Quote
Approval
Approvals
Quote
Action:
View:
Applet:
Parameters:
Quote Number
View:
Action:
Quote
Client Training
View:
14
15
Marketing
Contacts
Applet:
Action:
View:
Activity
Applet:
Action:
Copy an activity
View:
Campaign
List
20
10
20
N/A
10
Time for SR
list to reappear
after save
N/A
10
Applet:
16
Campaign
Action:
Administration
Campaign
Administration
List Applet
On list applet, query for
Campaigns in UK
Parameters:
View:
Applet:
Action
N/A
N/A
Query for SR #
Parameters:
SR Number
eService
17
SR
View:
18
SR
Applet:
Action:
Save new SR
All
19
Login
20
50
20
Home Page
10
60
21
Time between
send
and
Email
Outbound
being set to
status
Queued
10
Purpose
This document provides some of the best practices to be followed when developing a Functional Test
Script.
Below are some of the best practices/guidelines that can be applied to make a test script more
valuable:
Verification routines. Include routines that verify that the requested actions
were performed, expected values were displayed, and known states were reached.
Make sure that these routines have appropriate comments and log tracing.
Also include routines that perform boundary testing on fields that should accept only a
specific range of values. Test values above, below, and within the specified limits. For
example, attempt to insert and commit the values 0, 11, and 99 in a field that
should accept only values between 1 and 31. Boundary tests also include field-length
testing. For example, a field that accepts 10 characters should be tested with string
lengths greater than, less than, and equal to 10.
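The boundary values described above can be generated mechanically rather than listed by hand. A minimal sketch, using the limits from the examples in the text (a numeric field accepting 1 to 31, and a 10-character field):

```python
# Sketch: derive boundary test values for a numeric field accepting 1..31,
# and length-test strings for a field accepting 10 characters.
def numeric_boundaries(low, high):
    # One value below, at, and above each limit, plus a mid-range value
    return [low - 1, low, (low + high) // 2, high, high + 1]

def length_cases(max_len):
    # Strings shorter than, equal to, and longer than the field length
    return ["x" * (max_len - 1), "x" * max_len, "x" * (max_len + 1)]

print(numeric_boundaries(1, 31))   # [0, 1, 16, 31, 32]
print([len(s) for s in length_cases(10)])   # [9, 10, 11]
```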
needed to be renamed, the initial query should be repeated after each record is renamed, until the row count
is 0.
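The repeat-until-empty pattern above, re-running the query after each rename until the row count is 0, can be sketched as a simple loop. The `find_matches` and `rename` functions below are hypothetical stand-ins for the real application calls:

```python
# Sketch: rename matching records one at a time, repeating the initial
# query after each rename until the row count reaches 0.
# `find_matches` and `rename` are hypothetical stand-ins for the actual
# application's query and rename operations.
records = ["Dup A", "Dup B", "Dup C"]

def find_matches():
    return [r for r in records if r.startswith("Dup")]

def rename(record):
    records[records.index(record)] = record.replace("Dup", "Renamed")

renamed = 0
while True:
    matches = find_matches()
    if not matches:          # row count is 0 -> done
        break
    rename(matches[0])       # rename one record, then repeat the query
    renamed += 1

print(renamed)  # 3
```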
Test Automation support for the following new features in the Siebel
8.0 UI
Task Based UI
InkData Control
New API called Siebel Test Optimizer (formerly known as Siebel Test
Express)
Task Based UI
SiebTask
SiebTaskStep
SiebTaskUIPane
SiebTaskLink
SiebTask
Control Specification
SiebTask
Parent: SiebApplication
SiebTaskStep
Control Specification
SiebTaskStep
Parent: SiebTask
SiebTaskUIPane
Control Specification
SiebTaskUIPane
Parent: SiebApplication
SiebTaskLink
Control Specification
SiebTaskLink
Parent: SiebTaskUIPane
InkData Control
SiebInkData
SiebInkData
Control Specification
SiebInkData
Parent: SiebApplet
QTP provides the UI features that interact with the Siebel-provided API.
It is supported on the following J2EE application servers: WebSphere 6.0, WebLogic 9.0 and JBoss 4.0.2.
Fill out the connection and login parameters on the first page of the
wizard and press Next.
Once a QTP Object Repository (OR) has been created and saved, you can start authoring test
scripts.
The Siebel Functional Testing module enables automated testing of Siebel 7.7, 7.8 and
8.x applications.
It records and plays back browser actions in Internet Explorer 6.x, 7.x
and 8.x.
Firefox is not supported for Siebel.
Click File > New
Open the .CFG file for the Siebel application you are testing on the Siebel server
Set the EnableAutomation and AllowAnonUsers switches to TRUE in the [SWE]
section as follows:
[SWE]
EnableAutomation = TRUE
AllowAnonUsers = TRUE
Restart the Siebel Server
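As a sanity check, the [SWE] switches above can be verified programmatically. A minimal sketch using Python's standard configparser; the helper function and sample text are illustrative, not part of Siebel's tooling, and assume the .cfg section is plain INI-style:

```python
# Sketch: check that a Siebel .cfg file has Siebel Test Automation enabled
# in its [SWE] section. This helper is illustrative only; it assumes the
# relevant section parses as plain INI text.
import configparser

def automation_enabled(cfg_text: str) -> bool:
    parser = configparser.ConfigParser()
    parser.read_string(cfg_text)
    if "SWE" not in parser:
        return False
    swe = parser["SWE"]
    # Option names are case-insensitive in configparser
    return (swe.get("EnableAutomation", "").upper() == "TRUE"
            and swe.get("AllowAnonUsers", "").upper() == "TRUE")

sample = """[SWE]
EnableAutomation = TRUE
AllowAnonUsers = TRUE
"""
print(automation_enabled(sample))  # True
```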
If Siebel Test Automation is not enabled properly on the Siebel server side, it will
prevent OpenScript from capturing Siebel HI actions
When you navigate to Siebel with the ?SWECmd=AutoOn URL for the first
time (even just in IE), you should be prompted to download the Siebel Test
Automation plugin to your browser, for example:
http://siebel/callcenter_enu/start.swe?SWECmd=AutoOn
When you then look in the Downloaded Program Files folder in IE, you will see that the
Siebel Test Automation plugin has been downloaded to your browser.
When you subsequently record scripts that load and click on Siebel HI controls, you will see a
SiebelFTInit command and SiebelFT actions (instead of just Web actions) in your
scripts.
If you do not see this, then check whether Siebel Test Automation is properly enabled on
the Siebel server
Change the Internet Explorer cache setting for checking for newer versions of stored
pages to "Every time I visit the webpage"
(Tools > Internet Options > General > Browsing History > Settings).
Log in to Siebel at least once on the machine before recording, to load all ActiveX controls.
Siebel Buttons
Siebel Picklists
Confirmation Dialogs
Siebel Menus
Search Button
Not all actions are recorded; some are only intended to be manually inserted
Parameterization
Use the Properties dialog in tree view to edit parameter values
Substitute variable
Connect to Databank
Object Identification
Objects are identified using a Siebel Test Automation CAS XPath
Object Test
Used to verify an object's properties
Table Test
Stored in recordedData
The Siebel Test Automation Load Correlation Library is a Win32 DLL that integrates
with OpenScript.
It plugs into a Siebel application and gathers additional information used for
correlating the script.
Validation
Validation cannot be done via a Text-Matching Test through the UI for Siebel HI
controls.
The Siebel Correlation Library also adds extra error checking to watch for
common Siebel error conditions.
Parameterization
Siebel scripts use standard OpenScript parameterization / Data Banking