User's Manual


V1.2

XQual
Cidex 436, chemin des Tourres, 06330 Roquefort-les-Pins, France
Table of Contents

1. OBJECTIVES
2. GENERAL OVERVIEW
3. BASICS
  3.1 USER TREE
    3.1.1 CREATE A COMPANY
    3.1.2 CREATE A HIERARCHY IN THE COMPANY
    3.1.3 CREATE A USER
    3.1.4 SUBMIT A NEW ABSENCE
    3.1.5 EDIT AN ABSENCE
    3.1.6 CHECK THE CALENDARS
    3.1.7 CHANGING ADMIN PASSWORD
  3.2 SUT TREE
    3.2.1 CREATE A SUT
    3.2.2 CREATE A SUT INHERITING REQUIREMENTS FROM ANOTHER SUT
  3.3 AGENT TREE
    3.3.1 ADD MY LOCAL HOST
    3.3.2 ADD ALL NECESSARY HOSTS
  3.4 REQUIREMENT TREE
    3.4.1 CREATE A CATEGORY
    3.4.2 CREATE A REQUIREMENT
    3.4.3 EDIT A REQUIREMENT
  3.5 SPECIFICATION TREE
    3.5.1 CREATE A SPECIFICATION
    3.5.2 EDIT A SPECIFICATION
  3.6 PROJECT TREE
    3.6.1 CREATE A PROJECT
    3.6.2 CREATE A TASK
    3.6.3 CREATE A SPRINT
    3.6.4 EDIT A SPRINT
    3.6.5 ALLOCATE SOME RESOURCES TO A SPRINT
    3.6.6 ASSOCIATE SOME TASKS TO A SPRINT
    3.6.7 DAILY UPDATE PROGRESS OF THE TASKS OF A SPRINT
    3.6.8 SEE THE VELOCITY CHARTS/CHECK THE STATUS OF THE SPRINT
  3.7 TEST TREE
    3.7.1 CREATE A TEST
    3.7.2 ASSOCIATE A TEST TO SPECIFICATIONS
    3.7.3 CREATE A TESTCASE
    3.7.4 MODIFY THE TESTPLAN OF A TESTCASE
  3.8 CAMPAIGN TREE
    3.8.1 CREATE A CAMPAIGN
    3.8.2 ORDER THE TESTS IN THE CAMPAIGN
    3.8.3 CREATE A CAMPAIGN SESSION
    3.8.4 RUN A CAMPAIGN SESSION
    3.8.5 SEE THE EXECUTION DETAILS
    3.8.6 GET THE RESULTS
    3.8.7 CHECK THE PROGRESSION OF A CAMPAIGN
  3.9 DEFECT TREE
    3.9.1 CREATE A DEFECT
    3.9.2 EDIT A DEFECT
    3.9.3 LINK A FAILED TEST TO A DEFECT
4. GETTING MORE INTO THE DETAILS
  4.1 LOCALIZATION
  4.2 INTERNATIONALIZATION
  4.3 CHANGE TRACKING
  4.4 TESTING COVERAGE/TRACEABILITY MATRIX
    4.4.1 GLOBAL REQUIREMENTS TRACEABILITY MATRIX
    4.4.2 GLOBAL SPECIFICATIONS TRACEABILITY MATRIX
    4.4.3 DETAILED REQUIREMENTS COVERAGE
    4.4.4 DETAILED SPECIFICATIONS COVERAGE
    4.4.5 DETAILED TESTS COVERAGE
    4.4.6 DETAILED CAMPAIGN COVERAGE
  4.5 SCHEDULING TEST EXECUTION
  4.6 TRACKING THE TEST IMPLEMENTATION
  4.7 DEFECTS REPORTING
    4.7.1 PER-USER REPORTS
    4.7.2 PER-SEVERITY REPORTS
    4.7.3 PER-PRIORITY REPORTS
    4.7.4 SUBMISSION/RESOLUTION RATES
  4.8 ATTACHMENTS
    4.8.1 ADD AN ATTACHMENT
    4.8.2 DOWNLOAD/OPEN AN ATTACHMENT
  4.9 GENERATE DOCUMENTATION
    4.9.1 REQUIREMENTS BOOKS
    4.9.2 SPECIFICATIONS BOOKS
    4.9.3 PROJECTS BOOKS
    4.9.4 TESTPLANS
    4.9.5 TEST REPORTS
  4.10 CUSTOMIZING THE DOCUMENTS
    4.10.1 CHANGING THE LOGO
    4.10.2 CUSTOMIZING THE REPORTS
  4.11 SEARCHING
    4.11.1 BY NAME
    4.11.2 BY ID
    4.11.3 ADVANCED AND PLAIN TEXT SEARCH
      4.11.3.1 Requirements
      4.11.3.2 Specifications
      4.11.3.3 Tasks
      4.11.3.4 Tests
      4.11.3.5 Defects
  4.12 IMPORTING DATA
    4.12.1 FROM CSV
      4.12.1.1 Tests and testcases
    4.12.2 FROM XML
      4.12.2.1 Tests and testcases
      4.12.2.2 Requirements
      4.12.2.3 Specifications
5. THE EXPERT’S CORNER
  5.1 TESTS ATTRIBUTES
    5.1.1 ASSOCIATE SOME ATTRIBUTES TO A TEST
    5.1.2 CREATE AND ASSOCIATE A NEW ATTRIBUTE TO A TEST
    5.1.3 EDIT AN ATTRIBUTE
    5.1.4 CREATE A CAMPAIGN BASED ON ATTRIBUTE FILTER
  5.2 DEPENDENCIES BETWEEN TESTS
  5.3 TEST EXECUTION HISTORY
  5.4 DURATION OF TEST EXECUTION
    5.4.1 ESTIMATED DURATION OF A CAMPAIGN
    5.4.2 AVERAGE DURATION OF A TEST
6. LAUNCHERS
  6.1 DEFAULT LAUNCHERS
    6.1.1 MANUAL LAUNCHER (MANUAL.JAR)
      6.1.1.1 Configuration
      6.1.1.2 Control Bar
      6.1.1.3 Timeout per test
      6.1.1.4 Tutorial: Creating and executing manual tests
        6.1.1.4.1 Declare the agent and the SUT
        6.1.1.4.2 Create a dedicated category for manual tests
        6.1.1.4.3 Create a test
        6.1.1.4.4 Run an individual test
        6.1.1.4.5 Creating a test campaign
        6.1.1.4.6 Run a campaign session
    6.1.2 SIMPLE MANUAL LAUNCHER (SIMPLE_MANUAL.JAR)
    6.1.3 AUTOIT LAUNCHER (AUTOIT.JAR)
      6.1.3.1 Configuration
      6.1.3.2 Requirements
      6.1.3.3 Tutorial: Creating and executing AutoIt tests
        6.1.3.3.1 Prerequisites
        6.1.3.3.2 Create a dedicated category for AutoIt tests and create a test
        6.1.3.3.3 Creating a test campaign
        6.1.3.3.4 Run a campaign session
    6.1.4 PERL LAUNCHER (PERL.JAR)
      6.1.4.1 Configuration
      6.1.4.2 Requirements
    6.1.5 TCL LAUNCHER (TCL.JAR)
      6.1.5.1 Configuration
6.1.5.2 Requirements ................................
................................................................
................................
................................................................
.....................................................
..................... 91
6.1.6 AUTOMATEDQA®
QA TESTCOMPLETE V7 LAUNCHER (TESTCOMPLETE7.JAR) ........................ 92
6.1.6.1 Configuration ................................
................................................................
................................
................................................................
......................................................
...................... 92
6.1.6.2 Requirements ................................
................................................................
................................
................................................................
.....................................................
..................... 92
6.1.7 COMPUWARE® TESTPARTNER LAUNCHER (TESTPARTNER.JAR) .......................................
................................ 93
6.1.7.1 Configuration ................................
................................................................
................................
................................................................
......................................................
...................... 93
6.1.7.2 Requirements ................................
................................................................
................................
................................................................
.....................................................
..................... 93
6.1.8 MICROSOFT® VISUALSTUDIO LAUNCHER (VISUALSTUDIO.JAR) .........................................
................................ 94
6.1.8.1 Configuration ................................
................................................................
................................
................................................................
......................................................
...................... 94
6.1.8.2 Requirements ................................
................................................................
................................
................................................................
.....................................................
..................... 94
6.1.9 BEANSHELL LAUNCHER (BEANSHELL.JAR)................................
................................................................
................................
....................................... 95
6.1.9.1 Configuration
nfiguration ................................
................................................................
................................
................................................................
......................................................
...................... 95
6.1.9.2 Requirements ................................
................................................................
................................
................................................................
.....................................................
..................... 95
6.1.10 FROGLOGIC® SQUISH LAUNCHER (SQUISH.JAR)................................
............................................................
............................ 97
6.1.10.1 Configuration ................................................................
................................................................
................................................................
....................................................
.................... 97
6.1.10.2 Requirements ................................................................
................................................................................................
...................................................
................... 98
6.1.10.3 Tutorial: Creating and executing Squish tests................................................................
................................................................. 98
6.1.10.3.1 Prerequisites ................................
................................................................
................................
................................................................
..............................................................
..............................98
6.1.10.3.2 Create a dedicated category for Squish tests and create two tests ..........................................
................................ 99
6.1.10.3.3 Creating a test campaign ................................................................
................................................................................................
................................
...........................................99
6.1.10.3.4 Run a campaign session ................................................................
................................................................................................
................................
.........................................100
6.1.11 JUNIT V3 LAUNCHER (JUNIT3.JAR) ................................
................................................................
.............................................
............. 101
6.1.12 JUNIT V4 LAUNCHER (JUNIT4.JAR) ................................
................................................................
.............................................
............. 101
6.1.12.1 Configuration ................................................................
Confi ................................................................
................................................................
..................................................
.................. 101
6.1.12.2 Requirements ................................................................
................................................................................................
.................................................
................. 101
6.1.12.3 Tutorial: Creating and executing JUnit tests ................................................................
................................................................. 101
6.1.13 PYUNIT LAUNCHER (PYUNIT.JAR) ................................
................................................................
...............................................
............... 102
6.1.13.1 Configuration ................................................................
................................................................
................................................................
..................................................
.................. 102
6.1.13.2 Requirements ................................................................
................................................................................................
.................................................
................. 102
6.1.13.3 Tutorial: Creating and executing PyUnit tests ...............................................................
............................................................... 102
6.1.14 NUNIT LAUNCHER (NUNIT.JAR) ................................
................................................................
..................................................
.................. 103
6.1.14.1 Configuration ................................................................
................................................................
................................................................
..................................................
.................. 103
6.1.14.2 Requirements ................................................................
................................................................................................
.................................................
................. 103
6.1.14.3 Tutorial: Creating and executing NUnit tests ................................................................
................................................................. 103
6.1.15 TESTNG LAUNCHER (TESTNG.JAR) ................................................................
............................................................................
............ 104
6.1.15.1 Configuration ................................................................
................................................................
................................................................
..................................................
.................. 104
6.1.15.2 Requirements ................................................................
................................................................................................
.................................................
................. 104
6.1.15.3 Tutorial: Creating anand d executing TestNG tests .............................................................
............................................................. 104
6.1.16 JALIAN® MARATHON LAUNCHER (MARATHON.JAR) ................................
......................................................
...................... 105
6.1.16.1 Configuration ................................................................
................................................................
................................................................
..................................................
.................. 105
6.1.16.2 Requirements ................................................................
................................................................................................
.................................................
................. 105
6.1.16.3 Tutorial: Creating anand d executing Marathon tests ..........................................................
.......................................................... 105
6.1.17 EXECUTABLE LAUNCHER (EXE.JAR) ................................................................
............................................................................
............ 106
6.1.17.1 Configuration ................................................................
................................................................
................................................................
..................................................
.................. 106
6.1.17.2 Requirements ................................................................
................................................................................................
.................................................
................. 106
6.1.18 SIMULATION RANDOM LAUNCHER
AUNC (RANDOM.JAR) ........................................................
........................................................ 107
6.1.19 SIMULATION SUCCESS LAUNCHER (SUCCESS.JAR) ................................
......................................................
...................... 107
6.2 CUSTOM LAUNCHERS ................................................................
................................................................................................
................................
..................................... 107
7. MEMORY PROFILING ................................................................
....................................................................................
.................................................... 109
7.1 BROWSING TREES ................................
................................................................
................................
................................................................
..........................................
.......... 109
7.2 RUNNING MANUAL TESTS ON XAGENT ................................
................................................................
.............................................
............. 109
1. Objectives
This document will introduce users to XStudio’s Data Model and provide the basic information on how to use XStudio.
2. General overview
Access to XStudio is restricted to users who have suitable credentials. A login process is used to authenticate the user before he can use the system. Each user is then granted a list of permissions.

By default, an admin account is created at installation time. The admin has (among others) the permission to create users. If you do not know your credentials, contact your XQual Administrator.

XQual Studio manages all entities involved in the process of testing. This includes:
• Users,
• SUTs (Systems Under Test),
• Agents,
• Requirements,
• Specifications,
• Tests,
• Test campaigns,
• Defects.

Each of these entities can be smoothly and conveniently organized in separate trees.
3. Basics
Here are the main entities managed in XStudio’s Data Model:

Entity Function

Company: Several companies will be involved in the testing process:
• the company that delivers the product to test
• the company in charge of writing the testplan
• the company in charge of implementing and executing the tests
Of course, all these companies can be the same.

User: Each company has users that will be involved in the testing process as:
• author of the testplan
• test developer
• test operator
• task performer

SUT (System Under Test): What we want to test. This can be a software or a hardware target.

Agent: Tests can be run locally or on any host having XAgent installed and running. All hosts with XStudio or XAgent MUST be referenced in the Agent tree.

Requirement: Features required for the SUT.

Specification: Deduced from the requirements, the specifications precisely detail each function of the SUT.

Project: A generic project.

Task: A project is made of tasks that will be spread over different sprints.

Sprint: Some tasks are associated to a sprint. An intermediate deliverable will come out of each sprint.

Category: Generally, there will be different categories of tests for one single product. A category is characterized by a unique way to run all the tests under this category.

Test: Tests are developed based on the specifications. Each test must verify one particular item of the specifications. Tests can include different testcases.

Campaign: A campaign is a selection of tests. A campaign can be executed several times on different versions of the product: these executions are called campaign sessions.

Campaign Session: A campaign session is an execution instance of a campaign. This includes results from the execution associated with a specific configuration of execution.

Defect: Campaign sessions will highlight the presence of defects in the SUT. Several failures in a campaign session may be due to the same defect. Analyzing the test report allows creating these defects.

Table 1 - Entities
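The relationships among the execution-related entities above can be sketched in code. The sketch below is illustrative only: the class and field names are assumptions for this example, not XStudio's actual API. It shows the key relations from Table 1: a campaign groups tests, each execution of a campaign produces a campaign session against one SUT version, and a defect is created from failures observed in a session.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Test:
    name: str

@dataclass
class Campaign:
    # A campaign is a selection of tests.
    name: str
    tests: List[Test] = field(default_factory=list)

@dataclass
class CampaignSession:
    # An execution instance of a campaign, run against one SUT version.
    campaign: Campaign
    sut_version: str
    failures: List[Test] = field(default_factory=list)

@dataclass
class Defect:
    # Several failures in a session may be due to the same defect.
    summary: str
    found_in: CampaignSession

login = Test("login")
checkout = Test("checkout")
campaign = Campaign("smoke", [login, checkout])

# The same campaign executed twice yields two campaign sessions.
s1 = CampaignSession(campaign, "1.0", failures=[checkout])
s2 = CampaignSession(campaign, "1.1", failures=[])

defect = Defect("checkout fails", found_in=s1)
print(len(campaign.tests))          # 2
print(defect.found_in.sut_version)  # 1.0
```

Note how the campaign itself is version-independent; only its sessions record which SUT version was exercised.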

As you will see, XStudio uses trees a lot. Trees allow managing entities in a very flexible way. Each tree is associated with a few icons:

Tree Button Action


Refresh
Expand all
Expand all except for leaves
Collapse all
Show closed items (available for sprints, tasks and defects)
Display the previous/next page

Table 2 - Tree Buttons


All these icons are used throughout XStudio’s GUI. They may be associated with an additional overlay icon whose meaning is the following:

Overlay Represents
Create
Delete
Move
Copy
Edit
View
Initialized
Playing
Paused
Stopped
Open
Add
Remove
Search
Select
Special item (read-only)
Move to the right
Move to the left
Move up
Move down
Show items (available for closable items such as sprints, tasks or defects)

Table 3 - Overlay Icons

XStudio’s GUI is based on a simple and clean design:
• A left panel including a tree
• A right panel:
o showing information concerning the element selected in the tree
o proposing a toolbar allowing the user to execute some specific actions on the element selected in the tree.
3.1 User Tree
Here is a typical user tree. It immediately shows several pieces of useful information:
• the total number of users
• the number of users in each company
• the number of users in each folder
• the status of the users (enabled/disabled, indicated by the color of the icon)

Figure 1 - User Tree

Hierarchies, working groups and teams can be easily managed in XStudio by creating a structure of folders hosting users.

3.1.1 Create a company

The first thing to do is to create your own company. To do this:
• in the tree, select the root folder
• on the right panel, click on the create company button
• enter the name of the company and submit
• immediately, the company appears in the tree

Note: this company entity will be automatically shared in the SUT tree.
3.1.2 Create a hierarchy in the company
Then, it’s important to properly define the working groups and teams in the company. This can be achieved by creating a complete tree of folders and sub-folders. To do this:
• in the tree, select a company
• on the right panel, click on the create folder button
• enter the name of the folder and submit
• immediately, the folder appears in the tree

Of course, a sub-folder can be created inside a folder.
Redo the same operation until you are satisfied with the organization.

3.1.3 Create a user

Now that you have a company with its internal organization defined, you’re ready to create some users. To do so:
• in the tree, select a folder
• on the right panel, click on the create user button
• a dialog box including two tabs is displayed
• fill the Details tab with the user name, password, preferred language, email address and location. Do not forget to check the Enabled checkbox. A disabled user cannot log into the system.
• the Rights tab is there to select what this user will be able to do. For now, just check the first checkbox corresponding to the root folder (this will automatically grant ALL the rights to this user)
• click on submit
• immediately, the user appears in the tree

From now on, you can exit XStudio, restart and log in with the new user credentials.

3.1.4 Submit a new absence
Each user can enter some absences. After submission, absences have the status new. Once the manager has set them as approved, the status changes accordingly.

A user’s absences are visible by selecting the Absences tab:

To enter a new absence:
• in the tree, select the user
• on the right panel, select the Absences tab
• press the create absence button
• enter the type, start and stop dates and optionally a comment, and submit
• immediately, the absence appears in the list
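The manual does not state how XStudio itself accounts for absence durations, but a common convention is to count only the working days between the start and stop dates. As a rough, purely illustrative sketch (weekends excluded, public holidays ignored, both dates inclusive):

```python
from datetime import date, timedelta

def working_days(start: date, stop: date) -> int:
    """Count Monday-to-Friday days in [start, stop], inclusive."""
    days = 0
    current = start
    while current <= stop:
        if current.weekday() < 5:  # 0 = Monday .. 4 = Friday
            days += 1
        current += timedelta(days=1)
    return days

# An absence from Monday 2024-01-01 to Sunday 2024-01-07
# spans 7 calendar days but only 5 working days.
print(working_days(date(2024, 1, 1), date(2024, 1, 7)))  # 5
```

If your organization counts absences differently (half days, local holidays), the rule above would need to be adapted.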

3.1.5 Edit an absence

If you are a manager (i.e. have the rights to approve absences), then you can edit an absence and change its status. To edit an absence:
• in the tree, select the user
• on the right panel, select the Absences tab
• select an absence
• press the edit absence button
• enter the type, start and stop dates and optionally a comment, and submit
• immediately, the absence appears in the list
3.1.6 Check the calendars
• in the tree, select the root folder or a user
• on the right panel, select the Calendar tab
• the calendar is displayed. Two modes are available:
o Monthly mode
o Half-yearly mode
• You can select the mode by clicking on the appropriate toggle button:
o Monthly:
o Half-yearly:
• You can also “move” the calendar by using the arrow keys.

Figure 2 - Calendar Monthly View


Figure 3 - Calendar Half-Yearly View

3.1.7 Changing the admin password

Note: the default admin user is configured with the password “password”. It is strongly recommended to change this password as soon as possible. To do so:
• in the tree, select the admin user (XQual/Administration/admin)
• on the right panel (Details sub-tab), enter the new password, confirm and submit
3.2 SUT Tree
Here is a typical SUT tree. It immediately shows several pieces of useful information:
• the total number of SUTs
• the number of SUTs in each company
• the number of SUTs in each folder
• the version of each SUT

Figure 4 - SUT Tree

A SUT is an abstract object representing the target we want to test. It can be a hardware device or a software component. The SUT must be detailed enough so that we can identify it easily. A SUT should be uniquely defined through its name and version.

It is possible to group/organize the SUTs by creating a structure of folders and sub-folders hosting the SUTs.

3.2.1 Create a SUT

Here is the process to create a new SUT:
• in the tree, select a folder (create one if necessary)
• on the right panel, click on the create sut button
• enter the name and the version of the SUT and submit
• immediately, the SUT appears in the tree
3.2.2 Create a SUT inheriting requirements from another SUT
Here is the process to create a new SUT inheriting requirements from another SUT:
• in the tree, select a folder (create one if necessary)
• on the right panel, click on the create sut button
• enter the name and the version of the SUT
• select the Requirements tab
• click the Preset import from SUT settings button
• select the reference SUT and submit
• click the Preset import from SUT button
• (opt.) select/unselect some requirements if required
• submit
• immediately, the SUT appears in the tree
3.3 Agent Tree
Here is a typical agent tree. It immediately shows several pieces of useful information:
• the total number of agents
• the number of agents in each folder
• the operating system of each agent

Figure 5 - Agent Tree

All hosts on the network that will have XStudio or XAgent installed MUST be included in the tree to be able to execute some tests.

3.3.1 Add my local host

Here is the process to add your local host to the system:
• in the tree, select a folder (create one if necessary)
• on the right panel, click on the create agent button
• enter the hostname or the static IP address of your local host and submit
• immediately, the agent appears in the tree

Note: what you enter in the Name field MUST be ping-able. To check this, open a Windows console and ping it.
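Besides pinging from a console, you can sanity-check the name programmatically: name resolution is a necessary (though not sufficient) condition for a host to be ping-able. The small helper below is an illustration, not part of XStudio; it only verifies that the hostname or IP address you intend to enter in the Name field resolves from your machine:

```python
import socket

def is_resolvable(host: str) -> bool:
    """Return True if the hostname or IP address resolves.

    Resolution is necessary (though not sufficient) for the
    host to be reachable by ping from this machine.
    """
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

print(is_resolvable("localhost"))  # True
```

If this returns False for the name you planned to use, an agent registered under that name will not be reachable either.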
3.3.2 Add all necessary hosts
Redo the same for all hosts that will have XStudio or XAgent installed.
3.4 Requirement Tree
Here is a typical requirement tree. It immediately shows several pieces of useful information:
• the total number of requirements
• the number of requirements in each category
• the number of requirements in each folder
• the status of each requirement (indicated by the color of the icon)
• the priority of each requirement (indicated by the column)

Figure 6 - Requirement Tree

Requirements are all the conditions the SUT should be compliant with. Generally, the requirements list is the first thing we write before working on detailed specifications. In a perfect world, the SUT should come with a list of requirements, but this may not be the case.

Entering the requirements of the SUT is an optional task. We do encourage completely defining the requirements though. This information is very useful for coverage reporting.

You can add in the requirement tree the complete description of each requirement, or you can decide to just point to the external requirements document(s).

3.4.1 Create a category

The first thing to do is to create a category. To do this:
• in the tree, select the root folder
• on the right panel, click on the create category button
• a dialog box is displayed
• enter the name of the category
• enter the description of the category. You can use the formatting tools (wiki-style) to format the text. Later, in reports, this text will appear correctly formatted.
• enter the launcher to be associated with that category: manual.jar indicates that all the tests included under that category will be executed using the manual test launcher.
• click on submit
• immediately, the category appears in the tree

Note: this category entity will be automatically shared in the Specification, Test and Defect trees.

3.4.2 Create a requirement

Now that you have a category with its internal organization defined, you’re ready to create some requirements. To do so:
• in the tree, select a folder (create one if necessary)
• on the right panel, click on the create requirement button
• a dialog box is displayed
• pick a requirement type
• if required, pick a requirement category
• enter the name of the requirement
• enter the description of the requirement. You can use the formatting tools (wiki-style) to format the text. Later, in reports, this text will appear correctly formatted.
• select the status of the requirement (at creation time, you can only choose New or Ack)
• select the priority of the requirement
• click on submit
• immediately, the requirement appears in the tree

3.4.3 Edit a requirement

The requirements will probably be edited several times by different people and will go through a complete workflow including three states:

Overlay Represents
New New requirement
Ack The requirement is being reviewed by the person who is supposed to sign it off
Approved The requirement has been approved

Table 4 - Requirement Status

To edit a requirement:
• in the tree, select a requirement
• on the right panel (details tab), edit the information you wish and submit

Note: Depending on the rights you’ve been granted, you may or may not be able to set the status to a specific state.
3.5 Specification Tree
Here is a typical specification tree. It immediately shows several pieces of useful information:
• the total number of specifications
• the number of specifications in each category
• the number of specifications in each folder
• the status of each specification (indicated by the color of the icon)
• the priority of each specification (indicated by the column)

Figure 7 - Specification Tree

Specifications are detailed and unitary descriptions of a specific behavior.

You can add in the specification tree the complete description of each specification, or you can decide to just point to the external specification document(s).

3.5.1 Create a specification

To create a specification:
• in the tree, select a folder (create one if necessary)
• on the right panel, click on the create specification button
• a dialog box is displayed
• check the formal checkbox if needed
• enter the name of the specification
• enter the description of the specification. You can use the formatting tools (wiki-style) to format the text. Later, in reports, this text will appear correctly formatted.
• select the status of the specification (at creation time, you can only choose New or Ack)
• select the priority of the specification
• click on submit
• immediately, the specification appears in the tree

3.5.2 Edit a specification

The specifications will probably be edited several times by different people and will go through a complete workflow including three states:

Overlay Represents
New New specification
Ack The specification is being reviewed by the person who is supposed to sign it off
Approved The specification has been approved

Table 5 - Specification Status

To edit a specification:
• in the tree, select a specification
• on the right panel (details tab), edit the information you wish and submit

Note: Depending on the rights you’ve been granted, you may or may not be able to set the status to a specific state.
3.6 Project Tree
Here is a typical project tree. It immediately shows several pieces of useful information:
• the total number of projects
• the total number of sprints
• the total number of tasks
• the number of sprints in each project
• the number of tasks in each project
• the status of each sprint (indicated by the color of the project icon)
• whether the tasks have already been assigned to a sprint or not (indicated by the color of the task icon: a grayed icon indicates that the task is already assigned to a sprint)

Figure 8 - Project Tree

A project must be created for any new “product” you want to deliver. A project is usually a development project but can also be more generic. Projects can be created for the development of the main product line, but also for automated testsuites and many other internal projects in your company.

A project is made of tasks. Most of the time, a task is the smallest entity that a developer can develop (a feature is made of several tasks).

The scrum methodology defines the notion of sprints. A sprint is the result of an iteration in a project. So, to deliver a product, you will deliver several intermediate releases, each corresponding to a sprint. A sprint is generally at most 2 or 3 weeks long. A number of tasks will be associated to each sprint, and at the end of the sprint a demo of all the features developed can be done.

3.6.1 Create a project

To create a project:
• in the tree, select a folder (create one if necessary)
• on the right panel, click on the create project button
• a dialog box is displayed
• enter the name of the project
• enter a focus ratio (this corresponds to the percentage of time that people will spend on effective work)
• enter the description of the project. You can use the formatting tools (wiki-style) to format the text. Later, in reports, this text will appear correctly formatted.
• click on submit
• immediately, the project appears in the tree
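The focus ratio above is the percentage of time people spend on effective work. The manual does not show how XStudio applies it internally, but the usual capacity arithmetic is: effective capacity = headcount × calendar working days × focus ratio. A hedged sketch of that assumed formula:

```python
def effective_man_days(headcount: int, working_days: int,
                       focus_ratio: float) -> float:
    """Effective capacity in man.days.

    focus_ratio is a fraction between 0 and 1; e.g. 0.8 means
    people spend 80% of their time on effective project work.
    Illustrative formula only, not XStudio's documented one.
    """
    return headcount * working_days * focus_ratio

# 3 people over a 10-working-day sprint at 80% focus:
print(effective_man_days(3, 10, 0.8))  # 24.0
```

Under this reading, a team never delivers its full calendar capacity; the focus ratio discounts meetings, support and other overhead.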

3.6.2 Create a task


To create a task:
• in the tree, select a folder belonging to a project (create one if necessary)
• on the right panel, click on the create task button
• a dialog box is displayed
• enter the name of the task
• enter the description of the task. You can use the formatting tools (wiki-style) to format the
text. Later, in reports, this text will appear correctly formatted
• enter the priority of this task
• enter the estimated effort in man.days (people working 100% of their time)
• click on submit
• immediately, the task appears in the tree

3.6.3 Create a sprint
To create a sprint:
• in the tree, select a project
• on the right panel, click on the create sprint button
• a dialog box is displayed
• enter the name of the sprint
• enter the description of the sprint. You can use the formatting tools (wiki-style) to format
the text. Later, in reports, this text will appear correctly formatted.
• select the status of the sprint (at creation time, you can only choose Idle or Running)
• select the start and stop dates for this sprint
• click on submit
• immediately, the sprint appears in the tree

3.6.4 Edit a sprint
The sprints will probably be edited several times by different people and will go through a complete
workflow including three states:

Overlay Represents
Idle Idle sprint
Running The sprint is currently running
Finished The sprint is finished

Table 6 - Sprint Status

To edit a sprint:
• In the tree, select a sprint
• on the right panel (Details tab), edit the information you wish and submit
• the duration (in effective man.days) is automatically updated

Note: Depending on the rights you’ve been granted, you may or may not be able to set the status to
a specific state.

3.6.5 Allocate some resources to a sprint
All the tasks associated to a sprint will be performed by a pool of resources. These resources are
users (taken from the user tree). To allocate some resources to a sprint:
• in the tree, select the sprint
• on the right panel, select the Resources tab
• un-toggle the select filter button to display the complete user tree
• check all the users that need to be allocated to the current sprint and edit the
percentage of availability of each resource
• (opt.) re-toggle the select filter button to display only the selected users
• click on submit
Note: You can check if some of the resources are overloaded by checking their calendars from the
user tree.

3.6.6 Associate some tasks to a sprint
A sprint will contain some tasks. To associate some tasks to one sprint:
• in the tree, select the sprint
• on the right panel, select the Backlog tab (this is the Scrum terminology)
• un-toggle the select filter button to display the complete task tree
• check all the tasks that need to be associated to the current sprint
• (opt.) re-toggle the select filter button to display only the selected tasks
• click on submit
3.6.7 Daily update progress of the tasks of a sprint
On a daily basis, you can update the progress of the tasks in a sprint:
• in the tree, select the sprint
• on the right panel, select the Backlog tab
• move the progress bar of each task. While you’re moving the sliders:
o the percentage of progress is updated in the next column
o the equivalent in days is updated in the next column
o the last column indicates the estimated number of days for this task
• click on submit

3.6.8 See the velocity charts/check the status of the sprint
• in the tree, select the sprint
• on the right panel, select the Velocity tab
• some graphs are displayed showing several useful pieces of information:
The top graph shows:
• the theoretical curve of progress (the dashed red line)
• the cumulated breakdown of all the tasks in this sprint

The bottom graph shows:
• the theoretical curve of progress (the dashed red line)
• the two curves representing the ideal (top blue line) and real (bottom blue line) progress,
representing what can theoretically be done with the resources allocated
• the status of the project:
o Red area means that the project is at risk
o Green means that the project is in good shape
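As an illustration of what these curves represent (a conceptual sketch, not XStudio's actual implementation), the theoretical line and the remaining effort can be computed like this:

```python
# Illustrative burndown computation: the theoretical (ideal) line
# decreases linearly from the total estimated effort to zero over
# the sprint duration.
def ideal_burndown(total_effort, sprint_days):
    """Remaining effort (man.days) at the start of each day, ideally."""
    step = total_effort / sprint_days
    return [total_effort - step * day for day in range(sprint_days + 1)]

def real_burndown(task_estimates, daily_progress):
    """Remaining effort per day, given progress percentages per task.

    daily_progress -- list of per-day lists of progress (0-100) per task
    """
    remaining = []
    for day in daily_progress:
        left = sum(est * (1 - pct / 100.0)
                   for est, pct in zip(task_estimates, day))
        remaining.append(left)
    return remaining

# Example: 10 man.days of work over a 5-day sprint
print(ideal_burndown(10, 5))  # [10.0, 8.0, 6.0, 4.0, 2.0, 0.0]
```

If the real curve stays above the ideal line, the sprint is late (red area); below it, the sprint is ahead (green area).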
3.7 Test Tree
Here is a typical test tree. It immediately shows a number of useful pieces of information:
• the total number of tests and testcases
• the number of tests and testcases in each category
• the number of tests and testcases in each folder
Figure 9 - Test Tree
This screen is a bit different from the others since the left panel includes 2 separate areas:
• the usual tree including all the tests that we have in the system
• a sub-tree at the bottom that lists the testcases corresponding to the currently selected
test

Tests can be arranged by placing them in specific folders. For instance, the tests can be organized
the same way as the specifications. If you’re testing a software API, a common practice is to have
one test per function/method and to group these functions together. You can then extend your test
suite by adding stress tests, negative tests, etc., which will probably need additional folders.

Tests and testcases are obviously the most important part of the Data Model definition. The
normal process is to start from the specifications and create as many tests and testcases as
necessary to check that all the specifications are properly implemented.
3.7.1 Create a test
To create a test:
• in the tree, select a folder (create one if necessary)
• on the right panel, click on the create test button
• a dialog box including three tabs is displayed
• fill the Details tab with the name and priority of the test (leave the canonical path blank)
• fill the Testplan tab with the prerequisites and the general description of the test. You can
use the formatting tools (wiki-style) to format the text. Later, in reports, this text will appear
correctly formatted.
• pick one user in the Author tab who will be registered as the author of the test
• click on submit
• immediately, the test appears in the tree

3.7.2 Associate a test to specifications
A test is probably coming from at least one specification. To associate a test to one or several
specifications:
• in the tree, select the test
• on the right panel, select the Specifications tab
• un-toggle the select filter button to display the complete specification tree
• check all the specifications that need to be associated to the current test
• (opt.) re-toggle the select filter button to display only the selected specifications
• click on submit

3.7.3 Create a testcase
We already mentioned that, for instance, a test could verify a specific function of an API. But there
are a lot of things to check to validate one single function. You may want to test all combinations
of parameters and check that the result is correct.

One testcase must be able to check one specific function in some particular conditions. The sum of
all the testcases makes one test. For a test to succeed, all its testcases must succeed.

To create a testcase:
• in the tree, select a test
• on the right panel, click on the create testcase button
• a dialog box including two tabs is displayed
• fill the Details tab with the index (defining the order in which the testcases will be executed
within a test), the name and the general description of the testcase. You can use the
formatting tools (wiki-style) to format the text. Later, in reports, this text will appear
correctly formatted. Do not forget to check the Implemented checkbox: non-implemented
tests are NOT executed when running a campaign. In the case of manual tests, always
check the Implemented checkbox.
• select the Testplan tab and define all the steps and checks that will be needed in this
testcase:
o add a step:
 in the tree, select the root folder
 click on the create step button
 enter the description of the step and submit
o add parameters (opt.):
 in the tree, select the parameters node
 click on the create parameter button
 enter the description of the parameter and submit
 repeat the operation if you need to specify more parameters
o add checks (opt.):
 in the tree, select the checks node
 click on one of the create boolean operator buttons
 click on the new operator and click on the create check button
 enter the description of the check and submit
 repeat the operation if you need to specify more checks (you can mix as
many different boolean operators as you want)
o repeat the operation for every step and submit
• click on submit
• immediately, the testcase appears in the sub-tree on the left panel
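To clarify how checks combine under boolean operators, here is a hypothetical sketch (the node representation below is invented for illustration, not XStudio's internals):

```python
# Hypothetical sketch of a testcase's check tree: elementary checks are
# grouped under boolean operators (AND/OR) that can be nested and mixed.
def evaluate(node):
    """Evaluate a check tree.

    A node is either a bool (the result of one elementary check) or a
    tuple ("AND"/"OR", [children]).
    """
    if isinstance(node, bool):
        return node
    op, children = node
    results = [evaluate(child) for child in children]
    return all(results) if op == "AND" else any(results)

# A testcase succeeds only if its whole check tree evaluates to True,
# and a test succeeds only if all its testcases succeed.
checks = ("AND", [True, ("OR", [False, True])])
print(evaluate(checks))  # True
```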

3.7.4 Modify the testplan of a testcase

To modify the testplan of a testcase:
• in the tree, select a test
• in the sub-tree, select a testcase
• on the right panel (Testplan tab), select the node in the tree you wish to modify
• click on the edit a node button
• apply the changes and submit
3.8 Campaign Tree
Here is a typical campaign tree. It immediately shows a number of useful pieces of information:
• the total number of campaigns and campaign sessions
• the number of campaigns and campaign sessions in each category
• the number of campaigns and campaign sessions in each folder
• the status (stopped, paused, running, idle) of each campaign session (indicated by the
overlay on the icon)
• the start and stop date and time of each campaign session
Figure 10 - Campaign Tree
Once all the tests have been defined and implemented, you have to run them. To do so, you’ll first
have to gather all the tests you wish to run in a campaign. For instance, it may be of interest to
define a campaign including all the tests of a specific category (or just a subset). A campaign is by
definition just an ordered list of tests.

At this stage, you would be able to run a campaign, but what happens if you run a campaign
several times? Of course, you want to be able to retrieve the results from each run of a campaign.
This is where the Campaign Session comes in.

You can create as many campaign sessions from a campaign as you want. This allows you to
archive independently all the results and reports of each execution of the tests.
3.8.1 Create a campaign
To create a campaign:
• in the tree, select a folder (create one if necessary)
• on the right panel, click on the create campaign icon
• a dialog box including two tabs is displayed
• fill the Details tab with the name of the campaign
• in the Content tab, select all the tests you want to be part of this campaign
• click on submit
• immediately, the campaign appears in the tree

3.8.2 Order the tests in the campaign
• in the tree, select the campaign
• on the right panel, select the Order tab
• select one or several tests in the list and use the move buttons to position the
test(s) wherever you want in the list
• click on submit
3.8.3 Create a campaign session
To create a campaign session:
• in the tree, select a campaign
• on the right panel, click on the create campaign session button
• a dialog box including seven tabs is displayed
• fill the Details tab with the name of the session
• (opt.) select the Test operator tab and pick the user who will run the session
• select the Agent tab and pick the agent on which the session will be run. If you wish to run
the tests on the local host, leave the default selection (that should already match the local
host)
• select the SUT tab and pick the SUT on which the session will be run
• select the Configuration tab and pick the configuration you wish for each category
involved in this campaign session. Once a campaign session is created, it is impossible (on
purpose) to change the associated configurations. If no configuration is available:
o click on the create configuration button
o a dialog box is displayed
o enter the name of the configuration
o fill in all the forms displayed and submit
o the launchers (xml and jar files) needed for this campaign session MUST be
accessible in the bin/launchers folder
• (opt.) if needed, modify the dynamic attributes values in the Attributes tab
• (opt.) select the CC Emails tab and check some users to receive a notification when the
campaign session will be completed
• click on submit
• immediately, the campaign session appears in the tree in idle state

3.8.4 Run a campaign session
To run a campaign session:
• in the tree, select a campaign session
• on the right panel, click on the start button
• immediately, the Test campaign details screen appears and will display the testing results
in real time

• If the campaign includes tests to be executed manually (the tests are part of one or several
categories where you choose manual.jar or simple_manual.jar as launcher), then you will
get additional dialog boxes such as:
3.8.5 See the execution details
To see the details of the campaign session execution:
• in the tree, select a campaign session
• on the right panel, select the Content tab
• the screen is split into three different areas:
o a test tree showing the tests included in the campaign with their results (success,
failed, unknown). It shows useful information:
 the total number of tests that succeeded or failed
 the number of tests that succeeded or failed in each category
 the number of tests that succeeded or failed in each folder
o a testcase sub-tree showing the result of each testcase: success, failed, not
executed, or relative (some tests – e.g. performance tests – may just return some
values that will need to be analyzed by an operator)
o a message area showing the details of execution of one specific testcase

The process to get all the details of an execution is the following:
• click on a test
• the testcase sub-tree is updated, showing the associated testcase results
• click on a testcase
• the message area shows the details of each step of the testcase execution

3.8.6 Get the results
To see the results of the campaign session execution:
• in the tree, select a campaign session
• on the right panel, select the Results tab

The screen is split in two columns:
• the Tests column gives the statistics/results only based on the tests results
• the Testcases column gives the statistics/results only based on the testcases results

Each column immediately shows a number of useful pieces of information:
• in the header tables:
o the percentage of success, failure etc. of tests/testcases
o the coverage of the execution (based on the number of tests/testcases that were
not executed)
• in the pie charts:
o the number of success, failure etc. of tests/testcases
o the percentage of success and failure (after removal of the unknown, relative or not
executed tests/testcases)
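As an illustration of that last computation (the general idea, not necessarily XStudio's exact formula), the unknown, relative and not-executed results are discarded before the percentages are derived:

```python
# Illustrative sketch: success/failure percentages computed after
# discarding the unknown, relative and not-executed results.
def success_failure_rates(results):
    """results -- list of strings among:
    'success', 'failed', 'unknown', 'relative', 'not executed'.
    Returns (success %, failure %) over the conclusive results only."""
    conclusive = [r for r in results if r in ("success", "failed")]
    if not conclusive:
        return (0.0, 0.0)
    n = len(conclusive)
    ok = conclusive.count("success")
    return (100.0 * ok / n, 100.0 * (n - ok) / n)

results = ["success", "failed", "success", "unknown", "not executed"]
pct_ok, pct_ko = success_failure_rates(results)
print(round(pct_ok, 1), round(pct_ko, 1))  # 66.7 33.3
```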

3.8.7 Check the progression of a campaign
A Statistics tab is present on any campaign. This tab shows the progression/regression of the
campaign. The information is taken from all the executed sessions belonging to the campaign. All
the results are put in a graph so that the evolution (progression or regression) over time is clearly
visible. The last session's results are also displayed. The tab is split into 2 tabbed panes: one that
shows the evolution of the test results and another one going deeper, at the testcase level:
3.9 Defect Tree
Here is a typical defect tree. It immediately shows a number of useful pieces of information:
• the total number of defects
• the number of defects in each category
• the number of defects in each folder
• the status of each defect (indicated by the color of the icon)
• the severity of each defect (indicated by the column)
• the priority of each defect (indicated by the column)
Figure 11 - Defect Tree
Executing some tests (running some campaign sessions) allows you to generate reports. It’s good
to have a static view of what is working and what is not, but it would be even better to link the tests
that failed to actual defects. Indeed, several failing testcases may be due to one single defect.

At this point, a report analysis must be done by the test operator. This analysis should lead to
associating all the failed testcases to some defects.

3.9.1 Create a defect
To create a defect:
• in the tree, select a folder (create one if necessary)
• on the right panel, click on the create defect button
• a dialog box including three tabs is displayed
• fill the Details tab with the name, description, steps to reproduce, reproducibility, platform,
operating system, status, severity and priority
• pick one user in the Assigned to tab who will be registered as the one assigned to resolve
the defect
• in the Found in tab, check all the SUTs on which this defect can be observed
• click on submit
• immediately, the defect appears in the tree

3.9.2 Edit a defect
The defect will probably be edited several times by different people and will go through a complete
workflow including five states:

Overlay Represents
New New defect
Assigned The defect has been assigned to a person to resolve it
Ack The defect is being investigated
Resolved The defect has been declared as resolved
Closed The fix has been verified and the defect has been closed

Table 7 - Defect Status

To edit a defect:
• In the tree, select a defect
• on the right panel (details tab), edit the information you wish and submit

Some fields will be accessible only when a certain status is reached:

Assigned:
Correction target date

Ack:
Correction target version
Completion %

Resolved/Closed:
Correction type
Correction description
Correction patches

Note: For better tracking purposes, when setting a defect’s status to Resolved, you should also pick
one SUT in the Fixed in tab.

3.9.3 Link a failed test to a defect
To link a failed test to a defect:
• switch to the Campaigns tab on the left panel
• in the tree, select a campaign
• on the right panel, select the Content tab
• select a test with status failed or unknown
• click on the Link to a defect button
• a dialog box is displayed
• un-toggle the select filter button to display the complete defect tree
• check all the defects that need to be associated to the test execution
• (opt.) re-toggle the select filter button to display only the selected defects
• click on submit
4. Getting more into the details
4.1 Localization
XStudio is entirely localized. There are currently 4 languages supported:
• English
• French
• Italian (partially)
• Spanish (partially)

To have XStudio running in a specific language, you just need to set your profile with one of those
four languages:

At the next login, the application will show up appropriately localized:

You can also override this setting during login by forcing a specific language:

4.2 Internationalization
A user is given a language (localization) which is used to display the application with a specific
language but is also associated with a location (internationalization) which is used to know which
public holidays and week-ends this user will benefit from.

Each country can be easily configured: public holidays can be added, edited and deleted. In
addition, each country is associated with some week-end settings. Most of the countries in the
world use Saturday and Sunday for the week-end, but some others (such as Israel, Qatar, Algeria,
etc.) use Friday and Saturday, and still others (such as Saudi Arabia) use Thursday and Friday.

The default settings are supposed to be correct, but the administrator of the system can customize
them if necessary. These settings are accessible through the Settings menu entry.

All the calendars are affected by these settings, as each user may have different week-ends.

Absences are also included in the calendars, so you have a complete picture of the projects and
the staff workload; statuses are then easier to figure out.
Week-ends and holidays are easily identifiable as they are shown in black and grey.

Note that the calendar tree is still expandable as usual. This greatly facilitates the reading, e.g.
when you want to know the details about why a user is overloaded in a certain time frame.

A legend (including some gradient colors for the workload) has been added to help the reading.

4.3 Change Tracking
XStudio allows tracking any change made to the database. This feature is particularly useful to
track modifications that occurred on:
• requirements
• specifications
• tests/testcases
• defects

To check the history of an element, click on it and select the Changes tab on the right panel. The
panel shows a table gathering all the dated changes. Here are, for instance, changes you could see
on a defect:

Note: You can edit the details of a user directly by clicking on his name in the changes table.
4.4 Testing Coverage/Traceability Matrix
One of the major benefits of using a test management tool is the ability to track the coverage of
your testing. As you’ve already seen in previous chapters, requirements are linked to specifications
and specifications are linked to tests. From this data, it is possible to generate some coverage
metrics.

These metrics are computed per category and are retrievable from the requirements,
specifications and tests.
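As an illustration of the idea behind these metrics (the data structure below is hypothetical, not XStudio's internals), an item is considered covered as soon as at least one element of the next level is linked to it:

```python
# Illustrative coverage metric: an item (e.g. a requirement) is covered
# when at least one element of the next level (e.g. a specification)
# is linked to it.
def coverage(links):
    """links -- dict mapping each requirement id to the list of
    specification ids linked to it.
    Returns (percentage, covered ids, not-covered ids)."""
    covered = sorted(r for r, specs in links.items() if specs)
    not_covered = sorted(r for r, specs in links.items() if not specs)
    pct = 100.0 * len(covered) / len(links) if links else 0.0
    return pct, covered, not_covered

links = {"REQ-1": ["SPEC-1"], "REQ-2": [],
         "REQ-3": ["SPEC-2", "SPEC-3"], "REQ-4": []}
pct, covered, missing = coverage(links)
print(pct, covered, missing)  # 50.0 ['REQ-1', 'REQ-3'] ['REQ-2', 'REQ-4']
```

The same computation applies one level down, with specifications linked to tests.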

4.4.1 Global requirements traceability matrix
To get the global requirements traceability matrix:
• switch to the Requirements tab on the left panel
• in the tree, select the root folder
• on the right panel, select the Traceability Matrix tab
• the requirement tree appears including, in the right column, all the specifications covering
each requirement

This traceability matrix is also present in the requirement book (that can be generated from
XStudio).

4.4.2 Global specifications traceability matrix
To get the global specifications traceability matrix:
• switch to the Specifications tab on the left panel
• in the tree, select the root folder
• on the right panel, select the Traceability Matrix tab
• the specification tree appears including, in the right column, all the tests covering each
specification
This traceability matrix is also present in the specification book (that can be generated from
XStudio).

4.4.3 Detailed requirements coverage
To get requirements coverage metrics:
• switch to the Requirements tab on the left panel
• in the tree, select a category or a folder
• on the right panel, select the Coverage tab
• two tabs By specifications and By tests are displayed. Each shows useful information
such as:
o percentage of the coverage
o the list of requirements covered (at least partly) and not covered by respectively
some specifications or tests

4.4.4 Detailed specifications coverage
To get specifications coverage metrics:
• switch to the Specifications tab on the left panel
• in the tree, select a category or a folder
• on the right panel, select the Coverage tab
• the panel shows useful information such as:
o percentage of the coverage
o the list of specifications covered (at least partly) and not covered by some tests

4.4.5 Detailed tests coverage
To get tests coverage metrics:
• switch to the Tests tab on the left panel
• in the tree, select a category or a folder
• on the right panel, select the Coverage tab
• the panel shows useful information such as:
o percentage of the coverage
o the list of tests covering (at least one) and not covering some specifications
4.4.6 Detailed campaign coverage
Since a test campaign is by definition a group of tests, it is possible to get some coverage metrics
as well.

To get campaign coverage metrics:
• switch to the Campaigns tab on the left panel
• in the tree, select a campaign
• on the right panel, select the Coverage tab
• the panel shows useful information such as:
o percentage of the coverage
o the list of specifications fully, partially or not covered by the campaign
o the list of requirements fully, partially or not covered by the campaign
4.5 Scheduling test execution
When some tests are aimed at being executed completely automatically (using a specific
automatic launcher), you can schedule a campaign to be executed on a regular basis. To do this,
you need to create a Schedule. Then, at the right time, a campaign session will be automatically
created and executed by XStudio.

To create a schedule:
• in the tree, select a campaign
• on the right panel, click on the create schedule button
• a dialog box including seven tabs is displayed
• fill the Details tab with the name and the description of the schedule
• in the Scheduling tab, select the days and the time when you want the sessions to be
created and executed. Don’t forget to check the Enabled checkbox. A disabled schedule
will not generate any test execution
• (opt.) select the Test operator tab and pick the user who will (virtually) run the sessions
• select the Agent tab and pick the agent on which the sessions will be run. This agent
needs to have XAgent installed and running
• select the SUT tab and pick the SUT on which the sessions will be run
• select the Configuration tab and pick the configuration you wish for each category
involved in this schedule. Once a schedule is created, it is impossible (on purpose) to
change the associated configurations. If no configuration is available, you will need to
create one
• (opt.) select the CC Emails tab and check some users to receive a notification when the
campaign sessions are completed
• click on submit
• immediately, the schedule appears in the tree

When the time comes for an agent to execute the tests, the agent will create the campaign
session and then execute it. From any station hosting XStudio, after a refresh, the campaign
session will appear under the schedule node in the tree.

4.6 Tracking the Test Implementation
As already seen, when adding a test in the system, you can specify whether the test is implemented or
not. This parameter is tracked so that some reports can be generated. This helps for example in
knowing exactly what the status of the automation development is.

To control the progress of the test implementations:
• switch to the Tests tab on the left panel
• in the tree, select a category
• on the right panel, select the Statistics tab
The panel is split in two different areas:
• a table displaying:
o for each date, the number of tests and testcases implemented
o for each date, the percentage of tests and testcases implemented
• a series of graphs displaying:
o the evolution of the number of tests and testcases implemented
o the evolution of the percentage of tests and testcases implemented

4.7 Defects Reporting
A defect tracking system is good only if it provides a simple and efficient way of retrieving
information about the defects. XStudio generates extensive reporting on defects.

To get those reports:
• switch to the Defects tab on the left panel
• in the tree, select a category
• on the right panel, select the Statistics tab
• choose a range of dates using the date pickers
Then a huge number of reports can be retrieved.
• on the active defects (New, Assigned, Ack):
o per user (Assigned to or Reported by)
 Status, Severity or Priority trends
o major defects trends (Blocking, Major)
o minor defects trends (Minor, Cosmetic, Enhancement)
o high priority defects trends (High)
o low priority trends (Normal, Low)
• on the passive defects (Resolved, Closed)
o idem
• on all the defects
o idem
• submission/resolution rates trends
4.7.1 Per-user reports
To get per-user reports:
• select the tab corresponding to the desired group of defects:
o Active Defects (New, Assigned and Ack)
o Passive Defects (Resolved and Closed)
o All Defects
• select if you want some data about defects Assigned to or Reported by a specific user
• select a name in the user list
• select which kind of information you’re interested in:
o Status
o Severity
o Priority
• The report will display the corresponding trends and an additional pie chart representing
the current status

4.7.2 Per-severity reports
To get per-severity reports:
• select the tab corresponding to the desired group of defects:
o Active Defects (New, Assigned and Ack)
o Passive Defects (Resolved and Closed)
o All Defects
• select if you want some data about:
o Major defects (Blocking, Major)
o Minor defects (Minor, Cosmetic, Enhancement)
• The report will display the corresponding trends and an additional pie chart representing
the current status
4.7.3 Per-priority reports
To get per-priority reports:
• select the tab corresponding to the desired group of defects:
o Active Defects (New, Assigned and Ack)
o Passive Defects (Resolved and Closed)
o All Defects
• select if you want some data about
o High Priority defects (High)
o Low Priority defects (Normal and Low)
• The report will display the corresponding trends and an additional pie chart representing
the current status
4.7.4 Submission/Resolution rates
To get submission/resolution rates reports:
• select the Resolution Rates tab
The top part of the panel shows the progression of the number of Active and Passive defects, as
well as the metrics as of now and as of the last record.
The bottom part of the panel shows the submission and resolution rates.
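The notions of submission and resolution rates can be sketched as follows (illustrative only; XStudio's exact definitions may differ):

```python
# Illustrative sketch: submission rate = defects submitted per day,
# resolution rate = defects resolved per day, over a date range.
from datetime import date

def rates(submitted, resolved, start, stop):
    """submitted/resolved -- lists of dates; start/stop -- date range."""
    days = (stop - start).days + 1
    sub = sum(1 for d in submitted if start <= d <= stop)
    res = sum(1 for d in resolved if start <= d <= stop)
    return sub / days, res / days

submitted = [date(2024, 1, 1), date(2024, 1, 2), date(2024, 1, 4)]
resolved = [date(2024, 1, 3)]
print(rates(submitted, resolved, date(2024, 1, 1), date(2024, 1, 5)))  # (0.6, 0.2)
```

A submission rate that stays durably above the resolution rate means the backlog of active defects is growing.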

4.8 Attachments
You have the ability to attach files (whatever their format) to some entities managed by XStudio.
This is an extremely powerful way of centralizing/sharing documents. Attachments can be added
to the following entities:

Requirements tree:
• category
• folder

Specifications tree:
• category
• folder

Test tree:
• category
• folder
• test
• testcase

Campaign tree:
• campaign session
• testcase execution (only uploadable programmatically by the launcher/test)

Defect tree:
• defect

The GUI is extremely simple:

Whenever possible, two panels are displayed: one for the attachments directly attached to the
entity and another one for the attachments inherited from the ancestor nodes. It is possible to
move directly to one of these parents by clicking on the anchors.
4.8.1 Add an attachment
To add an attachment:
• click on the Create attachment button
• a dialog box is displayed
• pick a file and submit

4.8.2 Download/Open an attachment
To download/open an attachment:
• select an item in the list
• click on the Download attachment / Open attachment button

4.9 Generate documentation
4.9.1 Requirements books
It may be interesting to export or to print a list of requirements. With XStudio you can export
partial or complete requirement books.

To do so:
• switch to the Requirements tab on the left panel
• in the tree, select a category or a folder
• click on the Create report button
• first, select the destination folder for the book
• a dialog box is displayed
• select the extension of the book (HTML or XML). If you choose to generate the book in
HTML, you can select one type of book. Each type corresponds to one specific XSLT
transform (one .xslt file), so you can add your own if you wish (see later in this document
to know how to do that). By default, two XSLTs are provided along with XStudio
(RawResult - the more detailed - and TraceabilityMatrix):
• provid the filename for the book
provide
• submit
• after it has been generated, the book is automatically opened:

4.9.2 Specifications books
To generate specification books, follow the same instructions as those for generating
requirement books (but from the Specifications tab on the left panel).
4.9.3 Projects books
To generate project books, follow the same instructions as those for generating requirement
books (but from the Projects tab on the left panel).

4.9.4 Testplans
To generate testplans, follow the same instructions as those for generating requirement books
(but from the Tests tab on the left panel).
4.9.5 Test reports
To generate test reports, follow the same instructions as those for generating requirement books
(but from the Campaigns tab on the left panel).
4.10 Customizing the documents
All the documents that you can generate from XStudio are customizable.

4.10.1 Changing the logo


All the documents generated from XStudio include a logo at the top. This logo can
be changed very simply:
• copy your logo picture file (e.g. gif, jpg, png etc.) to the following folders:
o requirements
o specifications
o testplans
o reports
• from the menu, select Settings > General
• in the company logo text field, enter the name of the picture file and submit
• from now on, all the documents that you generate will have your own logo in the header:

4.10.2 Customizing the reports
All documents that are generated by XStudio are first internally generated in XML, then possibly
transformed into a different format using specific XSLT transforms:

For instance, let’s imagine you want to generate a custom requirement book. You can create your
own XSLT file. For example, you can take the file requirementHTML_RawResult.xslt as a basis
and modify it as much as you want, rename it requirementHTML_My_Own_Report1.xslt and
ensure it is located along with the other .xslt files (in the export/xsl folder - either locally if using
only a standalone install, or on the Apache/Tomcat server to share this report with everybody
connecting to this server).

From now on, when you try to generate a requirements book, the type My_Own_Report1 will be
available in the combo box:

The internal process is then the following:
• several SQL requests are made to the database to get all the necessary information
• a structured XML document including all the required data is generated
• if the user chose XML, the XML is saved on disk in the requirements folder with the
correct name
• if the user chose HTML, the XML document is transformed using the XSLT corresponding
to the selected type.
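For the HTML branch, the transformation step can be sketched with the standard Java XSLT API (javax.xml.transform). This is only an illustration of the mechanism, not XStudio’s actual code; the tiny inline stylesheet in main is hypothetical:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class BookTransform {
    // Applies an XSLT transform (e.g. the content of requirementHTML_RawResult.xslt)
    // to the internally generated XML document and returns the resulting HTML.
    public static String transform(String xml, String xslt) throws Exception {
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter html = new StringWriter();
        transformer.transform(new StreamSource(new StringReader(xml)),
                              new StreamResult(html));
        return html.toString();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical minimal stylesheet: renders the title of a requirements book
        String xslt = "<?xml version='1.0'?>"
                + "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
                + "<xsl:template match='/'><html><body><h1>"
                + "<xsl:value-of select='/book/title'/></h1></body></html></xsl:template>"
                + "</xsl:stylesheet>";
        System.out.println(transform("<book><title>Requirements</title></book>", xslt));
    }
}
```

Your own .xslt files are applied exactly this way: the generated XML is the input, the selected type picks the stylesheet.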

The only constraint is to use a specific naming convention: <type>HTML_<name>.xslt

where:
• <type> can be:
o "requirement"
o "specification"
o "task"
o "testplan"
o "report"
o "defect"
• <name> can be any name including alphanumeric characters plus underscore and dash
signs.
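The convention above maps directly to a regular expression, which can be handy to sanity-check a custom file name before deploying it (a small sketch, not part of XStudio):

```java
import java.util.regex.Pattern;

public class XsltNaming {
    // Naming convention from this section: <type>HTML_<name>.xslt where <name>
    // may contain alphanumerics, underscores and dashes.
    private static final Pattern CONVENTION = Pattern.compile(
        "(requirement|specification|task|testplan|report|defect)HTML_[A-Za-z0-9_-]+\\.xslt");

    public static boolean isValid(String filename) {
        return CONVENTION.matcher(filename).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("requirementHTML_My_Own_Report1.xslt")); // true
        System.out.println(isValid("requirement_My_Own_Report1.xslt"));     // false: missing "HTML_"
    }
}
```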

4.11 Searching
4.11.1 By name
All the trees in XStudio include an indexing system allowing you to search for an entity in the tree
by its name in real time. To try it yourself, just start entering some text in the Search field of the
tree.
To automatically select a searched item in the tree, you can move within the suggestion list using
the arrow keys and validate by pressing Enter, or just click on the item in the list.

4.11.2 By Id
The data model gives a unique identifier to each entity managed by XStudio. The Ids are of the
form:

Template Represents
R_<id> Requirement
S_<id> Specification
T_<id> Test
TC_<id> Testcase
C_<id> Campaign session
D_<id> Defect
TA_<id> Task

Table 8 - Unique Identifier Templates

Ids are also available from XStudio’s GUI (in the Details tab you have an Identifier field that
corresponds to <id> in the templates).

To search for an entity from its Id, just type the Id in the Search Id field and
validate.
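Resolving such an Id back to its entity type is a simple prefix lookup, as this sketch illustrates (an illustration of Table 8, not XStudio code):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class EntityId {
    // Prefixes from Table 8 - Unique Identifier Templates
    private static final Map<String, String> TYPES = Map.of(
        "R", "Requirement", "S", "Specification", "T", "Test",
        "TC", "Testcase", "C", "Campaign session", "D", "Defect", "TA", "Task");

    // Two-letter prefixes must be tried before the one-letter ones
    private static final Pattern ID = Pattern.compile("(TC|TA|R|S|T|C|D)_(\\d+)");

    // Returns e.g. "Testcase #42" for "TC_42", or null for an invalid Id
    public static String describe(String id) {
        Matcher m = ID.matcher(id);
        if (!m.matches()) return null;
        return TYPES.get(m.group(1)) + " #" + m.group(2);
    }

    public static void main(String[] args) {
        System.out.println(describe("TC_42")); // Testcase #42
    }
}
```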

4.11.3 Advanced and plain text search


Searching for items such as requirements, specifications, tasks, tests or defects based on a
combination of some very specific criteria can be not only very useful but extremely important to
control and guarantee the quality of the products.

4.11.3.1 Requirements
Procedure:
• in the requirement tree, select a category
• on the right panel, select the Search tab
• check all the checkboxes (Status and Priority) that match your search criteria
• (opt.) type some text in the Text to search field and press Enter
Note: The results list is updated in real time and you can reach one particular requirement by just
clicking on it. The results table can be ordered by clicking on the column headers.

4.11.3.2 Specifications
Procedure:
• in the specification tree, select a category
• on the right panel, select the Search tab
• check all the checkboxes (Status and Priority) that match your search criteria
• (opt.) type some text in the Text to search field and press Enter

4.11.3.3 Tasks
Procedure:
• in the project tree, select the root folder
• on the right panel, select the Search tab
• type some text in the Text to search field and press Enter

4.11.3.4 Tests
Procedure:
• in the test tree, select the root folder
• on the right panel, select the Search tab
• type some text in the Text to search field and press Enter

4.11.3.5 Defects
Searching defects based on a combination of some very specific criteria is extremely important to
control the quality of the product.
To do so:
• in the defect tree, select a category
• on the right panel, select the Search tab
• check all the checkboxes (Reported by, Assigned to, Found in, Fixed in, Status,
Severity and Priority) that match your search criteria
• (opt.) type some text in the Text to search field and press Enter

4.12 Importing data


4.12.1 From CSV
If you are migrating from manual test management - using a basic spreadsheet application - to
XStudio, you may want to import all your tests and testcases from CSV text files (indeed, all
spreadsheet programs provide an easy way to export to CSV). The source CSV file must use the
semicolon (‘;’) delimiter.

4.12.1.1 Tests and testcases

To import tests and testcases from a CSV file:
• from the menu, select File > Import from CSV
• a dialog box is displayed
• pick the Tests and Testcases (without testplan) option
• click on the Open button and select the file you wish to import
• the raw data area displays the content of the file
• click on Submit
After the import task has completed, the tests and testcases appear immediately in the test tree.
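Reading such a file boils down to splitting each line on the ‘;’ delimiter. The sketch below only illustrates that delimiter rule; the exact column layout XStudio expects is not specified in this section, so the sample content is hypothetical:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class SemicolonCsv {
    // Splits a semicolon-delimited CSV stream into rows of fields.
    // The column layout is tool-specific and NOT defined here.
    public static List<String[]> read(Reader source) throws IOException {
        List<String[]> rows = new ArrayList<>();
        BufferedReader reader = new BufferedReader(source);
        String line;
        while ((line = reader.readLine()) != null) {
            if (!line.isEmpty()) rows.add(line.split(";", -1)); // -1 keeps empty trailing fields
        }
        return rows;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical two-column content: a test name and a testcase name
        List<String[]> rows = read(new StringReader("login test;valid password\nlogin test;wrong password"));
        System.out.println(rows.size() + " rows, " + rows.get(0).length + " fields"); // 2 rows, 2 fields
    }
}
```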

If you wish to import tests and testcases including the testplan, the syntax of the CSV file will of
course be more complex (a testcase is then defined on several lines and the tab character is
used as well as the semicolon delimiter).

To do so:
• from the menu, select File > Import from CSV
• a dialog box is displayed
• pick the Tests and Testcases (with testplan) option
• click on the Open button and select the file you wish to import
• the raw data area displays the content of the file
• click on Submit
4.12.2 From XML
The XML format used is the same as the one used to export tests/testcases in XML.

4.12.2.1 Tests and testcases

To import tests and testcases from XML:
• from the menu, select File > Import from XML
• a dialog box is displayed
• pick the Tests option
• click on the Open button and select the file you wish to import
• the raw data area displays the content of the file
• click on Submit

4.12.2.2 Requirements
To import requirements from XML:
• from the menu, select File > Import from XML
• a dialog box is displayed
• pick the Requirements option
• click on the Open button and select the file you wish to import
• the raw data area displays the content of the file
• click on Submit
4.12.2.3 Specifications
To import specifications from XML:
• from the menu, select File > Import from XML
• a dialog box is displayed
• pick the Specifications option
• click on the Open button and select the file you wish to import
• the raw data area displays the content of the file
• click on Submit
5. The expert’s corner
5.1 Test attributes
You can create and associate custom attributes to your tests. These attributes can be used later in
filters to automatically select tests matching some specific criteria.

There are two types of attributes: static and dynamic. Dynamic attributes are identified with the
icon. A dynamic attribute is an attribute that you can overwrite when creating a campaign session.

5.1.1 Associate some attributes to a test


To associate attributes to a test:
• in the tree, select a test
• on the right panel, select the Attributes tab
• un-toggle the select filter button to display the complete attributes list
• check the attributes that need to be associated to the current test
• give a value to each of these attributes. Depending on the type of the attribute, the value
field will be different:
o Boolean: a check box
o Integer: a text field accepting a combination of numbers, ranges of numbers, or
lists of numbers.
 you can use the character “:” to define a range of numbers (e.g. “10:15”
defines all numbers between 10 and 15; this is equivalent to
“10;11;12;13;14;15”)
 you can also use the character “;” as a delimiter between several numbers
 both can be combined (e.g. you can use strings like “-4;34:56;-2:-1;67:68”)
o String: a text field accepting a string or a list of strings.
 you can use the character “;” as a delimiter between several strings (e.g.
“foo bar;john doe;jane doe”)
• (opt.) re-toggle the select filter button to display only the selected attributes
• click on Submit
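The Integer syntax above can be made concrete with a small parser sketch that expands an attribute value into the explicit list of numbers it denotes (an illustration of the documented grammar, not XStudio code):

```java
import java.util.ArrayList;
import java.util.List;

public class IntegerAttribute {
    // Expands an integer attribute value such as "-4;34:36;-2:-1" into the
    // explicit list of numbers (";" separates items, ":" defines a range).
    public static List<Integer> expand(String value) {
        List<Integer> result = new ArrayList<>();
        for (String item : value.split(";")) {
            int sep = item.indexOf(':', 1); // start at 1 to skip a leading '-' sign
            if (sep < 0) {
                result.add(Integer.parseInt(item.trim()));
            } else {
                int from = Integer.parseInt(item.substring(0, sep).trim());
                int to = Integer.parseInt(item.substring(sep + 1).trim());
                for (int i = from; i <= to; i++) result.add(i);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(expand("10:15"));       // [10, 11, 12, 13, 14, 15]
        System.out.println(expand("-2:-1;67:68")); // [-2, -1, 67, 68]
    }
}
```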

5.1.2 Create and associate a new attribute to a test


To create a new (not already existing) attribute and associate it to a test:
• in the tree, select a test
• on the right panel, select the Attributes tab
• un-toggle the select filter button to display the complete attributes list
• click on the create attribute button
• a dialog box is displayed
• enter the name of the attribute
• choose the type for this attribute
• select the dynamic checkbox if you wish this attribute to be over-writeable at campaign
session creation time
• click on Submit
• immediately, the attribute appears in the tree
• check this attribute
• give a value to this attribute
• (opt.) re-toggle the select filter button to display only the selected attributes
• click on Submit

5.1.3 Edit an attribute


To rename an attribute or change its ”dynamic” property:
• in the tree, select a test
• on the right panel, select the Attributes tab
• un-toggle the select filter button to display the complete attributes list
• select the attribute you want to edit
• click on the edit attribute button
• a dialog box is displayed
• enter the new name of the attribute, change the type of the attribute or select/unselect the
dynamic property (note that if you change the type of the attribute, all the former
associations with tests will be deleted)
• click on Submit
• immediately, the modified attribute appears in the tree

5.1.4 Create a campaign based on attribute filter


When you have a lot of tests, it may be interesting to preselect all tests having certain attribute
values.

To create a campaign based on an attribute filter:

• in the tree, select a folder
• on the right panel, click on the create campaign icon
• a dialog box including two tabs is displayed
• fill the Details tab with the name of the campaign
• in the Content tab:
o click on the preset filter settings button
o a dialog box is displayed
 select the root folder and add an AND operator
 click on the add an expression button
 a dialog box is displayed
 enter the expression “new attribute” starts with “value for”
 click on submit
o click on submit
o click on the preset filter button
o filtering will be performed and the content tab will highlight and pre-select all the
tests matching the filter. You can of course unselect or select new tests from this
list
• click on submit
• immediately, the campaign appears in the tree

5.2 Dependencies between tests


It is possible to add dependencies between tests. One test can be the parent of one or several
tests. The reverse is also true.
Ultimately, XStudio will be able to use this information to:
• execute only child tests if the parents are all successful
• override the order of execution of the tests

To create some dependencies between tests:
• in the tree, select a test
• on the right panel, select the Dependencies tab
• select the Child of (or Father of) tab
• un-toggle the select filter button to display the complete test tree
• check all the tests that are children of (or fathers of) the current test
• (opt.) re-toggle the select filter button to display only the selected tests
• click on submit

In this example, Test1 will be child of Test2 and Test3.

5.3 Test execution history


One very interesting option is also to check the history of all the executions of one specific test. To
do so, it is extremely simple:
• in the tree, select a test
• on the right panel, select the Results tab
The panel shows a lot of useful information:
• the campaign sessions that were already executed and that included that test
• the start and stop dates of these campaign sessions
• the result of this test in each campaign session

Note: You can reach one particular campaign session by just clicking on it.

5.4 Duration of test execution
5.4.1 Estimated duration of a campaign
As we’ve seen before, a campaign is made of a list of tests. If a test has already been run several
times, XStudio can provide an estimated duration based on previous executions as well as the
probability of exactness.

This information is available in the Details tab of the campaign:


5.4.2 Average duration of a test
If a test has already been run several times, XStudio can provide the average duration based on
previous executions.

This information is available in the Details tab of the test:


6. Launchers
XStudio can execute any kind of tests (scripts, native code, interpreted code, xml etc.). This
means that your already existing test suites are usable as they are. The only condition is to develop
(in Java) a tiny component allowing XStudio to interact with the test. This piece of code is called
the Launcher.

You may have different types of test implementation that require completely different approaches to
execute them. To each type of test (i.e. tests developed in different languages) corresponds a
“category”. For any category of tests you have, you’ll need a specific Launcher.

Several launchers can live together. You just need to add all the necessary launchers (.jar files) in
the <Install_folder>/bin/launchers folder.

To get more information on how to develop your own launcher, read the “Developer’s guide”.

6.1 Default launchers


By default, XStudio is provided with several ready-to-use launchers:

Launcher Description
manual.jar For manual testing (step-by-step procedure)
simple_manual.jar For manual testing (all-in-one procedure)
autoit.jar For AutoIt test scripts
perl.jar For Perl test scripts
tcl.jar For TCL test scripts
testcomplete7.jar For AutomatedQA® TestComplete 7 tests
testpartner.jar For Compuware® TestPartner tests
visualstudio.jar For Microsoft® VisualStudio Team System (Test Edition) tests
beanshell.jar For BeanShell test scripts
squishqt.jar For Froglogic® Squish for Qt tests
squishweb.jar For Froglogic® Squish for the web tests
junit3.jar For JUnit v3 tests
junit4.jar For JUnit v4 tests
pyunit.jar For PyUnit tests
nunit.jar For NUnit tests (so covering any .NET tests: C#, J#, C++/CLI, Managed C++,
VisualBasic.NET)
testng.jar For TestNG tests
marathon.jar For Jalian® Marathon tests
exe.jar For executable tests
random.jar Simulation launcher (returns random results)
success.jar Simulation launcher (returns only success results)

Table 9 - XStudio’s Default Launchers
6.1.1 Manual launcher (manual.jar)
The manual testing launcher can be used as is to perform manual testing if your tests are currently
not automated and consist in just following some procedures. The launcher will use the testplan as
entered in the system to request the user to perform some manual tasks and to answer some
questions.

6.1.1.1 Configuration
The manual.xml file allows pre-configuring the launcher with some default values:

Parameter Description
General > set testcase as failed If set to true, when one step fails in the testcase,
as soon as one step fails the complete testcase is set as failed and the
remaining steps are skipped.
Default value is: false
Timing > delay between testcases (ms) Default value is: 0

These values can be changed while creating the campaign session from XStudio.

6.1.1.2 Control Bar


The manual launcher includes a control bar allowing you to control the execution of the tests. The
control bar includes the following buttons:

• Go back
back to previous test:
• Go back to previous testcase:
• Restart current testcase:
• Pause current testcase:
• Resume current testcase:
• Go forward to next testcase:
• Go forward to next test:

6.1.1.3 Timeout per test
While executing manual tests, the launcher provides instructions to the operator to execute actions
(and do verifications). The tests are configurable so that some timing restrictions can be added. For
instance, it is possible to give 10 minutes maximum for the operator to execute the
actions/verifications (i.e. you need to test that a message appears on the screen of the SUT in less
than 10 minutes; if the operator doesn't validate that he saw that message within the 10 minutes
then the test is automatically set as failed).

If you want to set some specific timeouts to a test you just need to associate to this test one or
several of the following attributes (with a value for the timeout):

• timeout test description (ms): maximum delay to validate the test description
• timeout test prerequisites (ms): maximum delay to validate the test prerequisites
• timeout testcases execution (ms): maximum delay to execute any actions or verifications
implied by the testcases in this test

When manually executed, each test will be run with its specific timeout values.

Note also that all these attributes are dynamic. This means that when creating the campaign
session, you can overwrite any of these default values (on each test if necessary) to customize one
campaign session execution:
6.1.1.4 Tutorial: Creating and executing manual tests
In this tutorial we will learn how to set up from scratch a manual test suite, execute it and analyze
the results.

6.1.1.4.1 Declare the agent and the SUT


If it does not already exist, create an agent with the exact name (or the IP address) of your PC. To
know the name of your PC, use the ipconfig /all command on Windows and the ifconfig
command on Linux. This name/IP address must be ping-able from the network.

• in the Agents tree, select the root folder
• on the right panel, click on the create folder button
• enter the name of the folder (i.e. “France Office”) and submit
• in the tree, select the newly added folder (i.e. “France Office”)
• on the right panel, click on the create agent button
• enter the hostname or the static IP address of your local host and submit

Then create a SUT (System Under Test):

• in the SUTs tree, select the root folder
• on the right panel, click on the create company button
• enter the name of the company (i.e. “Acme”) and submit
• in the tree, select the newly added company (i.e. “Acme”)
• on the right panel, click on the create folder button
• enter the name of the folder (i.e. “QA”) and submit
• in the tree, select the newly added folder (i.e. “QA”)
• on the right panel, click on the create sut button
• enter the name (i.e. “My Browser”) and the version (i.e. “1.0”) of the SUT and submit

6.1.1.4.2 Create a dedicated category for manual tests


We now need to create a category in the test tree that will gather all the manual tests.

• in the Tests tree, select the root folder
• on the right panel, click on the create category button
• a dialog box is displayed
• enter the name of the category (i.e. “Manual tests”)
• enter the description of the category. You can use the formatting tools (wiki-style) to format
the text. Later, in reports this text will appear correctly formatted.
• enter the launcher to be associated with that category: manual.jar
• click on submit

6.1.1.4.3 Create a test


To create a test:
• in the tree, select the newly added category (i.e. “Manual tests”)
• on the right panel, click on the create folder button
• enter the name of the folder (i.e. “My browser tests”) and submit
• in the tree, select the newly added folder (i.e. “My browser tests”)
• on the right panel, click on the create test button
• a dialog box including three tabs is displayed
• fill the Details tab with the name (i.e. “basic test”) and priority of the test (leave the
canonical path blank)
• fill the Testplan tab with the prerequisites and the general description of the test. You can
use the formatting tools (wiki-style) to format the text. Later, in reports this text will appear
correctly formatted. Then submit.
To create a testcase:
• in the tree, select the newly added test (i.e. “basic test”)
• on the right panel, click on the create testcase button
• a dialog box including two tabs is displayed
• fill the Details tab with the index (defining the order in which the testcases will be executed
within a test), name (i.e. “first test case”) and general description of the testcase. You can
use the formatting tools (wiki-style) to format the text. Later, in reports this text will appear
correctly formatted. Do not forget to check the Implemented checkbox. Non-implemented
tests are NOT executed when running a campaign. In case of manual tests, always check the
Implemented checkbox.
• select the Testplan tab and define all the steps and checks that will be needed in this
testcase:
o add a step:
 in the tree, select the root folder
 click on the create step button
 enter the description of the step and submit
o add parameters (opt.):
 in the tree, select the parameters node
 click on the create parameter button
 enter the description of the parameter and submit
 repeat the operation if you need to specify more parameters
o add checks (opt.):
 in the tree, select the checks node
 click on one of the create boolean operators buttons
 click on the new operator and click on the create check button
 enter the description of the check and submit
 repeat the operation if you need to specify more checks (you can mix as
many different boolean operators as you want)
o repeat the operation for every step and submit
• click on submit
• immediately, the testcase appears in the sub-tree on the left panel

You can create other testcases and other tests using the same procedure.

6.1.1.4.4 Run an individual test

• In the test tree, select the newly created test (i.e. “basic test”, not the testcase!) and
press the "start" button
• Check in the Agent tab of the pop-up window that your PC is selected by default
• Select "My Browser 1.0" in the SUT tree
• Select a pre-existing configuration in the Configuration tab
• If no configuration is available:
o click on the create configuration button
o a dialog box is displayed
o enter the name of the configuration
o fill in all the forms displayed and submit
• Press the Submit button

Some popup dialog boxes will ask the operator to execute some operations and/or verify some
assertions.
• When the test is finished, you can check the results in the campaign tree (a new test
campaign and campaign session have been automatically created in the campaign tree)
• In the content tab, you can select a test, then a testcase (in the dynamic sub-tree) and
check the complete log of the execution

6.1.1.4.5 Creating a test campaign


At this stage you know how to individually run a manual test. But how to deal with a complete
testsuite? XStudio provides a very efficient and simple way of managing manual test campaigns.
For this, two different elements must be distinguished: the test campaign and the campaign session:

• The test campaign defines just a list of tests to be executed
• A new campaign session will be created each time the campaign is performed. It
adds to the test campaign notion the target on which the tests will be performed and how.
The "how" is characterized by some dynamic configurable parameters.

To create a campaign:
• in the tree, select a folder (create one if necessary)
• on the right panel, click on the create campaign icon
• a dialog box including two tabs is displayed
• fill the Details tab with the name of the campaign
• in the Content tab, select all the tests you want to be part of this campaign
• click on submit
• immediately, the campaign appears in the tree

To create a campaign session:
• in the tree, select a campaign
• on the right panel, click on the create campaign session button
• a dialog box including seven tabs is displayed
• fill the Details tab with the name of the session
• (opt.) select the Test operator tab and pick the user who will run the session
• select the Agent tab and pick the agent on which the session will be run. If you wish to run
the tests on the local host, leave the default selection (that should already match the local
host)
• select the SUT tab and pick the SUT on which the session will be run
• select the Configuration tab and pick the configuration you wish for each category
involved in this campaign session. Once a campaign session is created, it is impossible (on
purpose) to change the associated configurations.
• (opt.) if needed, modify the dynamic attribute values in the Attributes tab
• (opt.) select the CC Emails tab and check some users who should receive a notification
when the campaign session is completed
• click on submit
• immediately, the campaign session appears in the tree in idle state

6.1.1.4.6 Run a campaign session


To run a campaign session:
• in the tree, select a campaign session
• on the right panel, click on the start button
• immediately, the Test campaign details screen appears and will display the results of
testing in real time
6.1.2 Simple manual launcher (simple_manual.jar)
The simple manual testing launcher differs from the manual testing launcher by the fact that it will
not prompt the user for each step and check, but instead will show the complete testcase procedure
and will just ask the user for a result. The setup/configuration is exactly the same.

Refer to the manual launcher section for the configuration and tutorial.

6.1.3 AutoIt Launcher (autoit.jar)

The AutoIt launcher allows interfacing with AutoIt (.au3) scripts.
It has been tested with AutoIt 3.

6.1.3.1 Configuration
The autoit.xml file allows pre-configuring the launcher with some default values:

Parameter Description
General > Test root path This must indicate where all the .au3 scripts are located.
This is a root path. Each test in XStudio has a canonical
path that will be appended to this path.
This path MUST not include an ending slash.
Default value is: C:/my_autoit_scripts

General > This must indicate the maximum time the system will wait
Asynchronous timeout (in seconds) for the test to complete.
Default value is: 600

AutoIt > AutoIt install path This must indicate where AutoIt is installed on the host.
Default value is: C:/tools/autoit-3

These values can be changed while creating the campaign session from XStudio.

6.1.3.2 Requirements
1) Each test in XStudio must have its dedicated .au3 script. The name of the script MUST be
equal to the name of the test.

2) The .au3 script must be able to parse the argument testcaseIndex passed during interpretation.
This allows the script to execute different routines depending on the testcase index.
The interpreter is executed by the launcher using this syntax:

AutoIt3.exe <testRootPath>/<testPath>/<testName>.au3 /debug /testcaseIndex=<testcaseIndex>
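In Java (the language launchers are written in), assembling that command line could look like the sketch below. The testRootPath and install path come from the launcher configuration; testPath and testName come from the test declared in XStudio. The exact way the real launcher joins the install path and AutoIt3.exe is an assumption:

```java
import java.util.List;

public class AutoItCommand {
    // Builds the interpreter command line documented above.
    public static List<String> build(String autoItInstallPath, String testRootPath,
                                     String testPath, String testName, int testcaseIndex) {
        String script = testRootPath + "/" + testPath + "/" + testName + ".au3";
        return List.of(autoItInstallPath + "/AutoIt3.exe", script,
                       "/debug", "/testcaseIndex=" + testcaseIndex);
    }

    public static void main(String[] args) {
        // With the tutorial values used later in this chapter
        System.out.println(String.join(" ",
            build("C:/tools/autoit-3", "C:/Program Files/AutoIt3",
                  "Examples", "calculator_test", 1)));
    }
}
```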

3) When the .au3 script has executed all its actions, it MUST create an empty test_completed.txt file.
Indeed, the executions of the AutoIt scripts are asynchronous. This mechanism allows the launcher
to know when the test is completed. A timeout of 10 minutes is predefined. If the .au3 script did not
create the test_completed.txt within the first 10 minutes, then the launcher considers the test has
crashed and skips it.

4) The .au3 script must generate a log.txt during its execution. This file MUST describe all the
actions performed by the test as well as the result of each action. This file will be parsed by the
launcher and all the information will be passed/stored automatically in the XStudio database. The
log.txt MUST respect a specific format: each line MUST include one of the strings "[Success]",
"[Failure]" or "[Log]", or the line will not be processed. Based on this information, the testcase will be
flagged as passed or failed.
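A sketch of how such a log could be turned into a testcase verdict (the precise rule the launcher applies is not spelled out here; treating any "[Failure]" line as failing the testcase is an assumption consistent with the format description):

```java
import java.util.List;

public class LogParser {
    // Derives a pass/fail verdict from log.txt lines: only lines containing
    // "[Success]", "[Failure]" or "[Log]" are taken into account.
    public static boolean testcasePassed(List<String> logLines) {
        boolean success = false;
        for (String line : logLines) {
            if (line.contains("[Failure]")) return false; // assumed: any failure fails the testcase
            if (line.contains("[Success]")) success = true;
        }
        return success;
    }

    public static void main(String[] args) {
        System.out.println(testcasePassed(List.of(
            "[Log] first message",
            "[Success] testcase succeeded"))); // true
    }
}
```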

6.1.3.3 Tutorial: Creating and executing AutoIt tests
In this tutorial, we will learn to run some AutoIt test scripts.

6.1.3.3.1 Prerequisites
Install AutoIt in the folder C:\Program Files\AutoIt3

We will modify one of the example scripts provided with AutoIt.

Copy the file C:\Program Files\AutoIt3\Examples\calculator.au3 and rename it as
calculator_test.au3.

Insert at the beginning of the file calculator_test.au3 the following lines of code:

#include <File.au3>
$RESULT_FILENAME = "Log.txt"
FileDelete($RESULT_FILENAME)
$FLAG_FILENAME = "test_completed.txt"
FileDelete($FLAG_FILENAME)

And insert at the end of the file:

; log a success
logFileAndConsoleWrite($RESULT_FILENAME, "[Log] first message")
logFileAndConsoleWrite($RESULT_FILENAME, "[Log] second message")
logFileAndConsoleWrite($RESULT_FILENAME, "[Success] testcase succeeded")
;logFileAndConsoleWrite($RESULT_FILENAME, "[failure] testcase failed")

; create the test_completed.txt file
FileOpen($FLAG_FILENAME, 1)

Func logFileAndConsoleWrite($logFile, $input)
ConsoleWrite($input & @CRLF)
_FileWriteLog($logFile, $input)
EndFunc

6.1.3.3.2 Create a dedicated category for AutoIt tests and create a test
• create a category AutoIt associated to the launcher autoit.jar
• under this category, create (somewhere in the tree) a test with name calculator_test and
with a canonical path set to /Examples.

6.1.3.3.3 Creating a test campaign
• create a campaign including only the test calculator_test
• create a campaign session specifying in the configuration:
  o Test root path: C:/Program Files/AutoIt3
  o AutoIt install path: C:/Program Files/AutoIt3

6.1.3.3.4 Run a campaign session
Run the campaign session.
6.1.4 Perl Launcher (perl.jar)
The Perl launcher allows interfacing with Perl (.pl) scripts.
It has been tested with ActivePerl 5.8.9.

6.1.4.1 Configuration
The perl.xml file allows pre-configuring the launcher with some default values:

Parameter Description
General > Test root path: This must indicate where all the .pl scripts are located. This is a
root path. Each test in XStudio has a canonical path that will be appended to this path.
This path MUST not include an ending slash.
Default value is: C:/my_perl_scripts
General > Asynchronous timeout (in seconds): This must indicate the maximum time the system will
wait for the test to complete.
Default value is: 600
Perl > Perl install path: This must indicate where Perl is installed on the host.
Default value is: C:/Perl/bin

These values can be changed while creating the campaign session from XStudio.

6.1.4.2 Requirements
1) Each test in XStudio must have its dedicated .pl script. The name of the script MUST be equal
to the name of the test.

2) The .pl script must be able to parse the argument testcaseIndex passed during interpretation.
This allows the script to execute different routines depending on the testcase index.
The interpreter is executed by the launcher using this syntax:

perl.exe <testRootPath>/<testPath>/<testName>.pl /debug /testcaseIndex=<testcaseIndex>

3) When the .pl script has executed all its actions, it MUST create an empty test_completed.txt
file. Indeed, the executions of the Perl scripts are asynchronous. This mechanism allows the
launcher to know when the test is completed. A timeout of 10 minutes is predefined. If the .pl
script did not create the test_completed.txt file within the first 10 minutes, then the launcher
considers the test has crashed and skips it.

4) The .pl script must generate a log.txt file during its execution. This file MUST describe all
the actions performed by the test as well as the result of each action. The file will be parsed
by the launcher and all the information will be passed/stored automatically in the XStudio
database. The log.txt MUST respect a specific format: each line MUST include the string
"[Success]", "[Failure]" or "[Log]", otherwise the line will not be processed. Based on this
information, the testcase will be flagged as passed or failed.
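For reference, the classification the launcher applies to log.txt can be sketched as follows.
This is a hypothetical helper, not the actual launcher code; in particular, the precedence rule
(any [Failure] line outweighs [Success] lines) is an assumption here:

```python
def parse_log(lines):
    """Derive a verdict from log.txt lines, per the format described above."""
    verdict = None
    for line in lines:
        if "[Failure]" in line:
            verdict = "failed"    # a failure flags the testcase as failed
        elif "[Success]" in line and verdict is None:
            verdict = "passed"
        elif "[Log]" in line:
            pass                  # informational only
        # lines without any marker are not processed
    return verdict
```

For example, parse_log(["[Log] step 1", "[Success] done"]) returns "passed".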
6.1.5 TCL Launcher (tcl.jar)
The TCL launcher allows interfacing with TCL (.tcl) scripts.
It has been tested with TCL 8.5.

6.1.5.1 Configuration
The tcl.xml file allows pre-configuring the launcher with some default values:

Parameter Description
General > Test root path: This must indicate where all the .tcl scripts are located. This is a
root path. Each test in XStudio has a canonical path that will be appended to this path.
This path MUST not include an ending slash.
Default value is: C:/my_tcl_scripts
General > Asynchronous timeout (in seconds): This must indicate the maximum time the system will
wait for the test to complete.
Default value is: 600
Tcl > Tcl install path: This must indicate where Tcl is installed on the host.
Default value is: C:/Tcl/bin

These values can be changed while creating the campaign session from XStudio.

6.1.5.2 Requirements
1) Each test in XStudio must have its dedicated .tcl script. The name of the script MUST be equal
to the name of the test.

2) The .tcl script must be able to parse the argument testcaseIndex passed during interpretation.
This allows the script to execute different routines depending on the testcase index.
The interpreter is executed by the launcher using this syntax:

tclsh85.exe <testRootPath>/<testPath>/<testName>.tcl /debug /testcaseIndex=<testcaseIndex>

3) When the .tcl script has executed all its actions, it MUST create an empty test_completed.txt
file. Indeed, the executions of the TCL scripts are asynchronous. This mechanism allows the
launcher to know when the test is completed. A timeout of 10 minutes is predefined. If the .tcl
script did not create the test_completed.txt file within the first 10 minutes, then the launcher
considers the test has crashed and skips it.

4) The .tcl script must generate a log.txt file during its execution. This file MUST describe all
the actions performed by the test as well as the result of each action. The file will be parsed
by the launcher and all the information will be passed/stored automatically in the XStudio
database. The log.txt MUST respect a specific format: each line MUST include the string
"[Success]", "[Failure]" or "[Log]", otherwise the line will not be processed. Based on this
information, the testcase will be flagged as passed or failed.
6.1.6 AutomatedQA® TestComplete v7 Launcher (testcomplete7.jar)
The AutomatedQA® TestComplete v7 launcher allows interfacing with TestComplete tests.

6.1.6.1 Configuration
The testcomplete7.xml file allows pre-configuring the launcher with some default values:

Parameter Description
TestComplete7 > TestComplete install path: This must indicate where all the TestComplete scripts
are located. This is a root path. Each test in XStudio has a canonical path that will be appended
to this path.
This path MUST not include an ending slash.
Default value is: C:/Automation/Runner
TestComplete7 > TestComplete log directory path:
Default value is: C:/Automation/Tests/TestProject/Log

These values can be changed while creating the campaign session from XStudio.

6.1.6.2 Requirements
The tests are executed by the launcher using this syntax:

runConsoleGUI.cmd /u:<testPath> /routine:<testName> /tcIndex:<testcaseIndex> /sutName:<SutName>

The test will be marked as passed or failed depending on the return code of the execution.
The TestComplete xml log is also attached to the testcase execution in XStudio.
6.1.7 Compuware® TestPartner Launcher (testpartner.jar)
The Compuware® TestPartner launcher allows interfacing with TestPartner scripts.

6.1.7.1 Configuration
The testpartner.xml file allows pre-configuring the launcher with some default values:

Parameter Description
TestPartner > Test root path: This must indicate where all the test scripts are located. This is
a root path. Each test in XStudio has a canonical path that will be appended to this path.
This path MUST include an ending slash.
Default value is: C:/tests_repository/
TestPartner > projectName: Default value is: myProject
TestPartner > username: Default value is: JohnDoe
TestPartner > password: Default value is: myPassword
TestPartner > dsn: Default value is: dsn
TestPartner > detailedLogPath: Default value is: C:/logs/log.xml

These values can be changed while creating the campaign session from XStudio.

6.1.7.2 Requirements
The tests are executed by the launcher using this syntax:

C:\Program Files\TestPartner\tc.exe -d <dsn> -u <username> -p <password> -r <projectName> -s
<testRootPath>/<testName>

The test will be marked as passed or failed depending on the log file generated by TestPartner.
The xml file is parsed by the launcher. The xml log is also attached to the testcase execution in
XStudio.
6.1.8 Microsoft® VisualStudio Launcher (visualstudio.jar)
The Microsoft® VisualStudio launcher allows interfacing with VisualStudio scripts.

6.1.8.1 Configuration
The visualstudio.xml file allows pre-configuring the launcher with some default values:

Parameter Description
General > Test root path: This must indicate where the tests are located. This is a root path.
Each test in XStudio has a canonical path that will be appended to this path.
This path MUST not include an ending slash.
Default value is: C:/my_test_repository
General > Library name: Default value is: my_tests.dll
VisualStudio > Runner path: This must indicate where the Visual Studio test executable is located.
Default value is: C:/Program Files/Microsoft Visual Studio/Common7/IDE/MSTest.exe
VisualStudio > Results file path: This must indicate where the results text file is located.
Default value is: C:/my_test_repository/testResults.trx

These values can be changed while creating the campaign session from XStudio.

6.1.8.2 Requirements
The tests are executed by the launcher using this syntax:

<runnerPath> /testcontainer:<libraryName> /test:<testPath>.<testName>
/resultsfile:<resultsFilePath>

And this is executed from the working directory <testRootPath>

The test will be marked as passed or failed depending on the log file generated by VisualStudio.
The xml file is parsed by the launcher. The xml log and the execution trace of the command are
also attached to the testcase execution in XStudio.
6.1.9 Beanshell Launcher (beanshell.jar)
The Beanshell launcher allows interfacing with Beanshell test scripts.
It is provided with an additional module that first downloads and installs the product (but you
can easily remove this part and recompile the launcher).
It has been tested with Beanshell 2.0b4.

6.1.9.1 Configuration
The beanshell.xml file allows pre-configuring the launcher with some default values:

Parameter Description
General > Test root path: This must indicate where all the Beanshell scripts are located. This is
a root path. Each test in XStudio has a canonical path that will be appended to this path.
This path MUST not include an ending slash.
Default value is: C:/beanshell/client
Beanshell > Beanshell install path: This must indicate where the Beanshell jar file is located.
Default value is: C:/tools/beanshell-2.0b4/bsh-2.0b4.jar
Installers > Folder containing the installers: Default value is: C:/products
Installers > Download server URL: Default value is: //NAS/Builds/trunk
Installers > Installer template name: Default value is: product-setup-%version%.exe

These values can be changed while creating the campaign session from XStudio.

6.1.9.2 Requirements
The launcher starts by downloading a ssetup etup program and install the product. The product
<installerTemplateN
TemplateName
ame> (using the SUTVersion) is downloaded from <downloadServerURL> and
copied in <folderContainingtheInstallers>
folderContainingtheInstallers>
folderContainingtheInstallers>. Then the product is installed by running the executable
with the option /S (This is supposed that the installer is a NSIS installer).

1) Each test in XStudio must have its dedicated .bsh script. The name of the script MUST be equal
to the name of the test.

2) The .bsh script must be able to parse the testcaseIndex argument passed during interpretation.
This allows the script to execute different routines depending on the testcase index.
The interpreter is executed by the launcher using this syntax:

java -cp <beanshellInstallPath> bsh.Interpreter <testRootPath>/<testPath>/<testName>.bsh
<testcaseIndex> <sutVersion>

And this is executed from the working directory <testRootPath>/<testPath>

3) When the .bsh script has executed all its actions, it MUST create an empty test_completed.txt
file. Indeed, the executions of the Beanshell scripts are asynchronous. This mechanism allows the
launcher to know when the test is completed.

4) The .bsh script must generate a log.txt file during its execution. This file MUST describe all
the actions performed by the test as well as the result of each action. The file will be parsed
by the launcher and all the information will be passed/stored automatically in the XStudio
database. The log.txt MUST respect a specific format: each line MUST include the string
"[Success]", "[Failure]" or "[Log]", otherwise the line will not be processed. Based on this
information, the testcase will be flagged as passed or failed.
6.1.10 Froglogic® Squish Launcher (squish.jar)
The Froglogic® Squish launcher allows interfacing with Squish tests.
It has been tested with Squish-3.4.4.

6.1.10.1 Configuration
The squish.xml file allows pre-configuring the launcher with some default values:

Parameter Description
General > Test root path: This must indicate where all the Squish tests are located. This is a
root path. Each test in XStudio has a canonical path that will be appended to this path.
This path MUST not include an ending slash.
Default value is: C:/
Squish > Squish install path: This must indicate where squish is installed.
Default value is: C:/tools/squish-web_win32
Squish > Squish libraries path: This must indicate where the scripts commonly used by all squish
tests are located. This parameter is optional.
Default value is: C:/src/squish/lib
Advanced > Snooze factor: This must indicate the factor applied to all snooze calls (hence
affecting the "time to replay"). The default value of 30% means that a snooze of 1 second in the
script will actually last 300 ms.
Default value is: 30 (%)
Advanced > Force kill squishrunner/_squishrunner processes after each testcase: If set to True,
the squish runner will be killed (if still alive) by the launcher after the execution of each
testcase. This should not be used in normal conditions.
Default value is: False
Advanced > Force kill squishserver/_squishserver processes after each test: If set to True, the
squish server will be killed (if still alive) by the launcher after the execution of each test
(including all testcases). This should not be used in normal conditions.
Default value is: False
Advanced > Force kill webhook/_webhook processes after each testcase (Squish for web): If set to
True, the squish webhook processes will be killed (if still alive) by the launcher after the
execution of each testcase. This should not be used in normal conditions.
Default value is: False
Advanced > Force kill iexplore process after each testcase (Squish for web): If set to True, all
the iexplore processes will be killed (if still alive) by the launcher after the execution of
each testcase. This should not be used in normal conditions.
Default value is: False
Advanced > Force kill ieuser process after each testcase (Squish for web): If set to True, all
the ieuser processes will be killed (if still alive) by the launcher after the execution of each
testcase. This should not be used in normal conditions.
Default value is: False
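The snooze factor arithmetic can be illustrated with a small helper (hypothetical, for
illustration only; the launcher performs this scaling internally):

```python
def effective_snooze(snooze_seconds, snooze_factor_percent=30):
    """Duration actually slept when the launcher applies the snooze factor.

    A factor of 30% turns snooze(1) in the script into a 0.3 s pause,
    shortening the overall time to replay.
    """
    return snooze_seconds * snooze_factor_percent / 100.0

# with the default factor of 30%, a snooze(1) in the script lasts 0.3 s
```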

These values can be changed while creating the campaign session from XStudio.
6.1.10.2 Requirements
1) Each test in XStudio must have its dedicated squish script. The name of the script MUST be
equal to tst_<testName>.

The tests are executed by the launcher using the following syntaxes:

First of all, the squish server is run:

<squishInstallPath>/bin/squishserver.exe

Then, the tests are executed:

<squishInstallPath>/bin/squishrunner.exe --testsuite <testRootPath>/<testPath> --testcase
tst_<testName> --snoozeFactor <snoozeFactor> --reportgen xml,_tmp.xml

The launcher sets an environment variable SQUISH_SCRIPT_DIR to <squishLibraryPath> so that all
libraries necessary for the main script to be executed are in the path.

6.1.10.3 Tutorial: Creating and executing Squish tests
In this tutorial, we will learn to run some Squish test scripts. We will use Squish for Java but
this can be applied to any other squish edition (Java, Qt, Web, etc.).

6.1.10.3.1 Prerequisites
Install Squish for Java in the folder C:\tools\squish-3.4.4-java-win32
Create a file utils.js in the folder C:\src\squish\lib with the following content:

function globalLog() {
    test.log("Global Log example", "Traces");
}

Using the Squish IDE, create a new test suite (using the JavaScript language, with no AUT
application) called suite_fake in C:\src\squish\testsuites.

In this directory, edit shared/scripts/objects.js and add the following content:

function localLog() {
    test.log("Log example", "Some useful information");
    test.warning("Warning example", "Some warnings to highlight");
}

Add 2 tests to the testsuite:

tst_failure which contains the following code:

function main() {
    source(findFile("scripts", "utils.js"));
    source(findFile("scripts", "objects.js"));

    globalLog();
    localLog();

    test.compare(1, 1);
    test.compare(2, 2);
    test.compare(3, 4);
}

tst_success which contains the following code:

function main() {
    source(findFile("scripts", "utils.js"));
    source(findFile("scripts", "objects.js"));

    globalLog();
    localLog();

    test.compare(1, 1);
    test.compare(2, 2);
    test.compare(3, 3);
}

After these operations, the suite.conf file should contain:

AUT =
CLASS =
CLASSPATH =
CWD =
ENVVARS = envvars
HOOK_SUB_PROCESSES = 1
LANGUAGE = JavaScript
NAMINGSCHEME = MULTIPROP
TEST_CASES = tst_success tst_failure
USE_WHITELIST = 1
WRAPPERS = Java

You now have some very basic squish tests: one always returning a success and the other one a
failure.

6.1.10.3.2 Create a dedicated category for Squish tests and create two tests
• create a category Squish associated with the launcher squish.jar
• under this category, create (somewhere in the tree) two tests with names success and failure
with a canonical path set to suite_fake.

6.1.10.3.3 Creating a test campaign


• create a campaign including the tests success and failure
• create a campaign session with the default configuration settings.

6.1.10.3.4 Run a campaign session
Run the campaign session.
6.1.11 JUnit v3 Launcher (junit3.jar)
The JUnit v3 launcher allows interfacing with JUnit v3 tests.
It has been tested with JUnit 3.0.

Refer to JUnit v4 for the details as these 2 launchers are very similar.

6.1.12 JUnit v4 Launcher (junit4.jar)
The JUnit v4 launcher allows interfacing with JUnit v4 tests.
It has been tested with JUnit 4.7.

6.1.12.1 Configuration
The junit4.xml file allows pre-configuring the launcher with some default values:

Parameter Description
General > Test root path: This must indicate where all the JUnit tests are located. This is a
root path. Each test in XStudio has a canonical path that will be appended to this path.
This path MUST not include an ending slash.
Default value is: C:/build/classes
General > Additional classpath: This must indicate potential additional jar classpaths necessary
to test the SUT. This can contain several paths separated by ';'.
Default value is: (empty)
JUnit > Java install path: This must indicate the path to the Java install.
Default value is: C:/Program Files/Java/jdk1.6.0_06
JUnit > JUnit jar path: This must indicate the path to the JUnit library.
Default value is: C:/junit-4.7/junit-4.7.jar

These values can be changed while creating the campaign session from XStudio.

6.1.12.2 Requirements
The tests are executed by the launcher using this syntax:

<javaInstallPath>/bin/java.exe -classpath <junitJarPath>;<additionalClassPath>;<testRootPath>
org.junit.runner.JUnitCore <testPath>.<testName>

And this is executed from the working directory <testRootPath>

The test will be marked as passed or failed depending on the log file generated by JUnit. The text
file is parsed by the launcher. The log and the execution trace of the command are also attached to
the testcase execution in XStudio.

6.1.12.3 Tutorial: Creating and executing JUnit tests
TODO
6.1.13 PyUnit Launcher (pyunit.jar)
The PyUnit launcher allows interfacing with PyUnit tests.
It has been tested with PyUnit 1.4.1

6.1.13.1 Configuration
The pyunit.xml file allows pre-configuring the launcher with some default values:

Parameter Description
Generall > Test root path This must indicate where are located all the PyUnit tests.
This is a root path. Each test in XStudio has a canonical
path that will be appended to this
his path.
This path MUST not include an ending slash
slash.

Default value is:


C:/Program
Program Files
Files/PyUnit/pyunit
/PyUnit/pyunit-1.4.1/examples
1.4.1/examples
PyUnit > Python install path This must indicate the path to Python install

Default value is:


C:/Python26
Python26
PyUnit > PyUnit install path This must indicate the path to PyUnit install

Default value is:


C:/Program
Program Files/PyUnit/pyunit-1.4.1
Files/PyUnit/ 1.4.1

These values can be changed while creating the campaign session from XStudio.

6.1.13.2 Requirements
The tests are executed by the launcher using this syntax:

<pythonInstallPath>/python.exe <pyunitInstallPath>/unittest.py <testPath>.<testName>

And this is executed from the working directory <testRootPath>

The test will be marked as passed or failed depending on the log file generated by PyUnit. The text
file is parsed by the launcher. The log and the execution trace of the command are also attached to
the testcase execution in XStudio.

6.1.13.3 Tutorial: Creating and executing PyUnit tests
TODO
6.1.14 NUnit Launcher (nunit.jar)
The NUnit launcher allows interfacing with .NET tests (C#, J#, C++/CLI, Managed C++,
VisualBasic.NET).
It has been tested with NUnit 2.5.2.

6.1.14.1 Configuration
The nunit.xml file allows pre-configuring the launcher with some default values:

Parameter Description
General > Test root path: This must indicate where all the NUnit tests are located. This is a
root path. Each test in XStudio has a canonical path that will be appended to this path.
This path MUST not include an ending slash.
Default value is: C:/Program Files/NUnit 2.5.2/bin/net-2.0/tests
General > Assembly dll: Default value is: nunit.util.tests.dll
NUnit > .NET version: Default value is: 2.0
NUnit > NUnit console path: Default value is: C:/Program Files/NUnit 2.5.2

These values can be changed while creating the campaign session from XStudio.

6.1.14.2 Requirements
The tests are executed by the launcher using this syntax:

<nunitConsolePath>/bin/net-<netVersion>/nunit-console.exe /run:<testPath>.<testName>
<testRootPath>/<assemblyName>

And this is executed from the working directory <testRootPath>

The test will be marked as passed or failed depending on the log file generated by NUnit. The text
file is parsed by the launcher. The log and the execution trace of the command are also attached to
the testcase execution in XStudio.

6.1.14.3 Tutorial: Creating and executing NUnit tests
TODO
6.1.15 TestNG Launcher (testng.jar)
The TestNG launcher allows interfacing with TestNG tests.
It has been tested with TestNG 5.10.

6.1.15.1 Configuration
The testng.xml file allows pre-configuring the launcher with some default values:

Parameter Description
General > Test root path: This must indicate where all the TestNG tests are located. This is a
root path. Each test in XStudio has a canonical path that will be appended to this path.
This path MUST not include an ending slash.
Default value is: C:/build/classes
General > Additional classpath: This must indicate potential additional jar classpaths necessary
to test the SUT. This can contain several paths separated by ';'.
Default value is: (empty)
TestNG > Java install path: This must indicate the path to the Java install.
Default value is: C:/Program Files/Java/jdk1.6.0_06
TestNG > TestNG jar path: This must indicate the path to the TestNG library.
Default value is: C:/testng-5.10/testng-5.10-jdk15.jar

These values can be changed while creating the campaign session from XStudio.

6.1.15.2 Requirements
The tests are executed by the launcher using this syntax:

<javaInstallPath>/bin/java.exe -classpath <testNGJarPath>;<additionalClassPath>;<testRootPath>
org.testng.TestNG -testclass <testPath>.<testName>

And this is executed from the working directory <testRootPath>

The test will be marked as passed or failed depending on the xml log file generated by TestNG.
The xml file is parsed by the launcher. The logs, HTML reports and the execution trace of the
command are also attached to the testcase execution in XStudio.

6.1.15.3 Tutorial: Creating and executing TestNG tests
TODO
6.1.16 Jalian® Marathon Launcher (marathon.jar)
The Jalian® Marathon launcher allows interfacing with Marathon tests.
It has been tested with Marathon 1.2.1.1.

6.1.16.1 Configuration
The marathon.xml file allows pre-configuring the launcher with some default values:

Parameter Description
General > Test root path: This must indicate where all the Marathon tests are located. This is a
root path. Each test in XStudio has a canonical path that will be appended to this path.
This path MUST not include an ending slash.
Default value is: C:/marathon-1.2.1.1/examples
Marathon > Marathon home: This must indicate the install path of Marathon.
Default value is: C:/marathon-1.2.1.1
Marathon > Marathon classpath: This must indicate the classpath used by Marathon to execute the
tests.
Default value is:
C:/marathon-1.2.1.1/marathon.jar;
C:/marathon-1.2.1.1/Support/vldocking_2.1.1.jar;
C:/marathon-1.2.1.1/Support/rmi-lite.jar;
C:/marathon-1.2.1.1/Support/forms-1.0.7/forms-1.0.7.jar;
C:/marathon-1.2.1.1/Support/jaccess-1.3/jaccess.jar;
C:/marathon-1.2.1.1/Support/junit3.8.2/junit.jar;
C:/marathon-1.2.1.1/Support/jython-2.2/jython.jar;
C:/marathon-1.2.1.1/Support/looks-2.0.4/looks-2.0.4.jar;
C:/marathon-1.2.1.1/Support/jedit-textArea.jar;
C:/marathon-1.2.1.1/Support/jline-0.9.93.jar;
C:/marathon-1.2.1.1/Support/vldocking_2.1.5C.jar
Marathon > Java install path: This must indicate the path to the Java install.
Default value is: C:/Program Files/Java/jdk1.6.0_06

These values can be changed while creating the campaign session from XStudio.

6.1.16.2 Requirements
The tests are executed by the launcher using this syntax:

<javaInstallPath>/bin/java.exe -classpath <marathonClassPath> -Dmarathon.home="<marathonHome>"
-Dpython.home="<marathonHome>" net.sourceforge.marathon.Main -batch -xml
<testRootPath>/marathon_report.xml <testRootPath>

And this is executed from the working directory <testRootPath>

The test will be marked as passed or failed depending on the xml log file generated by Marathon.
The xml file is parsed by the launcher. The logs and the execution trace of the command are also
attached to the testcase execution in XStudio.

6.1.16.3 Tutorial: Creating and executing Marathon tests
TODO
6.1.17 Executable Launcher (exe.jar)
The Executable launcher allows interfacing with any executable.

6.1.17.1 Configuration
The exe.xml file allows pre-configuring the launcher with some default values:

Parameter Description
General > Test root path: This must indicate where all the .exe files are located. This is a root
path. Each test in XStudio has a canonical path that will be appended to this path.
This path MUST not include an ending slash.
Default value is: C:/my_executables
General > Synchronous executable: This must indicate whether the executable is run in synchronous
mode (see the Requirements below).
Default value is: true
General > Asynchronous timeout (in seconds): This must indicate the maximum time the system will
wait for the test to complete.
Default value is: 600

These values can be changed while creating the campaign session from XStudio.

6.1.17.2 Requirements
1) Each test in XStudio must have its dedicated .exe file. The name of the executable MUST be
equal to the name of the test.

2) The .exe file must be able to parse the argument testcaseIndex passed during execution. This
allows executing different routines depending on the testcase index.

The test is executed by the launcher using this syntax:


<testRootPath>/<testPath>/<testName>.exe /debug /testcaseIndex=<testcaseIndex>

3) In asynchronous mode, when the .exe has executed all its actions, it MUST create an empty
test_completed.txt file. This mechanism allows the launcher to know when the test is completed.
A timeout is predefined for this. If the executable did not create the test_completed.txt file
within the timeout value, then the launcher considers the test has crashed and skips it.

4) In synchronous mode, the return code is used to determine if the test passed or failed: a
return code equal to 0 will be understood as a success; any other value will be interpreted as a
failure.

5) In asynchronous mode, the executable must generate a log.txt file during its execution. This
file MUST describe all the actions performed by the test as well as the result of each action.
The file will be parsed by the launcher and all the information will be passed/stored
automatically in the XStudio database. The log.txt MUST respect a specific format: each line MUST
include the string "[Success]", "[Failure]" or "[Log]", otherwise the line will not be processed.
Based on this information, the testcase will be flagged as passed or failed.
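In synchronous mode, the pass/fail decision therefore reduces to the process return code. A
minimal sketch (hypothetical helper names, not the actual launcher code):

```python
import subprocess

def run_synchronous(command):
    """Run a test executable synchronously; a return code of 0 is a success,
    any other value a failure (exe launcher, synchronous mode)."""
    completed = subprocess.run(command)
    return "passed" if completed.returncode == 0 else "failed"

# the launcher would build the command roughly as:
#   [f"{testRootPath}/{testPath}/{testName}.exe",
#    "/debug", f"/testcaseIndex={testcaseIndex}"]
```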
6.1.18 Simulation Random Launcher (random.jar)
The random launcher is for demo purposes only. It basically simulates a time-consuming operation
and generates random results.

6.1.19 Simulation Success Launcher (success.jar)


The success launcher is for demo purposes only. It basically simulates a time-consuming operation
and generates only success results.

6.2 Custom Launchers


Let’s take a basic example: you have two different types of tests:
• native executables (you currently just have to run the executables; logs are printed out on
the console, and the exit code gives the result of the test),
• java class files (these are java class files with a main entry point; all classes inherit from
the TestClass class. You usually run them by calling the java interpreter on each test using
.bat or .sh scripts).

With very minimal effort, you can make all these tests (maybe several thousands of tests)
manageable by XStudio. To do this, you just have to develop two launchers (about 10 lines of code each).

[Figure: native executable tests (Test 1..4) are driven by an Executable Tests Launcher, and
java class files (Test 1..4) by a Class Launcher; both launchers connect to the XStudio
database through the Launcher Interface.]

The executable tests launcher implements the launcher interface's run() method by spawning the
test executable and mapping its exit code to a result:

    public CReturnStatus run(int testId,
                             String testPath,
                             String testName,
                             int testcaseIndex) throws Exception {
        String[] command = new String[] {testPath + testName};
        Runtime run = Runtime.getRuntime();
        Process p = run.exec(command);
        int exitValue = p.waitFor();
        int result = ((exitValue == 0) ? RESULT_SUCCESS : RESULT_FAILURE);
        return new CReturnStatus(result, null);
    }

The class launcher's run() method loads each test class by name, instantiates it, calls its
main() entry point and maps the returned value to a result:

    public CReturnStatus run(int testId,
                             String testPath,
                             String testName,
                             int testcaseIndex) throws Exception {
        Class testClass = Class.forName(testPath + testName);
        Object instance = testClass.newInstance();
        int javaResult = ((TestClass)instance).main();
        int result = ((javaResult == 0) ? RESULT_SUCCESS : RESULT_FAILURE);
        return new CReturnStatus(result, null);
    }

Figure 12 – Custom Launchers


These launchers will have the responsibility to:
• translate the abstract execution orders from XStudio to real operations (instantiate the test
and execute it),
• return results to XStudio.

Of course, the launcher interface/API is wider than just a run() method, but the run() method is the
principal one.

I’m currently working on the Developer’s Guide that will explain in detail how to develop your own
launcher.
7. Memory Profiling
The application is regularly profiled to check for any possible memory leak. This maintains good
robustness of the application. The latest measurements showed the following results:

7.1 Browsing trees


These measurements were taken while stress testing XStudio by selecting any item in any tree
hundreds of times:

XStudio needs between 20 and 150 Mbytes to run smoothly (less than Firefox!).

7.2 Running manual tests on XAgent


These measurements were taken while running some manual test campaigns several times on a
specific agent. Memory profiling was done on XAgent.

XAgent needs between 5 and 10 Mbytes to run smoothly.
Figures
Figure 1 - User Tree ............................................ 9
Figure 2 - Calendar Monthly View ................................ 12
Figure 3 - Calendar Half-Yearly View ............................ 13
Figure 4 - SUT Tree ............................................. 14
Figure 5 - Agent Tree ........................................... 16
Figure 6 - Requirement Tree ..................................... 18
Figure 7 - Specification Tree ................................... 21
Figure 8 - Project Tree ......................................... 23
Figure 9 - Test Tree ............................................ 29
Figure 10 - Campaign Tree ....................................... 32
Figure 11 - Defect Tree ......................................... 39
Figure 12 – Custom Launchers .................................... 107

Tables
Table 1 - Entities .............................................. 7
Table 2 - Tree Buttons .......................................... 7
Table 3 - Overlay Icons ......................................... 8
Table 4 - Requirements Status ................................... 20
Table 5 - Specification Status .................................. 22
Table 6 - Sprint Status ......................................... 25
Table 7 - Defect Status ......................................... 40
Table 8 - Unique Identifier Templates ........................... 63
Table 9 – XStudio’s Default Launchers ........................... 77

Acronyms
Acronym Meaning
SUT System Under Test

A more complete glossary is available online at http://www.xqual.com
