SMS Module 5: Verification and Validation Notes


Module – 5

Measures of performance and their estimation,


Output analysis for terminating simulations (continued),
Output analysis for steady-state simulations.

Verification, Calibration and Validation; Optimization:
Model building, verification and validation,
Verification of simulation models,
Calibration and validation of models,
Optimization via Simulation.
Introduction

One of the most important and difficult tasks facing a model developer is the verification
and validation of the simulation model.

It is the job of the model developer to work closely with the end users throughout
the period of development and validation to reduce their skepticism
and to increase the model's credibility.

The goal of the validation process is twofold:

1: To produce a model that represents true system behavior closely enough for the model to be
used as a substitute for the actual system for the purpose of experimenting with the system.

2: To increase to an acceptable level the credibility of the model, so that the model will be used by
managers and other decision makers.

The verification and validation process consists of the following components:

1: Verification is concerned with building the model right. It is utilized in the comparison of the
conceptual model to the computer representation that implements that conception. It asks the
questions: Is the model implemented correctly in the computer? Are the input parameters and
logical structure of the model correctly represented?

2: Validation is concerned with building the right model. It is utilized to determine that a model
is an accurate representation of the real system. It is usually achieved through the calibration of
the model.
Model Building, Verification, and Validation
The first step in model building consists of observing the real system and the interactions
among its various components and collecting data on its behavior. Operators, technicians, repair
and maintenance personnel, engineers, supervisors, and managers understand certain aspects of the
system which may be unfamiliar to others. As model development proceeds, new questions may
arise, and the model developers will return to this step of learning true system structure and
behavior.
The second step in model building is the construction of a conceptual model: a collection of
assumptions on the components and the structure of the system, plus hypotheses on the values of
model input parameters, illustrated by the following figure.
The third step is the translation of the operational model into a computer-recognizable
form: the computerized model.
Verification of Simulation Models
The purpose of model verification is to assure that the conceptual model is reflected
accurately in the computerized representation.
The conceptual model quite often involves some degree of abstraction about system
operations, or some amount of simplification of actual operations.

Many common-sense suggestions can be given for use in the verification process:

1. Have the computerized representation checked by someone other than its developer.
2. Make a flow diagram that includes each logically possible action a system can take
when an event occurs, and follow the model logic for each action for each event type.
3. Closely examine the model output for reasonableness under a variety of settings of the
input parameters.
4. Have the computerized representation print the input parameters at the end of the
simulation, to be sure that these parameter values have not been changed inadvertently
(a sketch follows this list).
5. Make the computerized representation as self-documenting as possible.
6. If the computerized representation is animated, verify that what is seen in the animation
imitates the actual system.
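To make suggestion 4 concrete, here is a minimal Python sketch; the parameter names and
the run_simulation stub are hypothetical, not from the notes. It snapshots the input
parameters before the run and echoes them afterwards, flagging any that changed.

```python
# Minimal verification aid (hypothetical parameter names): echo all input
# parameters at the end of a run so inadvertent changes are easy to spot.

def run_simulation(params):
    # ... model logic would execute here ...
    return {"avg_delay": 4.2}             # placeholder output

params = {"arrival_rate": 45.0,           # customers per hour
          "mean_service_time": 1.1,       # minutes
          "num_tellers": 1}

snapshot = dict(params)                   # copy taken before the run
results = run_simulation(params)

# Echo parameters after the run and flag any that changed during it.
for name, value in params.items():
    flag = "" if snapshot[name] == value else "  <-- CHANGED!"
    print(f"{name} = {value}{flag}")
```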
The interactive run controller (IRC), or debugger, is an essential component of successful
simulation model building. Even the best of simulation analysts makes mistakes or
commits logical errors when building a model.
The IRC assists in finding and correcting those errors in the following ways:
(a) The simulation can be monitored as it progresses.
(b) Attention can be focused on a particular line of logic, or on multiple lines of logic
that constitute a procedure or a particular entity.
(c) Values of selected model components can be observed. When the simulation
has paused, the current value or status of variables, attributes, queues, resources,
counters, etc., can be observed.
(d) The simulation can be temporarily suspended, or paused, not only to view
information but also to reassign values or redirect entities.
• Example: a model of a complex network of queues consisting of many service centers.
– Response time is the primary interest; however, it is important to collect and print
out many statistics in addition to response time.
• Two statistics that give a quick indication of model reasonableness are
current contents and total counts. For example:
– If the current contents grow in a more or less linear fashion as the
simulation run time increases, it is likely that a queue is unstable.
– If the total count for some subsystem is zero, no items entered
that subsystem: a highly suspect occurrence.
– If the total and current counts are both equal to one, this can indicate that an
entity has captured a resource but never freed that resource.
• Compute certain long-run measures of performance, e.g., compute the
long-run server utilization and compare it to the simulation results (see the sketch below).
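As an illustration of the last check, the following sketch assumes the model reduces to an
M/M/1 queue with 45 arrivals per hour and a 1.1-minute mean service time; the long-run
utilization ρ = λ/µ is then known analytically and can be compared with the utilization
that a simple single-server simulation reports.

```python
# Compare analytical long-run utilization (rho = lambda/mu for M/M/1)
# against the utilization observed in a simple FIFO single-server simulation.
import random

random.seed(42)
lam = 45.0                     # arrival rate, customers per hour
mu = 60.0 / 1.1                # service rate per hour (1.1-minute mean)
n_customers = 50_000

clock = depart = busy = 0.0
for _ in range(n_customers):
    clock += random.expovariate(lam)      # next arrival epoch (hours)
    start = max(clock, depart)            # service starts when server is free
    service = random.expovariate(mu)
    depart = start + service
    busy += service

print(f"analytical rho = {lam / mu:.3f}, simulated utilization = {busy / depart:.3f}")
```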

Calibration and Validation of Models (the Naylor and Finger approach, as an aid in the validation process):
Although verification and validation are conceptually distinct, they usually are
conducted simultaneously by the modeler.
Validation is the overall process of comparing the model and its behavior to the
real system and its behavior.
Calibration is the iterative process of comparing the model to the real system,
making adjustments to the model, comparing again, and so on.
The following figure 2 shows the relationship of model calibration to the overall
validation process.
The comparison of the model to reality is carried out by a variety of tests. Tests are subjective
and objective.
Subjective tests usually involve people who are knowledgeable about one or
more aspects of the system making judgments about the model and its output.
Objective tests always require data on the system's behavior plus the
corresponding data produced by the model.
As an aid in the validation process, Naylor and Finger formulated a three-step approach:
1. Build a model that has high face validity.
2. Validate model assumptions.
3. Compare the model input-output transformations to corresponding input-output
transformations for the real system.
1. Face Validity
The first goal of the simulation modeler is to construct a model that appears reasonable
on its face to model users and others who are knowledgeable about the real system being
simulated.
The users of a model should be involved in model construction from its conceptualization
to its implementation to ensure that a high degree of realism is built into the model
through reasonable assumptions regarding system structure and reliable data.

Another advantage of user involvement is the increase in the model's perceived validity, or
credibility, without which managers will not be willing to trust simulation results as the
basis for decision making.
Sensitivity analysis can also be used to check a model's face validity.
The model user is asked if the model behaves in the expected way when one or more
input variables is changed.
Based on experience and observations of the real system, the model user and model
builder would probably have some notion, at least, of the direction of change in model
output when an input variable is increased or decreased (see the sketch below).
The model builder must attempt to choose the most critical input variables for testing if it
is too expensive or time-consuming to vary all input variables.
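A directional sensitivity check can be sketched as follows. The run_model function here is
a hypothetical stand-in (a steady-state M/M/1 delay formula rather than a full simulation),
used only to show the pattern: perturb one input and check the direction of the response.

```python
def run_model(arrival_rate_per_hr, service_rate_per_hr=60.0 / 1.1):
    # Stand-in for a simulation run: M/M/1 mean delay in queue, in minutes.
    lam, mu = arrival_rate_per_hr, service_rate_per_hr
    return 60.0 * lam / (mu * (mu - lam))

base = run_model(45.0)
perturbed = run_model(50.0)   # raise the arrival rate
print(f"avg delay at 45/hr = {base:.2f} min, at 50/hr = {perturbed:.2f} min")
# Expectation: more arrivals -> longer delays; question the model if it disagrees.
assert perturbed > base, "output moved opposite to the expected direction"
```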

2. Validation of Model Assumptions


Model assumptions fall into two general classes: structural assumptions and data
assumptions.
Structural assumptions involve questions of how the system operates and usually involve
simplifications and abstractions of reality.

• Bank example: customer queueing and service facility in a bank.
– Structural assumptions, e.g., customers waiting in one line versus many lines,
served FCFS versus by priority.
– Data assumptions, e.g., interarrival times of customers, service times for
commercial accounts.
• Verify data reliability with bank managers.
• Test correlation and goodness of fit for the data.

The number of tellers may be fixed or variable. These structural assumptions should be
verified by actual observation during appropriate time periods, together with discussions
with managers and tellers regarding bank policies and the actual implementation of these
policies.

Data assumptions should be based on the collection of reliable data and correct statistical
analysis of the data. In the bank example, data were collected on:

1. Interarrival times of customers during several 2-hour periods of peak
loading ("rush-hour" traffic)
2. Interarrival times during a slack period
3. Service times for commercial accounts
4. Service times for personal accounts

Validation is not an either/or proposition: no model is ever totally
representative of the system under study. In addition, each revision of the
model, as in the figure above, involves some cost, time, and effort.
The procedure for analyzing input data consists of three steps:
1: Identifying the appropriate probability distribution.
2: Estimating the parameters of the hypothesized distribution.
3: Validating the assumed statistical model by goodness-of-fit tests, such as the
chi-square test and the Kolmogorov-Smirnov (K-S) test, and by graphical methods
(see the sketch below).
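The three steps can be illustrated with a short sketch, assuming the 90 observed
interarrival times are available as an array (synthetic values are substituted here);
it estimates the exponential rate and applies the Kolmogorov-Smirnov test from scipy.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.exponential(scale=60.0 / 45.0, size=90)  # stand-in interarrivals (minutes)

# Step 1: hypothesize an exponential distribution for interarrival times.
# Step 2: estimate its parameter (the rate) from the sample.
rate = 1.0 / data.mean()

# Step 3: goodness of fit via the K-S test. Note the p-value is only
# approximate when the parameter is estimated from the same sample.
d, p = stats.kstest(data, "expon", args=(0.0, 1.0 / rate))
print(f"estimated rate = {rate:.2f}/min, KS statistic = {d:.3f}, p-value = {p:.3f}")
```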

3. Validating the Input-Output Transformation

In this phase of the validation process, the model is viewed as an input-output transformation.
That is, the model accepts values of the input parameters and transforms these inputs
into output measures of performance. It is this correspondence that is being validated.
Instead of validating the model's input-output transformation by predicting the future, the
modeler may use past historical data that has been reserved for validation purposes; if
one data set has been used to develop and calibrate the model, it is recommended that a separate
data set be used as the final validation test.
Thus, accurate "prediction of the past" may replace prediction of the future for the purpose
of validating the model.
A necessary condition for input-output validation is that some version of the
system under study exists, so that system data under at least one set of input
conditions can be collected and compared to model predictions.
If the system is in the planning stage and no system operating data can be collected,
complete input-output validation is not possible.
Validation increases the modeler's confidence that the model of the existing system is accurate.
Changes in the computerized representation of the system, ranging from
relatively minor to relatively major, include:

1: Minor changes of single numerical parameters, such as the speed of a machine, the
arrival rate of customers, etc.
2: Minor changes of the form of a statistical distribution, such as the distribution of
service times or of the time to failure of a machine.
3: Major changes in the logical structure of a subsystem, such as a change in queue
discipline for a waiting-line model, or a change in the scheduling rule for a job-shop
model.
4: Major changes involving a different design for the new system, such as a computerized
inventory control system replacing a noncomputerized system.

If the change to the computerized representation of the system is minor, such as in items
one or two, these changes can be carefully verified and output from the new model can be
accepted with considerable confidence.

• Example: one drive-in window serviced by one teller; only one or two transactions are
allowed.
– Data collection: 90 customers during 11 am to 1 pm.
• Observed service times {Si, i = 1, 2, ..., 90}.
• Observed interarrival times {Ai, i = 1, 2, ..., 90}.
– Data analysis led to the conclusion that:
• Interarrival times: exponentially distributed with rate λ = 45 per hour
• Service times: N(1.1, 0.2²) minutes

The Black Box [Bank Example: Validate I-O Transformation]
• A model was developed in close consultation with bank management and employees.
• Model assumptions were validated.
• The resulting model is now viewed as a "black box" that transforms inputs into outputs,
f(X, D) = Y:

Input variables, X (random):
– Poisson arrivals, λ = 45/hr: X11, X12, ...
– Service times, N(D2, 0.2²): X21, X22, ...
Decision variables, D:
– D1 = 1 (one teller)
– D2 = 1.1 min (mean service time)
– D3 = 1 (one line)

Model output variables, Y:
Primary interest:
– Y1 = teller's utilization
– Y2 = average delay
– Y3 = maximum line length
Secondary interest:
– Y4 = observed arrival rate
– Y5 = average service time
– Y6 = sample std. dev. of service times
– Y7 = average length of waiting line
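Here is a minimal sketch of this black box, using the stated inputs (Poisson arrivals at
45/hr, N(1.1, 0.2²) service times, one teller, one line): one call is one replication of
f(X, D) = Y. Only Y1 and Y2 are computed; Y3 would additionally require tracking the
queue length over time, which is omitted for brevity.

```python
import random

def bank_model(run_minutes=120.0, lam_per_min=45.0 / 60.0,
               mean_s=1.1, sd_s=0.2, seed=None):
    """One replication of f(X, D) = Y for the one-teller, one-line model."""
    rng = random.Random(seed)
    arrive = depart = busy = 0.0
    delays = []
    while True:
        arrive += rng.expovariate(lam_per_min)        # X1n: exponential interarrival
        if arrive > run_minutes:
            break
        service = max(0.01, rng.gauss(mean_s, sd_s))  # X2n: N(1.1, 0.2^2), truncated > 0
        start = max(arrive, depart)                   # FIFO, single teller
        delays.append(start - arrive)                 # this customer's delay in queue
        depart = start + service
        busy += service
    y1 = busy / max(depart, run_minutes)              # Y1: teller utilization
    y2 = sum(delays) / len(delays)                    # Y2: average delay (minutes)
    return y1, y2

y1, y2 = bank_model(seed=1)
print(f"Y1 (utilization) = {y1:.3f}, Y2 (average delay) = {y2:.2f} min")
```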

Comparison with Real System Data

• Real system data are necessary for validation.
– System responses should have been collected during the same time period (from
11 am to 1 pm on the same Friday).
• Compare the average delay from the model, Y2, with the actual delay, Z2:
– The average delay observed was Z2 = 4.3 minutes; consider this to be the true mean value,
µ0 = 4.3.
– When the model is run with generated random variates X1n and X2n, Y2 should be
close to Z2.
– Six statistically independent replications of the model, each of 2-hour duration,
are run.

Hypothesis Testing [Bank Example: Validate I-O Transformation]


• Compare the average delay from the model, Y2, with the actual delay, Z2 (continued):
– Null hypothesis testing: evaluate whether the simulation and the real system are
the same (with respect to the output measures):
• If H0 is not rejected, then there is no reason to consider the model invalid.
• If H0 is rejected, the current version of the model is rejected, and the
modeler needs to improve the model (see the sketch below).
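A sketch of this test with scipy, using illustrative replication averages (not the
notes' actual data) and µ0 = 4.3 minutes:

```python
from scipy import stats

y2 = [2.79, 1.12, 2.24, 3.45, 3.13, 2.38]   # illustrative Y2 values, six replications
t_stat, p_value = stats.ttest_1samp(y2, popmean=4.3)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value rejects H0: the model's average delay differs from the
# system's, so this version of the model would be revised.
```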
Confidence Interval Testing [Validate I-O Transformation]

• Confidence interval testing: evaluate whether the simulation and the real system are close
enough.
• If Y is the simulation output and µ = E(Y), the confidence interval (C.I.) for µ is:

Ȳ ± t_{α/2, n−1} S/√n

where Ȳ is the sample mean, S the sample standard deviation, and n the number of replications.
• Validating the model, with ε the largest error the analyst is willing to accept:
– Suppose the C.I. does not contain µ0:
• If the best-case error is > ε, the model needs to be refined.
• If the worst-case error is ≤ ε, accept the model.
• If the best-case error is ≤ ε but the worst-case error is > ε, additional
replications are necessary.
– Suppose the C.I. contains µ0:
• If either the best-case or the worst-case error is > ε, additional replications are
necessary.
• If the worst-case error is ≤ ε, accept the model.
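The same illustrative replication data can drive the confidence-interval test; ε = 1
minute is an assumed accuracy requirement, not a value from the notes:

```python
import math
from scipy import stats

y2 = [2.79, 1.12, 2.24, 3.45, 3.13, 2.38]   # illustrative replication averages
mu0, eps = 4.3, 1.0                          # system mean; assumed acceptable error
n = len(y2)
ybar = sum(y2) / n
s = math.sqrt(sum((y - ybar) ** 2 for y in y2) / (n - 1))
half = stats.t.ppf(0.975, n - 1) * s / math.sqrt(n)   # 95% C.I. half-width
lo, hi = ybar - half, ybar + half

# Best/worst-case error: distance from mu0 to the nearest/farthest C.I. endpoint.
best = 0.0 if lo <= mu0 <= hi else min(abs(lo - mu0), abs(hi - mu0))
worst = max(abs(lo - mu0), abs(hi - mu0))
print(f"C.I. = [{lo:.2f}, {hi:.2f}], best-case error = {best:.2f}, "
      f"worst-case error = {worst:.2f} (compare each to eps = {eps})")
```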

Input-Output Validation: Using Historical Input Data


When using artificially generated data as input, the modeler expects the model to
produce event patterns that are compatible with, but not identical to, the event patterns
that occurred in the real system during the period of data collection.

Thus, in the bank model, artificial input data {X1n, X2n, n = 1, 2, ...} for interarrival and service
times were generated, and replicates of the output data Y2 were compared to what was
observed in the real system.
An alternative to generating input data is to use the actual historical record, {An, Sn, n =
1, 2, ...}, to drive the simulation model and then to compare model output to system data.
To implement this technique for the bank model, the data A1, A2, ..., S1, S2, ... would have to
be entered into the model, into arrays or stored in a file, to be read as the need arose
(see the sketch below).
To conduct a validation test using historical input data, it is important that all the input data
(An, Sn, ...) and all the system response data, such as the average delay (Z2), be collected
during the same time period.
Otherwise, the comparison of model responses to system responses, such as the comparison
of the average delay in the model (Y2) to that in the system (Z2), could be misleading. The
responses (Y2 and Z2) depend on the inputs (An and Sn) as well as on the structure of the
system, or model.
Implementation of this technique could be difficult for a large system, because of the need
for simultaneous data collection of all input variables and of those response variables of
primary interest.
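A sketch of the trace-driven variant, assuming the historical record is already loaded
into arrays A (interarrival times) and S (service times), in minutes; the values below
are placeholders, not recorded data:

```python
def trace_driven_y2(A, S):
    """Drive the one-teller model with recorded inputs; return Y2 (avg delay)."""
    arrive = depart = total_delay = 0.0
    for a, s in zip(A, S):
        arrive += a                    # recorded arrival instant
        start = max(arrive, depart)    # teller serves FIFO
        total_delay += start - arrive
        depart = start + s
    return total_delay / len(A)

A = [1.2, 0.8, 2.5, 0.3, 1.9]          # placeholder historical interarrivals
S = [1.0, 1.3, 0.9, 1.2, 1.1]          # placeholder historical service times
print(f"Y2 from historical inputs = {trace_driven_y2(A, S):.2f} min")
# Compare this Y2 against the Z2 recorded over the same time period.
```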
Input-Output Validation: Using a Turing Test

In addition to statistical tests, or when no statistical test is readily applicable, persons
knowledgeable about system behavior can be used to compare model output to system
output.

For example, suppose that five reports of system performance over five different days are
prepared, and simulation output is used to produce five "fake" reports. The 10 reports
should all be in exactly the same format and should contain information of the type
that managers and engineers have previously seen on the system.
The ten reports are randomly shuffled and given to the engineer, who is asked to decide
which reports are fake and which are real.
If the engineer identifies a substantial number of the fake reports, the model builder
questions the engineer and uses the information gained to improve the model.
If the engineer cannot distinguish between fake and real reports with any consistency, the
modeler will conclude that this test provides no evidence of model inadequacy.

This type of validation test is called a Turing test.

Optimization via simulation:

"Optimization via simulation" refers to the problem of maximizing or minimizing the
expected performance of a discrete-event, stochastic system that is represented by a
computer simulation model.
Conventional optimization deals with deterministic problems, but in stochastic discrete-event
simulation the result of any simulation run is a random variable.
Let x1, x2, ..., xm be the m controllable design variables and Y(x1, x2, ..., xm) be the observed
simulation output performance on one run.
To optimize Y(x1, x2, ..., xm) with respect to x1, x2, ..., xm is to maximize or minimize the
mathematical expectation of performance, E[Y(x1, x2, ..., xm)] (see the sketch below).
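A sketch of the idea on the bank model: treat the number of tellers x as the design
variable, estimate E[Y(x)] by averaging replications, and pick the x with the best
estimated expected cost. The cost weights below are hypothetical, purely for illustration.

```python
import random

def replication(num_tellers, rng, n_customers=2000,
                lam_per_min=45.0 / 60.0, mean_s=1.1, sd_s=0.2):
    """One run of a multi-teller FIFO queue; returns average delay (minutes)."""
    free_at = [0.0] * num_tellers
    arrive = total_delay = 0.0
    for _ in range(n_customers):
        arrive += rng.expovariate(lam_per_min)
        k = min(range(num_tellers), key=lambda i: free_at[i])  # earliest-free teller
        start = max(arrive, free_at[k])
        total_delay += start - arrive
        free_at[k] = start + max(0.01, rng.gauss(mean_s, sd_s))
    return total_delay / n_customers

def estimated_expected_cost(x, n_reps=10):
    # Estimate E[Y(x)] by averaging independent replications at design point x.
    rng = random.Random(x)
    avg_delay = sum(replication(x, rng) for _ in range(n_reps)) / n_reps
    return 2.0 * avg_delay + 5.0 * x   # hypothetical delay cost + staffing cost

best_x = min(range(1, 5), key=estimated_expected_cost)   # search x in {1, 2, 3, 4}
print(f"estimated best number of tellers: {best_x}")
```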
