
UNIVERSITY OF MINES AND TECHNOLOGY

TARKWA
Marking Scheme for Class Assignment

PROCESS INSTRUMENTATION (MR 373)


1. Explain what is meant by:
(a) active instruments - the quantity being measured modulates the magnitude of some external power source.
(b) passive instruments - the instrument output is produced entirely by the quantity being measured.
2. Give examples and discuss the relative merits of these two classes of instruments.
Examples of passive instruments are manometers, Bourdon pressure gauges, etc.
Examples of active instruments are liquid level indicators, flow indicators, etc.
Relative merits:
Active instruments offer better measurement resolution, although this is limited by heating effects from the power source and by safety considerations that may restrict the supply to a small voltage. Passive instruments are simpler to construct and need no power supply, but their measurement resolution is limited.
3. What are the differences between analog and digital instruments? An analog instrument gives an output that varies continuously as the quantity being measured changes (it presents information about the measured variable as a continuous variation with respect to time), while a digital instrument has an output that varies in discrete steps and so can only take a finite number of values.
What advantages do digital instruments have over analog ones?
• A digital instrument indicates readings directly in decimal numbers, so errors arising from human factors such as parallax and approximation are eliminated.
• The outputs of digital instruments are already in digital form and can therefore be fed directly into memory devices.
4. Explain the difference between static and dynamic characteristics of measuring
instruments.
Static characteristics are the set of criteria used for comparing the performance of instruments when measuring a quantity, parameter or condition that is constant or changes only slowly with time (steady-state situations), while dynamic characteristics are the set of criteria used for comparing the performance of instruments when measuring a quantity, parameter or condition that varies rapidly with time.

5. Briefly define and explain all the static characteristics of measuring instruments.
I. Accuracy: describes how close a measurement of a quantity, parameter or condition of a process variable comes to the true value of the process variable.
II. Static Error: the difference between the measured value and the true value of the process variable (under static conditions).
III. Precision: the ability of an instrument to reproduce a set of readings within a given accuracy of a process variable.
IV. Drift: the change in the indicated reading of an instrument over time when the value of the measured quantity remains constant.
V. Sensitivity: the ratio of the change in the output (response) of an instrument to the change in its input (a numerical sketch follows this list).
VI. Dead Zone: the largest range of values of a measured variable to which the instrument does not respond.
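A minimal numerical sketch of the sensitivity definition, using hypothetical values that are not part of the marking scheme:

    # Hedged sketch (hypothetical values): sensitivity as the ratio of the
    # change in output to the change in input, per item V above.
    def sensitivity(delta_output, delta_input):
        """Return instrument sensitivity = change in output / change in input."""
        return delta_output / delta_input

    # Example: a hypothetical temperature sensor whose output rises by 2.0 mV
    # when the measured temperature rises by 10.0 degrees C.
    print(sensitivity(2.0, 10.0), "mV per degree C")  # 0.2 mV per degree C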
6. How is the accuracy of an instrument usually defined? - The accuracy of an instrument is a
measure of how close the output reading of the instrument is to the correct value.
What is the difference between accuracy and precision?
Accuracy is how close a measurement of a quantity is to the true value, while precision is the ability of an instrument to reproduce a set of readings within a given accuracy of a process variable. High precision does not imply accuracy: a high-precision instrument may have a low accuracy. Low-accuracy measurements from a high-precision instrument are normally caused by a bias in the measurements, which is removable by recalibration.
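A minimal illustration with hypothetical readings (not from the marking scheme): a high-precision instrument gives tightly clustered readings, yet a systematic bias keeps them away from the true value until recalibration removes it.

    # Hedged sketch (hypothetical readings): high precision, low accuracy.
    true_value = 100.0
    readings = [103.1, 102.9, 103.0, 103.2, 102.8]  # tightly clustered -> precise

    mean_reading = sum(readings) / len(readings)
    bias = mean_reading - true_value                # systematic offset of about +3.0

    # Recalibration removes the bias, restoring accuracy while keeping precision.
    recalibrated = [r - bias for r in readings]
    print(f"bias = {bias:.2f}; recalibrated mean = {sum(recalibrated)/len(recalibrated):.2f}")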
7. A manganin-wire pressure sensor has a measurement range of 0-20,000 bar and a quoted
inaccuracy of ±1% of full-scale deflection. What is the maximum measurement error when
the instrument is reading a pressure of 15,000 bar?
Solution
The maximum error expected in any measurement reading is 1.0% of the full-scale reading, which is 20,000 bar for this particular instrument. Hence, the maximum likely error is 1.0% × 20,000 bar = 200 bar. The maximum measurement error is a constant value related to the full-scale reading of the instrument, irrespective of the magnitude of the quantity that the instrument is actually measuring. In this case, as worked out above, the magnitude of the error is 200 bar. Thus, when measuring a pressure of 15,000 bar, the maximum possible error of 200 bar is 1.33% of the measurement value.
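As a cross-check of the arithmetic above, here is a minimal Python sketch (not part of the original marking scheme) that computes the full-scale error and expresses it as a percentage of the 15,000 bar reading.

    # Hedged sketch: reproduces the Question 7 calculation numerically.
    full_scale = 20_000.0   # bar, instrument full-scale deflection
    inaccuracy = 0.01       # quoted inaccuracy: +/-1% of full-scale
    reading = 15_000.0      # bar, pressure actually being measured

    max_error = inaccuracy * full_scale             # 200 bar
    percent_of_reading = 100 * max_error / reading  # about 1.33%

    print(f"Maximum measurement error: {max_error:.0f} bar")
    print(f"Error as % of reading: {percent_of_reading:.2f}%")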

Date of Submission: 09/02/2021


E. Abotar
