Majabague Assignment#1


ECEN 30054:

FUNDAMENTALS OF
INSTRUMENTATION AND
CONTROL

Submitted by: Veronica Jane A. Majabague


BSECE 2-2
Submitted to: Engr. Meldanette Bayani
ASSIGNMENT #1
ANSWER THE FOLLOWING:

1. 3 Basic functions of an instrument

2. Fundamental units

3. What is an international standard?

4. Primary standard

5. Secondary standard

6. Working standard

7. Define the difference between accuracy and precision.

8. What is error in measurement?

9. What are the types of errors?


1. 3 Basic functions of an instrument

Based on their functions, instruments fall into three main groups. The largest group performs only the indicating function. Next is the group of instruments that perform both indicating and recording functions. The last group falls into a special category and performs all three functions, i.e., indicating, recording, and controlling. (http://ecoursesonline.iasri.res.in/mod/resource/view.php?id=147070)

1. Indicating - This function supplies information concerning the variable quantity under measurement. Several types of methods can be employed in instruments and systems for this purpose. Most of the time, this information is presented as the deflection of a pointer of a measuring instrument.

2. Recording - In many cases the instrument makes a written record, usually on paper, of the value of the quantity under measurement against time or against some other variable. This is the recording function performed by the instrument. For example, a temperature indicator/recorder in an HTST pasteurizer records the instantaneous temperatures on a strip chart recorder.

3. Controlling - This is one of the most important functions, especially in the food processing industries, where processing operations must be precisely controlled. In this case, the information is used by the instrument or the system to control the original measured variable or quantity.

2. Fundamental units

The fundamental units are the units of the fundamental quantities, as defined by
the International System of Units. They are not dependent upon any other units,
and all other units are derived from them. In the International System of Units,
the fundamental units are:

1. The meter (symbol: m), used to measure length.

2. The kilogram (symbol: kg), used to measure mass.

3. The second (symbol: s), used to measure time.

4. The ampere (symbol: A), used to measure electric current.

5. The kelvin (symbol: K), used to measure temperature.

6. The mole (symbol: mol), used to measure amount of substance or
particles in matter.

7. The candela (symbol: cd), used to measure light intensity.

Fundamental units are independent of one another; none can be expressed in terms of the others. (https://en.wikiversity.org/wiki/Fundamental_units)

History of the SI System

The SI units of measurement have an interesting history. Over time they have
been refined for clarity and simplicity.
(https://courses.lumenlearning.com/boundless-chemistry/chapter/units-of-
measurement/)

 The meter (m), or metre, was originally defined as 1/10,000,000 of the distance from the Earth’s equator to the North Pole measured on the circumference through Paris. In modern terms, it is defined as the distance traveled by light in a vacuum over a time interval of 1/299,792,458 of a second.

 The kilogram (kg) was originally defined as the mass of a liter of water (i.e., of one thousandth of a cubic meter). Until 2019 it was defined as the mass of a platinum-iridium prototype kilogram maintained by the Bureau International des Poids et Mesures in Sèvres, France; it is now defined by fixing the numerical value of the Planck constant.

 The second (s) was originally based on a “standard day” of 24 hours, with
each hour divided in 60 minutes and each minute divided in 60 seconds.
However, we now know that a complete rotation of the Earth actually
takes 23 hours, 56 minutes, and 4.1 seconds. Therefore, a second is now
defined as the duration of 9,192,631,770 periods of the radiation
corresponding to the transition between the two hyperfine levels of the
ground state of the cesium-133 atom.

 The ampere (A) is a measure of the amount of electric charge passing a point in an electric circuit per unit time. 6.241 × 10^18 electrons, or one coulomb, per second constitutes one ampere.

 The kelvin (K) is the unit of the thermodynamic temperature scale. This
scale starts at 0 K. The incremental size of the kelvin is the same as that
of the degree on the Celsius (also called centigrade) scale. The kelvin is
the fraction 1/273.16 of the thermodynamic temperature of the triple point
of water (exactly 0.01 °C, or 32.018 °F).

 The mole (mol) is a number that relates molecular or atomic mass to a constant number of particles. It is defined as the amount of a substance that contains as many elementary entities as there are atoms in 0.012 kg of carbon-12.

 The candela (cd) was so named to refer to “candlepower” back in the days when candles were the most common source of illumination (because so many people used candles, their properties were standardized). Now, with the prevalence of incandescent and fluorescent light sources, the candela is defined as the luminous intensity in a given direction of a source that emits monochromatic radiation of frequency 540 × 10^12 Hz and that has a radiant intensity in that direction of 1/683 watts per steradian.
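Two of the numeric claims above can be checked with plain arithmetic. The sketch below uses the exact post-2019 SI defining constants; it is only an illustrative check, not part of the cited sources.

```python
# Exact SI defining constants (post-2019 redefinition)
elementary_charge = 1.602176634e-19  # coulombs per elementary charge
speed_of_light = 299_792_458         # metres per second

# One ampere is one coulomb per second; electrons per coulomb:
electrons_per_coulomb = 1 / elementary_charge
print(f"{electrons_per_coulomb:.4e} electrons per coulomb")  # 6.2415e+18

# One metre is the distance light travels in 1/299,792,458 of a second:
print(speed_of_light / 299_792_458, "metre")  # 1.0 metre
```

This recovers the "6.241 × 10^18 electrons" figure quoted for the ampere and confirms that the metre's defining ratio is self-consistent.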

SI base units

second (symbol s; measures time; dimension symbol T)
Post-2019 formal definition: "The second, symbol s, is the SI unit of time. It is defined by taking the fixed numerical value of the caesium frequency ∆νCs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s^−1."
Historical origin / justification: The day is divided into 24 hours, each hour into 60 minutes, and each minute into 60 seconds; a second is 1/(24 × 60 × 60) of the day. Historically this day was defined as the mean solar day, i.e., the average time between two successive occurrences of local apparent solar noon.

metre (symbol m; measures length; dimension symbol L)
Post-2019 formal definition: "The metre, symbol m, is the SI unit of length. It is defined by taking the fixed numerical value of the speed of light in vacuum c to be 299 792 458 when expressed in the unit m s^−1, where the second is defined in terms of ∆νCs."
Historical origin / justification: 1/10,000,000 of the distance from the Earth's equator to the North Pole measured on the meridian arc through Paris.

kilogram (symbol kg; measures mass; dimension symbol M)
Post-2019 formal definition: "The kilogram, symbol kg, is the SI unit of mass. It is defined by taking the fixed numerical value of the Planck constant h to be 6.626 070 15 × 10^−34 when expressed in the unit J s, which is equal to kg m^2 s^−1, where the metre and the second are defined in terms of c and ∆νCs."
Historical origin / justification: The mass of one litre of water at the temperature of melting ice; a litre is one thousandth of a cubic metre.

ampere (symbol A; measures electric current; dimension symbol I)
Post-2019 formal definition: "The ampere, symbol A, is the SI unit of electric current. It is defined by taking the fixed numerical value of the elementary charge e to be 1.602 176 634 × 10^−19 when expressed in the unit C, which is equal to A s, where the second is defined in terms of ∆νCs."
Historical origin / justification: The original "International Ampere" was defined electrochemically as the current required to deposit 1.118 milligrams of silver per second from a solution of silver nitrate; compared to the SI ampere, the difference is 0.015%. However, the most recent pre-2019 definition was: "The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed one metre apart in vacuum, would produce between these conductors a force equal to 2 × 10^−7 newtons per metre of length." This had the effect of defining the vacuum permeability to be μ0 = 4π × 10^−7 H/m (equivalently N/A^2, T·m/A, Wb/(A·m), or V·s/(A·m)).

kelvin (symbol K; measures thermodynamic temperature; dimension symbol Θ)
Post-2019 formal definition: "The kelvin, symbol K, is the SI unit of thermodynamic temperature. It is defined by taking the fixed numerical value of the Boltzmann constant k to be 1.380 649 × 10^−23 when expressed in the unit J K^−1, which is equal to kg m^2 s^−2 K^−1, where the kilogram, metre and second are defined in terms of h, c and ∆νCs."
Historical origin / justification: The Celsius scale: the Kelvin scale uses the degree Celsius for its unit increment, but is a thermodynamic scale (0 K is absolute zero).

mole (symbol mol; measures amount of substance; dimension symbol N)
Post-2019 formal definition: "The mole, symbol mol, is the SI unit of amount of substance. One mole contains exactly 6.022 140 76 × 10^23 elementary entities. This number is the fixed numerical value of the Avogadro constant, NA, when expressed in the unit mol^−1 and is called the Avogadro number. The amount of substance, symbol n, of a system is a measure of the number of specified elementary entities. An elementary entity may be an atom, a molecule, an ion, an electron, any other particle or specified group of particles."
Historical origin / justification: Atomic weight or molecular weight divided by the molar mass constant, 1 g/mol.

candela (symbol cd; measures luminous intensity; dimension symbol J)
Post-2019 formal definition: "The candela, symbol cd, is the SI unit of luminous intensity in a given direction. It is defined by taking the fixed numerical value of the luminous efficacy of monochromatic radiation of frequency 540 × 10^12 Hz, Kcd, to be 683 when expressed in the unit lm W^−1, which is equal to cd sr W^−1, or cd sr kg^−1 m^−2 s^3, where the kilogram, metre and second are defined in terms of h, c and ∆νCs."
Historical origin / justification: The candlepower, which is based on the light emitted from a burning candle of standard properties.

3. What is an international standard?

- A standard recognized by international agreement to serve internationally as the basis for assigning values to other standards of the quantity concerned.

-International standards are technical standards developed by international standards organizations. International standards are available for consideration and use worldwide. The most prominent such organization is the International Organization for Standardization (ISO). Other prominent international standards organizations include the International Telecommunication Union (ITU) and the International Electrotechnical Commission (IEC). Together these three organizations have formed the World Standards Cooperation alliance.

International standards may be used either by direct application or by a process of modifying an international standard to suit local conditions. The adoption of international standards results in the creation of equivalent national standards that are substantially the same as international standards in technical content, but may have (i) editorial differences as to appearance, use of symbols and measurement units, substitution of a point for a comma as the decimal marker, and (ii) differences resulting from conflicts in governmental regulations or industry-specific requirements caused by fundamental climatic, geographical, technological, or infrastructural factors, or the stringency of safety requirements that a given standard authority considers appropriate.

International standards are one way of overcoming technical barriers in international commerce caused by differences among technical regulations and standards developed independently and separately by each nation, national standards organization, or company. Technical barriers arise when different groups come together, each with a large user base, doing some well-established thing that between them is mutually incompatible. Establishing international standards is one way of preventing or overcoming this problem. (https://en.wikipedia.org/wiki/International_standard)

4. Primary standard

A standard that is designated or widely acknowledged as having the highest metrological qualities and whose value is accepted without reference to other standards of the same quantity. NOTE: The concept of a primary standard is equally valid for base quantities and derived quantities. (http://www.autex.spb.su/download/wavelet/books/sensor/CH05.PDF)

1. An example of a primary standard was the international prototype of the kilogram (IPK), which was the master kilogram and the primary mass standard for the International System of Units (SI). The IPK is a one-kilogram mass of a platinum-iridium alloy maintained by the International Bureau of Weights and Measures (BIPM) in Sèvres, France.

2. Another example is the unit of electrical potential, the volt. Formerly it was defined in terms of standard cell electrochemical batteries, which limited the stability and precision of the definition. Currently the volt is defined in terms of the output of a Josephson junction,[3] which bears a direct relationship to fundamental physical constants. (https://en.wikipedia.org/wiki/Standard_(metrology))

5. Secondary standard

A standard whose value is assigned by comparison with a primary standard of the same quantity. (http://www.autex.spb.su/download/wavelet/books/sensor/CH05.PDF)

Secondary reference standards are very close approximations of primary reference standards. For example, major national measuring laboratories such as the US's National Institute of Standards and Technology (NIST) will hold several "national standard" kilograms, which are periodically calibrated against the IPK and each other. (https://en.wikipedia.org/wiki/Standard_(metrology))

6. Working standard

A standard that is used routinely to calibrate or check material measures, measuring instruments, or reference materials. NOTES: 1. A working standard is usually calibrated against a reference standard. 2. A working standard used routinely to ensure that a measurement is being carried out correctly is called a check standard. (http://www.autex.spb.su/download/wavelet/books/sensor/CH05.PDF)

A machine shop will have physical working standards (gauge blocks for example)
that are used for checking its measuring instruments. Working standards and
certified reference materials used in commerce and industry have a traceable
relationship to the secondary and primary standards.

Working standards are expected to deteriorate and are no longer considered traceable to a national standard after a time or use count expires. (https://en.wikipedia.org/wiki/Standard_(metrology))

7. Define the difference between accuracy and precision.

Accuracy is the degree of closeness to the true value. Precision is the degree to which an instrument or process will repeat the same value. In other words, accuracy is the degree of veracity while precision is the degree of reproducibility. (https://www.forecast.app/faqs/what-is-the-difference-between-accuracy-and-precision)

Both accuracy and precision reflect how close a measurement is to an actual value, but they are not the same. Accuracy reflects how close a measurement is to a known or accepted value, while precision reflects how reproducible measurements are, even if they are far from the accepted value. Measurements that are both precise and accurate are repeatable and very close to true values.

The example of a darts board is often used when talking about the difference
between accuracy and precision.

Accurately hitting the target means you are close to the center of the target, even
if all the marks are on different sides of the center. Precisely hitting a target
means all the hits are closely spaced, even if they are very far from the center of
the target. (https://www.precisa.co.uk/difference-between-accuracy-and-
precision-measurements/)

In a set of measurements, accuracy is the closeness of the measurements to a specific value, while precision is the closeness of the measurements to each other.

Accuracy has two definitions:

1. More commonly, it is a description of systematic errors, a measure of statistical bias; low accuracy causes a difference between a result and a "true" value. ISO calls this trueness.

2. Alternatively, ISO defines[1] accuracy as describing a combination of both types of observational error above (random and systematic), so high accuracy requires both high precision and high trueness.

Precision is a description of random errors, a measure of statistical variability.

In simpler terms, given a set of data points from repeated measurements of the
same quantity, the set can be said to be accurate if their average is close to
the true value of the quantity being measured, while the set can be said to
be precise if the values are close to each other. In the first, more common
definition of "accuracy" above, the two concepts are independent of each other,
so a particular set of data can be said to be either accurate, or precise, or both,
or neither. (https://en.wikipedia.org/wiki/Accuracy_and_precision)
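The distinction can be made concrete numerically: the bias of the mean measures accuracy, while the spread of repeated readings measures precision. A small sketch (the two data sets below are invented for illustration):

```python
import statistics

# Quantifying accuracy (closeness of the mean to the true value) and
# precision (spread of repeated measurements) for two hypothetical data sets.
true_value = 100.0

accurate_but_imprecise = [90.0, 110.0, 95.0, 105.0]    # mean is 100, widely spread
precise_but_inaccurate = [120.1, 119.9, 120.0, 120.2]  # tightly clustered, far from 100

for data in (accurate_but_imprecise, precise_but_inaccurate):
    bias = statistics.mean(data) - true_value   # accuracy: systematic offset
    spread = statistics.stdev(data)             # precision: repeatability
    print(f"bias = {bias:+.2f}, spread = {spread:.2f}")
```

The first set is accurate but not precise (zero bias, large spread); the second is precise but not accurate (tiny spread, large bias), mirroring the darts-board example above.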

8. What is error in measurement?

1. Measurement Error (also called Observational Error) is the difference between a measured quantity and its true value. It includes random error (naturally occurring errors that are to be expected with any experiment) and systematic error (caused, for example, by a mis-calibrated instrument that affects all measurements). For example, suppose you were measuring the weights of 100 marathon athletes. If the scale you use is one pound off, this is a systematic error that will make every athlete's recorded body weight off by a pound. On the other hand, suppose your scale was accurate. Some athletes might be more dehydrated than others. Some might have wetter (and therefore heavier) clothing or a 2 oz. candy bar in a pocket. These are random errors and are to be expected; in fact, all collected samples will have random errors, which are, for the most part, unavoidable. Measurement errors can grow quickly when used in formulas. For example, because kinetic energy depends on the square of velocity, a small relative error in a velocity measurement is roughly doubled in the computed kinetic energy. To account for this, you should use a formula for error propagation whenever you use uncertain measures in an experiment to calculate something else. (https://www.statisticshowto.com/measurement-error/)
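The kinetic-energy remark can be made concrete with first-order error propagation: since KE = ½mv² depends on v², a 1% error in v produces about a 2% error in KE (a sketch with invented numbers; the mass is treated as exact):

```python
# First-order error propagation for kinetic energy, KE = 0.5 * m * v**2
m = 70.0    # kg, assumed exact for this illustration
v = 10.0    # m/s, measured value
dv = 0.1    # absolute uncertainty in v (1% relative)

ke = 0.5 * m * v**2   # 3500 J
dke = m * v * dv      # d(KE)/dv = m*v, so delta_KE ~= m*v*dv = 70 J

print(f"relative error in v : {dv / v:.1%}")    # 1.0%
print(f"relative error in KE: {dke / ke:.1%}")  # 2.0%
```

The factor of 2 comes from the exponent on v; a quantity depending on v³ would triple the relative error, and so on.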

2. The measurement error is defined as the difference between the true or actual value and the measured value, where the true value is taken as the average of an infinite number of measurements and the measured value is the value actually obtained from the instrument. (https://circuitglobe.com/measurement-error.html)

9. What are the types of errors?

Random errors are statistical fluctuations (in either direction) in the measured data due to the precision limitations of the measurement device. Random errors can be evaluated through statistical analysis and can be reduced by averaging over many observations (see standard error).
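This averaging behaviour is easy to simulate: the standard error of the mean falls as σ/√n. A sketch using simulated Gaussian noise (the true value and noise level below are invented for the example):

```python
import random
import statistics

random.seed(0)
true_value, sigma = 50.0, 2.0  # hypothetical quantity and noise level

def average_of(n):
    """Mean of n noisy readings of the same quantity."""
    readings = [random.gauss(true_value, sigma) for _ in range(n)]
    return statistics.mean(readings)

for n in (1, 10, 100, 1000):
    standard_error = sigma / n ** 0.5
    print(f"n = {n:4d}: mean = {average_of(n):7.3f}, standard error ~ {standard_error:.3f}")
```

As n grows, the averaged result clusters ever more tightly around the true value; this is exactly why taking many observations suppresses random (but not systematic) error.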

Systematic errors are reproducible inaccuracies that are consistently in the same direction. These errors are difficult to detect and cannot be analyzed statistically. If a systematic error is identified when calibrating against a standard, applying a correction or correction factor to compensate for the effect can reduce the bias. Unlike random errors, systematic errors cannot be detected or reduced by increasing the number of observations. (https://www.webassign.net/question_assets/unccolphysmechl1/measurements/manual.html)

Other sources classify measurement errors as follows:

1. Absolute Error: the amount of error in your measurement. For example, if you step on a scale and it says 150 pounds, but you know your true weight is 145 pounds, then the scale has an absolute error of 150 lbs – 145 lbs = 5 lbs.

2. Greatest Possible Error: defined as one half of the measuring unit. For example, if you use a ruler that measures in whole yards (i.e., without any fractions), then the greatest possible error is one half yard.

3. Instrument Error: error caused by an inaccurate instrument (like a scale that is off or a poorly worded questionnaire).

4. Margin of Error: an amount above and below your measurement. For
example, you might say that the average baby weighs 8 pounds with a
margin of error of 2 pounds (± 2 lbs.).

5. Measurement Location Error: caused by an instrument being placed somewhere it shouldn’t be, like a thermometer left out in the full sun.

6. Operator Error: human factors that cause error, like reading a scale
incorrectly.

7. Percent Error: another way of expressing measurement error, defined as: Percent Error = (|measured value − accepted value| / accepted value) × 100%.

8. Relative Error: the ratio of the absolute error to the accepted measurement. As a formula: Relative Error = Absolute Error / Accepted Measurement.

https://www.statisticshowto.com/measurement-error/
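Items 1, 7, and 8 above are closely related, and the bathroom-scale example from item 1 illustrates all three (a small sketch):

```python
# Scale reads 150 lb; true weight is 145 lb (the example from item 1 above)
measured, accepted = 150.0, 145.0

absolute_error = abs(measured - accepted)   # 5.0 lb
relative_error = absolute_error / accepted  # ~0.0345
percent_error = relative_error * 100        # ~3.45 %

print(f"absolute: {absolute_error} lb, "
      f"relative: {relative_error:.4f}, percent: {percent_error:.2f}%")
```

Note that relative and percent error normalize the absolute error by the accepted value, which is what makes errors on different scales comparable.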

Errors may arise from different sources and are usually classified into the following types:

1. Gross Errors

2. Systematic Errors

3. Random Errors

These types are explained in detail below.

1. Gross Errors

Gross errors occur because of human mistakes. For example, the person using the instrument may take a wrong reading or record incorrect data. Such errors come under the gross error. Gross errors can only be avoided by taking readings carefully.

For example, the experimenter reads 31.5 ºC while the actual reading is 21.5 ºC. This happens because of oversight: the experimenter takes the wrong reading, and because of this an error occurs in the measurement.

Such errors are very common in measurement, and their complete elimination is not possible. Some gross errors are easily detected by the experimenter, but some are difficult to find. Two methods can reduce gross errors:

 Readings should be taken very carefully.

 Two or more readings of the measured quantity should be taken, by different experimenters and at different points, to remove the error.

2. Systematic Errors

The systematic errors are mainly classified into three categories:

1. Instrumental Errors

2. Environmental Errors

3. Observational Errors

2 (i) Instrumental Errors

These errors mainly arise due to three main reasons:

(a) Inherent Shortcomings of Instruments - Such errors are inbuilt in instruments because of their mechanical structure. They may be due to the manufacture, calibration, or operation of the device, and they may cause the instrument to read too low or too high.

For example, if the instrument uses a weak spring, it gives a high value of the measured quantity. Errors also occur in an instrument because of friction or hysteresis loss.

(b) Misuse of Instrument - The error occurs because of the fault of the operator. A good instrument used in an unintelligent way may give erroneous results.

For example, misuse of the instrument may include failure to adjust the zero of the instrument, poor initial adjustment, or using leads of too high resistance. These improper practices may not cause permanent damage to the instrument, but all the same they cause errors.

(c) Loading Effect - This is the most common type of error caused by the instrument in measurement work. For example, when a voltmeter is connected across a high-resistance circuit it gives a misleading reading, whereas when it is connected across a low-resistance circuit it gives a dependable reading. This means the voltmeter has a loading effect on the circuit.

The error caused by the loading effect can be overcome by using the meters intelligently. For example, when measuring a low resistance by the ammeter-voltmeter method, a voltmeter having a very high resistance should be used.
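The loading effect can be quantified with a simple voltage-divider model: a source of open-circuit voltage V and output resistance R_s, read by a voltmeter of input resistance R_m, actually shows V·R_m/(R_s + R_m). A sketch with invented component values:

```python
# Why a voltmeter's finite input resistance "loads" the circuit it measures.
def reading(v_true, r_source, r_meter):
    """Voltage the meter displays: the divider formed by r_source and r_meter."""
    return v_true * r_meter / (r_source + r_meter)

v = 10.0       # volts, true open-circuit voltage (assumed value)
r_meter = 1e6  # 1 MOhm voltmeter input resistance (assumed value)

for r_source in (1e2, 1e6):  # low- vs high-resistance circuit
    v_read = reading(v, r_source, r_meter)
    error = (v - v_read) / v * 100
    print(f"R_source = {r_source:9.0e} Ohm: reads {v_read:.3f} V ({error:.1f}% low)")
```

With a 100 Ω source the reading is essentially exact, but when the circuit resistance is comparable to the meter's own (1 MΩ here), the reading drops by half, which is why a very high-resistance voltmeter is recommended above.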

2 (ii) Environmental Errors

These errors are due to the external conditions of the measuring device. Such errors mainly occur due to the effects of temperature, pressure, humidity, dust, vibration, or magnetic or electrostatic fields. The corrective measures employed to eliminate or reduce these undesirable effects are:

 Keep the conditions as constant as possible.

 Use equipment that is free from these effects.

 Use techniques that eliminate the effect of these disturbances.

 Apply computed corrections.

2 (iii) Observational Errors

Such errors are due to wrong observation of the reading, and they have many sources. For example, the pointer of a voltmeter rests slightly above the surface of the scale, so an error occurs (because of parallax) unless the line of vision of the observer is exactly above the pointer. To minimise parallax error, highly accurate meters are provided with mirrored scales.

3. Random Errors

Errors caused by sudden changes in ambient conditions are called random errors. These errors remain even after the removal of systematic errors; hence they are also called residual errors. (https://circuitglobe.com/measurement-error.html)
