Intro To SE Lecture Notes 1 (Chapters 1-3)
a. Microcomputers
A relatively compact type of computer and the most common of all; microcomputers easily outsell all other types of computers annually for use in business and at home.
Different types of Microcomputers:
Desktop Computers, Notebook Computers/Laptop Computers, Tablet PCs, Personal Digital
Assistants, Palm PCs
b. Midrange (Mini) computers
A computer used to interconnect people and large sets of information. More powerful than a microcomputer, the minicomputer is usually dedicated to performing specific functions.
c. Mainframes
d. Supercomputers
The most powerful of all computers, supercomputers were designed to solve problems
consisting of long and difficult calculations.
1.2 The Computer System
Objectives: What a computer system is and how it works to process data
Computer hardware is an electronic device with the potential to solve problems. However, one has to give the hardware precise instructions in order to solve a problem.
The finite set of instructions (steps) that the computer follows to perform a given job is called a program.
Before a program can be executed, it must first be loaded into memory.
Introduction To Computing and Software Engineering 10 Compiled
by: Tesfaye M
Software: a collection of programs and routines that support the operation of performing a task using a computer. Software also includes documentation, rules and operational procedures. Software provides the interface between the user and the electronic components of the computer.
Computer software is classified into two categories:
1. System software
2. Application software
1. System Software
- Constitutes those programs which facilitate the work of the computer hardware.
- Organizes and manages the machine's resources and handles the input/output devices.
- Controls the hardware by performing functions that users shouldn't have to, or are unable to, handle.
- Makes complex hardware more user friendly.
- Acts as an intermediary between the user and the hardware.
- Enables the computer to understand programming languages, i.e. it serves as a means of communication between the user and the computer.
The important categories of system software are:
a) Operating System
b) Language Software
a) Operating system
Operating system coordinates the activity between the user and the computer. An
operating system has huge tasks/ functions. These include:
Controlling operations (control program)
- Coordinates, or supervises, the activity of the computer system.
- Decides where programs and data should be stored in the computer memory.
- Handles communications among the computer components, application software and the user.
- Controls the saving and retrieving of files to and from disks in the disk drive.
- Performs all its controlling tasks without the involvement or awareness of the user.
2. Application Software
- Software designed to perform tasks in a specific area or areas, but for use in more than one installation.
- Usually called application packages, as they may include a number of programs along with operating instructions, documentation and so forth.
- Depending on their function or task, they are categorized into the following.
Database Software
Allows you to store information on a computer, retrieve it when you need it and update it when necessary. You can do this with index cards, but database software makes the job far faster and easier. For example, a personnel database might record the following for each employee:
Name, Sex, Marital status, Salary, Date of birth, Date of employment, Post, Department, Level of education, Field of study, etc.
Then you can ask the computer questions such as:
- How many female workers are there?
- List employees with a salary of Birr 500 and above.
- List those employees who are department heads and have a Bachelor's degree or higher.
and so on.
Example: dBASE IV, FoxPro, Microsoft Access.
This leaves aside the theoretical work in CS, which does not make use of real computers but of formal models of computers. A lot of work in CS is done with pen and paper! Actually, the early work in CS took place before the development of the first computer.
Computer Science is no more about computers than astronomy is about telescopes,
biology is about microscopes, or chemistry is about test tubes. Science is not about
tools. It is about how we use them, and what we find out we can do.
Computer Science is the study of how to write computer programs
(programming) (??)
Programming is a big part of CS…but it is not the most important part.
Computer Science is the study of the uses and applications of computers
and software (??)
Learning to use software packages is no more a part of CS than driver’s education
is part of automotive engineering.
CS is responsible for building and designing software.
The study of algorithms:
a. Their formal and mathematical properties:
- correctness, limits
- efficiency/cost
b. Their hardware realizations:
- computer design
c. Their linguistic realizations
1. Helping People
2. Solving Problems
– Problem: A perceived difference between an existing condition and a
desired condition.
– Problem Solving: The process of recognizing a problem, identifying
alternatives for solving it, and successfully implementing the chosen
solution.
3. Improving Our Lives
Knowledge
- Created based on the information/data or one's own expertise
- Answers "How?"
- Is dynamic and context based
- Helps in decision making
Wisdom
- Gives detailed understanding
- Arrives at the judgment
- Helps finalize future decisions/actions
2.2.1 Software
The economies of all developed nations are dependent on software, and more and more systems are software controlled.
Software engineering is concerned with theories, methods and tools for professional
software development. Software engineering expenditure represents a significant
fraction of budget in all developed countries. More and more, individuals and
society rely on advanced software systems. We need to be able to produce reliable
and trustworthy systems economically and quickly.
It is usually cheaper, in the long run, to use software engineering methods and
techniques for software systems rather than just write the programs as if it was a
personal programming project. For most types of system, the majority of costs are
the costs of changing the software after it has gone into use.
All software engineers use tools, and they have done so since the days of the first
assemblers. Some people use stand-alone tools, while others use integrated
collections of tools, called environments. Over time, the number and variety of tools
has grown tremendously. They range from traditional tools like editors, compilers
and debuggers, to tools that aid in requirements gathering, design, building GUIs,
generating queries, defining messages, architecting systems and connecting
components, testing, version control and configuration management, administering
databases, reengineering, reverse engineering, analysis, program visualization, and
metrics gathering, to full-scale, process centered and software engineering
environments that cover the entire lifecycle, or at least significant portions of it.
Indeed, modern software engineering cannot be accomplished without reasonable
tool support.
The role of computers, their power, and their variety, are increasing at a dramatic
pace. Competition is keen throughout the computer industry, and time to market
often determines success. There is, therefore, mounting pressure to produce software
quickly and at reasonable cost. This usually involves some mix of writing new
software and finding, adapting and integrating existing software. Tool and
environment support can have a dramatic effect on how quickly this can be done, on
how much it will cost, and on the quality of the result. They often determine
whether it can be done at all, within realistic economic and other constraints, such as
safety and reliability. Software engineering tools and environments are therefore
becoming increasingly important enablers, as the demands for software, and its
complexity, grow beyond anything that was imagined at the inception of this field
just a few decades ago.
To predict time, effort, and cost!
To improve software quality!
To improve maintainability!
To meet increasing demands!
To lower software costs!
To successfully build large, complex software systems!
To facilitate group effort in developing software!
The characteristics of software can be easily distinguished from those of hardware. For hardware products, the failure rate is initially high but decreases as the faulty components are identified and removed. The system then enters its useful life. After some time (called the product lifetime) the components wear out, and the failure rate increases. The plot of hardware failure rate over time therefore has a characteristic "bathtub" shape.
Software engineering appears to be among the few options available to tackle the
present software crisis.
Let us explain the present software crisis in simple words, by considering the
following.
The expenses that organizations all around the world are incurring on software
purchases compared to those on hardware purchases have been showing a worrying
trend over the years
Organizations are spending larger and larger portions of their budget on software. Not only are software products turning out to be more expensive than hardware, but they also present lots of other problems to the customers: software products are difficult to alter, debug and enhance; use resources non-optimally; often fail to meet user requirements; are far from reliable; frequently crash; and are often delivered late.
This is due to ineffective development of the product, characterized by inefficient resource usage and time and cost overruns. Other factors are larger problem sizes, lack of adequate training in software engineering, an increasing skill shortage, and low productivity improvements.
[Figures: software cost distribution charts. One chart shows, on a 0-100 scale, how costs split between specification, development, and integration and testing in a plan-driven process, and between specification, iterative development, and system testing in an iterative process. A second chart, "Development and evolution costs for long-lifetime systems", compares system development costs with system evolution costs.]
CASE (Computer-Aided Software Engineering) systems are software systems intended to provide automated support for software process activities. CASE systems are often used for method support. They support routine activities in the software process such as editing design diagrams, checking diagram consistency and keeping track of program tests which have been run.
There are two categories:
Upper-CASE: Tools to support the early process activities of requirements and
design such as Use Case,…
Lower-CASE: Tools to support later activities such as programming, debugging and testing, like Java, C++.
For example:
843 = 8 x 10^2 + 4 x 10^1 + 3 x 10^0
    = 8 x 100 + 4 x 10 + 3 x 1
    = 800 + 40 + 3
For whole numbers, the rightmost digit position is the one's position (10^0 = 1). The numeral in that
position indicates how many ones are present in the number. The next position to the left is ten’s, then
hundred’s, thousand’s, and so on. Each digit position has a weight that is ten times the weight of the
position to its right.
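The positional-weight rule above can be sketched in a few lines of Python (the notes themselves don't use Python; the function name here is purely illustrative):

```python
# Expand a decimal numeral into its positional weights, mirroring the
# 843 = 8 x 10^2 + 4 x 10^1 + 3 x 10^0 example above.
def expand_decimal(numeral: str) -> int:
    total = 0
    for position, digit in enumerate(reversed(numeral)):
        total += int(digit) * 10 ** position  # each place weighs ten times the one to its right
    return total

print(expand_decimal("843"))  # 800 + 40 + 3 = 843
```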
In the decimal number system, there are ten possible values that can appear in each digit position, and
so there are ten numerals required to represent the quantity in each digit position. The decimal numerals
are the familiar zero through nine (0, 1, 2, 3, 4, 5, 6, 7, 8, 9).
In a positional notation system, the number base is called the radix. Thus, the base ten system that we
normally use has a radix of 10. The term radix and base can be used interchangeably. When writing
numbers in a radix other than ten, or where the radix isn’t clear from the context, it is customary to
specify the radix using a subscript. Thus, in a case where the radix isn't understood, decimal numbers would be written like this: 843₁₀.
Generally, the radix will be understood from the context and the radix specification is left off.
The binary number system is also a positional notation numbering system, but in this case, the base is
not ten, but is instead two. Each digit position in a binary number represents a power of two. So, when
we write a binary number, each binary digit is multiplied by an appropriate power of 2 based on the
position in the number:
For example:
101101 = 1 x 2^5 + 0 x 2^4 + 1 x 2^3 + 1 x 2^2 + 0 x 2^1 + 1 x 2^0
       = 1 x 32 + 0 x 16 + 1 x 8 + 1 x 4 + 0 x 2 + 1 x 1
       = 32 + 8 + 4 + 1
       = 45
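The same expansion can be checked with a short Python sketch (an illustrative helper, not part of the notes):

```python
# Evaluate a binary numeral by accumulating powers of two, as in the
# expansion of 101101 above.
def binary_to_decimal(bits: str) -> int:
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)  # shift left one binary place, add the new bit
    return value

print(binary_to_decimal("101101"))  # 45
```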
In the binary number system, there are only two possible values that can appear in each digit position
rather than the ten that can appear in a decimal number. Only the numerals 0 and 1 are used in binary
numbers. The term ‘bit’ is a contraction of the words ‘binary’ and ‘digit’, and when talking about binary
numbers the terms bit and digit can be used interchangeably. When talking about binary numbers, it is
often necessary to talk of the number of bits used to store or represent the number. This merely
describes the number of binary digits that would be required to write the number. The number in the
above example is a 6 bit number.
Prepared for AASTU
10110 = 1 x 2^4 + 0 x 2^3 + 1 x 2^2 + 1 x 2^1 + 0 x 2^0
      = 16 + 4 + 2
      = 22

11011 = 1 x 2^4 + 1 x 2^3 + 0 x 2^2 + 1 x 2^1 + 1 x 2^0
      = 16 + 8 + 2 + 1
      = 27
The method for converting a decimal number to binary is one that can be used to convert from decimal
to any number base. It involves using successive division by the radix until the dividend reaches 0. At
each division, the remainder provides a digit of the converted number starting with the least significant
digit.
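The successive-division method just described can be sketched in Python (an illustrative function, not part of the notes; it works for any radix up to 16):

```python
# Convert a decimal value to another base by successive division by the
# radix; the remainders, read from last to first, are the digits.
def decimal_to_base(value: int, radix: int = 2) -> str:
    if value == 0:
        return "0"
    digits = []
    while value > 0:
        value, remainder = divmod(value, radix)
        digits.append("0123456789ABCDEF"[remainder])  # least significant digit first
    return "".join(reversed(digits))

print(decimal_to_base(22))       # 10110
print(decimal_to_base(27))       # 11011
print(decimal_to_base(255, 16))  # FF
```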
Hexadecimal Numbers
In addition to binary, another number base that is commonly used in digital systems is base 16. This
number system is called hexadecimal, and each digit position represents a power of 16. For any number
base greater than ten, a problem occurs because there are more than ten symbols needed to represent
the numerals for that number base. It is customary in these cases to use the ten decimal numerals
followed by the letters of the alphabet beginning with A to provide the needed numerals. Since the
hexadecimal system is base 16, there are sixteen numerals required. The following are the hexadecimal
numerals:
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F
The reason for the common use of hexadecimal numbers is the relationship between the numbers 2 and
16. Sixteen is a power of 2 (16 = 2^4). Because of this relationship, four digits in a binary number can be
represented with a single hexadecimal digit. This makes conversion between binary and hexadecimal
numbers very easy, and hexadecimal can be used to write large binary numbers with much fewer digits.
When working with large digital systems, such as computers, it is common to find binary numbers with
8, 16 and even 32 digits. Writing a 16 or 32 bit binary number would be quite tedious and error prone.
By using hexadecimal, the numbers can be written with fewer digits and much less likelihood of error.
To convert a binary number to hexadecimal, divide it into groups of four digits starting with the rightmost
digit. If the number of digits isn’t a multiple of 4, prefix the number with 0’s so that each group contains
4 digits. For each four digit group, convert the 4 bit binary number into an equivalent hexadecimal digit.
To convert a hexadecimal number to a binary number, convert each hexadecimal digit into a group of 4
binary digits.
3    7    4    F
Convert the hex digits to binary: 0011 0111 0100 1111
So 374F₁₆ = 0011011101001111₂.
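The four-bit grouping procedure can be sketched in Python (illustrative helper functions, not part of the notes):

```python
# Binary <-> hexadecimal conversion by 4-bit groups, as described above.
def binary_to_hex(bits: str) -> str:
    bits = bits.zfill((len(bits) + 3) // 4 * 4)            # left-pad with 0s to a multiple of 4
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join(format(int(g, 2), "X") for g in groups)  # one hex digit per group

def hex_to_binary(hexnum: str) -> str:
    return "".join(format(int(d, 16), "04b") for d in hexnum)  # 4 bits per hex digit

print(binary_to_hex("0011011101001111"))  # 374F
print(hex_to_binary("374F"))              # 0011011101001111
```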
There are several ways in common use to specify that a given number is in hexadecimal representation
rather than some other radix. In cases where the context makes it absolutely clear that numbers are
represented in hexadecimal, no indicator is used. In much written material where the context doesn’t
make it clear what the radix is, the numeric subscript 16 following the hexadecimal number is used. In
most programming languages, this method isn’t really feasible, so there are several conventions used
depending on the language. In the C and C++ languages, hexadecimal constants are represented with a
‘0x’ preceding the number, as in: 0x317F, or 0x1234, or 0xAF. In assembler programming languages that
follow the Intel style, a hexadecimal constant begins with a numeric character (so that the assembler
can distinguish it from a variable name), a leading ‘0’ being used if necessary, with the letter ‘h’ suffixed
onto the number. In Intel style assembler format: 371Fh and 0FABCh are valid hexadecimal constants.
Note that: A37h isn’t a valid hexadecimal constant. It doesn’t begin with a numeric character, and so will
be taken by the assembler as a variable name. In assembler programming languages that follow the
Motorola style, hexadecimal constants begin with a ‘$’ character. So in this case: $371F or $FABC or $01
are valid hexadecimal constants.
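All three notations describe the same values, which can be checked in Python; int() with base 16 happens to accept the C-style '0x' prefix directly, while the Intel and Motorola markers must be stripped first (a small illustrative sketch):

```python
# Parsing hexadecimal constants written in the conventions described above.
print(int("0x317F", 16))      # C/C++ style                      -> 12671
print(int("371Fh"[:-1], 16))  # Intel style: strip trailing 'h'  -> 14111
print(int("$FABC"[1:], 16))   # Motorola style: strip leading '$' -> 64188
```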
Binary Coded Decimal (BCD)
In the BCD representation, each decimal digit is stored as its own four bit binary pattern.
For example: The decimal number 136 would be represented in BCD as follows:
1 -> 0001, 3 -> 0011, 6 -> 0110, so 136 in BCD is 0001 0011 0110.
Conversion of numbers between decimal and BCD is quite simple. To convert from decimal to BCD,
simply write down the four bit binary pattern for each decimal digit. To convert from BCD to decimal,
divide the number into groups of 4 bits and write down the corresponding decimal digit for each 4 bit
group.
There are a couple of variations on the BCD representation, namely packed and unpacked. An unpacked
BCD number has only a single decimal digit stored in each data byte. In this case, the decimal digit will
be in the low four bits and the upper 4 bits of the byte will be 0. In the packed BCD representation, two
decimal digits are placed in each byte. Generally, the high order bits of the data byte contain the more
significant decimal digit.
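The digit-by-digit conversion rule can be sketched in Python (illustrative helpers, not part of the notes; the byte-level packing of digit pairs is left out):

```python
# BCD conversion as described above: one 4-bit group per decimal digit.
def to_bcd(value: int) -> str:
    return " ".join(format(int(d), "04b") for d in str(value))

def from_bcd(bits: str) -> int:
    return int("".join(str(int(group, 2)) for group in bits.split()))

print(to_bcd(136))                 # 0001 0011 0110
print(from_bcd("0001 0011 0110"))  # 136
```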
The use of BCD to represent numbers isn’t as common as binary in most computer systems, as it is not
as space efficient. In packed BCD, only 10 of the 16 possible bit patterns in each 4 bit unit are used. In
unpacked BCD, only 10 of the 256 possible bit patterns in each byte are used. A 16 bit quantity can
represent the range 0-65535 in binary, 0-9999 in packed BCD and only 0-99 in unpacked BCD.
When fixed precision numbers are used, (as they are in virtually all computer calculations) the concept
of overflow must be considered. An overflow occurs when the result of a calculation can’t be represented
with the number of bits available. For example when adding the two eight bit quantities: 150 + 170, the
result is 320. This is outside the range 0-255, and so the result can’t be represented using 8 bits. The
result has overflowed the available range. When overflow occurs, the low order bits of the result will
remain valid, but the high order bits will be lost. This results in a value that is significantly smaller than
the correct result.
When doing fixed precision arithmetic (which all computer arithmetic involves) it is necessary to be
conscious of the possibility of overflow in the calculations.
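The 150 + 170 example above can be reproduced in Python by masking the sum down to 8 bits, as 8-bit hardware effectively does:

```python
# Eight-bit overflow: 320 does not fit in 8 bits, so only the low-order
# 8 bits of the result survive.
result = (150 + 170) & 0xFF  # keep the low 8 bits
print(150 + 170)  # 320 (true sum)
print(result)     # 64  (320 mod 256: the high-order bit is lost)
```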
There are several ways that signed numbers can be represented in binary, but the most common
representation used today is called two’s complement. The term two’s complement is somewhat
ambiguous, in that it is used in two different ways. First, as a representation, two’s complement is a way
of interpreting and assigning meaning to a bit pattern contained in a fixed precision binary quantity.
Second, the term two’s complement is also used to refer to an operation that can be performed on the
bits of a binary quantity. As an operation, the two’s complement of a number is formed by inverting all
of the bits (also called one’s complement) and adding 1. In a binary number being interpreted using the
two’s complement representation, the high order bit of the number indicates the sign. If the sign bit is
0, the number is positive, and if the sign bit is 1, the number is negative. For positive numbers, the rest
of the bits hold the true magnitude of the number. For negative numbers, the lower order bits hold the
complement (or bitwise inverse) of the magnitude of the number. It is important to note that two’s
complement representation can only be applied to fixed precision quantities, that is, quantities where
there are a set number of bits.
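Both senses of the term can be sketched in Python (illustrative helpers, not part of the notes): the *operation* of inverting and adding 1, and the *representation* that interprets a fixed-width bit pattern as signed:

```python
# The two's-complement operation on a fixed-width quantity: invert all
# bits (one's complement), add 1, and discard any carry out of the top bit.
def twos_complement(value: int, bits: int = 8) -> int:
    return (~value + 1) & ((1 << bits) - 1)

# Interpreting a fixed-width bit pattern under the two's-complement
# representation: the high-order bit is the sign.
def as_signed(value: int, bits: int = 8) -> int:
    if value & (1 << (bits - 1)):     # sign bit set -> negative
        return value - (1 << bits)
    return value

print(format(twos_complement(0b00101001), "08b"))  # 11010111
print(as_signed(0b11111111))                       # -1
print(as_signed(0b10000000))                       # -128
```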
Two’s complement representation is used because it reduces the complexity of the hardware in the
arithmetic-logic unit of a computer’s CPU. Using a two’s complement representation, all of the
arithmetic operations can be performed by the same hardware whether the numbers are considered to
be unsigned or signed. The bit operations performed are identical, the difference comes from the
interpretation of the bits. The interpretation of the value will be different depending on whether the
value is considered to be unsigned or signed.
For example: Find the 2's complement of the following 8 bit number
00101001
Invert the bits: 11010110, then add 1: 11010111. So the 2's complement of 00101001 is 11010111.
Another example: Find the 2's complement of the following 8 bit number
10110101
Invert the bits: 01001010, then add 1: 01001011. So the 2's complement of 10110101 is 01001011.
The counting sequence for an eight bit binary value using 2’s complement representation appears as
follows:
01111111 7Fh 127 largest magnitude positive number
01111110 7Eh 126
01111101 7Dh 125
…
00000011 03h 3
00000010 02h 2
00000001 01h 1
00000000 00h 0
11111111 0FFh -1
11111110 0FEh -2
11111101 0FDh -3
…
10000010 82h -126
10000001 81h -127
10000000 80h -128 largest magnitude negative number
Notice in the above sequence that, counting up from 0, when 127 is reached the next binary pattern in the sequence corresponds to -128. The values jump from the greatest positive number to the greatest negative number, but the sequence is as expected after that (i.e. adding 1 to -128 yields -127, and so on). When the count has progressed to 0FFh (the largest unsigned magnitude possible) the count wraps around to 0 (i.e. adding 1 to -1 yields 0).
Binary Coding
Data in the computer are represented by a coding system. These include
• numeric data (digits 0, 1, …, 9)
• alphanumeric data (A … Z, a … z) and
• special characters (such as +, %, #, /, …)
The ASCII standard was later extended to an eight bit code (which allows 256 unique code patterns) and
various additional symbols were added, including characters with diacritical marks (such as accents) used
in European languages which don’t appear in English.
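Character codes in the ASCII scheme just described can be inspected directly in Python, whose ord() and chr() built-ins map between characters and their numeric codes:

```python
# ord() gives the ASCII/Unicode code of a character; chr() goes the other way.
print(ord("A"), format(ord("A"), "08b"))  # 65 01000001
print(ord("a"), format(ord("a"), "08b"))  # 97 01100001
print(chr(66))                            # B
```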
Ex: (EBCDIC coding, in which each byte is divided into a 4-bit zone portion and a 4-bit digit portion)
Character | Zone Digit | Hex. Value
A | 1100 0001 | C1
B | 1100 0010 | C2
a | 1000 0001 | 81
b | 1000 0010 | 82
LOGIC GATES
A digital signal: This is a signal that can only have two finite values, usually at the minimum and
maximum of the power supply. Changes between these two values occur instantaneously. Graphically
this is represented by a graph similar to that shown below.
[Figure: voltage against time, switching instantaneously between 0 V and the maximum supply voltage.]
When an input or output signal is at the minimum power supply voltage (usually 0V) this is referred to
as a LOW signal or LOGIC 0 signal. When an input or output signal is at the maximum power supply
voltage this is referred to as a HIGH signal or LOGIC 1 signal.
Below we discuss the basic building block of all digital systems, the logic gate.
Logic Gates: The term logic gate actually gives a clue as to the function of these devices in an electronic
circuit. ‘Logic’ implies some sort of rational thought process taking place and a ‘gate’ in everyday
language allows something through when it is opened.
A Logic Gate in an electronic sense makes a ‘logical’ decision based upon a set of rules, and if the
appropriate conditions are met then the gate is opened and an output signal is produced.
Logic gates are therefore the decision making units in electronic systems and there are many different
types for different applications. The different type of gates and the rules each one uses to decide an
appropriate output follow.
1) The NOT gate
The NOT gate is unique in that it only has one input. It looks like
The input to the NOT gate is inverted i.e the binary input state of 0 gives an output of 1 and the binary
input state of 1 gives an output of 0.
This output is known as "NOT A" (often written with a bar over the A, or as ¬A), or alternatively as the complement of A.
The truth table for the NOT gate appears as below
A | Output
0 | 1
1 | 0
2 The AND gate
The AND gate has two or more inputs. The output from the AND gate is 1 if and only if all of the inputs
are 1, otherwise the output from the gate is 0. The AND gate is drawn as follows
The output from the AND gate is written as A·B (the dot can be written half way up the line, as here, or on the line; note that some textbooks omit the dot completely).
The truth table for a two-input AND gate looks like
A | B | Output
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1
It is also possible to represent an AND gate with a simple analogue circuit; this is illustrated as an animation.
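The gates in this section can be sketched as Python functions on 0/1 values (an illustrative model, not part of the notes; the names match the gates discussed here and below):

```python
# Two-input logic gates as functions on 0/1 values, matching the truth
# tables in this section.
def NOT(a):     return 1 - a
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def NAND(a, b): return NOT(AND(a, b))
def NOR(a, b):  return NOT(OR(a, b))
def XOR(a, b):  return a ^ b
def XNOR(a, b): return NOT(XOR(a, b))

# Print the two-input AND truth table shown above.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b))
```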
3 The OR Gate
The OR gate has two or more inputs. The output from the OR gate is 1 if any of the inputs is 1. The gate
output is 0 if and only if all inputs are 0. The OR gate is drawn as follows
The truth table for a two-input OR gate looks like
A | B | Output
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 1
4 The NAND gate
The NAND gate has two or more inputs. The output from the NAND gate is 0 if and only if all of the inputs are 1, otherwise the output is 1; this is the NOT of A AND B. The NAND gate is drawn as an AND gate with a small circle immediately to the right of the gate on the output line, known as an invert bubble.
The output from the NAND gate is written as (A·B)′ (the same rules apply regarding the placement and appearance of the dot as for the AND gate - see the section on basic logic gates). The Boolean expression reads as "A NAND B".
The truth table for a two-input NAND gate looks like
A | B | Output
0 | 0 | 1
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0
5 The NOR gate
The NOR gate has two or more inputs. The output from the NOR gate is 1 if and only if all of the
inputs are 0, otherwise the output is 0. This output behaviour is the NOT of A OR B. The NOR gate
is drawn as follows
The output from the NOR gate is written as (A + B)′, which reads "A NOR B".
The truth table for a two-input NOR gate looks like
A | B | Output
0 | 0 | 1
0 | 1 | 0
1 | 0 | 0
1 | 1 | 0
6 The eXclusive-OR (XOR) gate
The exclusive-OR or XOR gate has two or more inputs. For a two-input XOR the output is similar to that
from the OR gate except it is 0 when both inputs are 1. This cannot be extended to XOR gates comprising
3 or more inputs however.
In general, an XOR gate gives an output value of 1 when there is an odd number of 1's on the inputs to the gate. The truth table for a 3-input XOR gate below illustrates this point.
A | B | C | Output
0 | 0 | 0 | 0
0 | 0 | 1 | 1
0 | 1 | 0 | 1
0 | 1 | 1 | 0
1 | 0 | 0 | 1
1 | 0 | 1 | 0
1 | 1 | 0 | 0
1 | 1 | 1 | 1
The XOR gate is drawn as
The output from the XOR gate is written as A ⊕ B, which reads "A XOR B".
The truth table for a two-input XOR gate looks like
A | B | Output
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0
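The odd-number-of-1's rule for multi-input XOR can be checked with a short Python sketch (illustrative, not part of the notes):

```python
# A 3-input XOR outputs 1 exactly when an odd number of its inputs are 1.
def xor3(a, b, c):
    return a ^ b ^ c

# Print the full 3-input truth table.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, xor3(a, b, c))
```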
7 The eXclusive-NOR (XNOR) gate
The XNOR gate is the complement of the XOR gate: for two inputs, the output is 1 when both inputs are equal, and 0 otherwise. The output from the XNOR gate is written as (A ⊕ B)′, which reads "A XNOR B".
The truth table for a two-input XNOR gate looks like
A | B | Output
0 | 0 | 1
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1