
31/01/2010

WHAT IS A COMPUTER?

Let us start this first lecture by asking the question "what is a computer" and answering it.

A computer is a device capable of performing computations and making logical decisions at
speeds much, much faster than human beings can. Even today's personal computers (PCs) can
perform many millions of calculations per second.

Computers process data under the control of sets of instructions called computer programs.
Computer programs can be written for many different purposes, such as the simulation of
scientific problems or forecasting the future of many economic and social activities.
Professional programs are written by people called computer programmers. Specific programs
are written by the people who need them; these people do not have to be computer
programmers or scientists. It is not difficult to learn a computer language and write programs.

Devices such as the keyboard, screen, disks (internal and external), and processing units that
make up a computer are called hardware. The computer programs that run on a computer
are referred to as software. As technology develops, the cost of software increases while the
cost of hardware decreases; at the beginning, the reverse was true.

COMPUTER ORGANISATION

Virtually every computer, regardless of differences in physical shape, can be envisioned as
being divided into six logical units or sections.

1. Input Unit: This is the "receiving" section of the computer. Both programs and the data on
which these programs will act are given to the computer through this section.

2. Output Unit: This is the "shipping" section of the computer. It takes information that has
been processed by the computer and places it on various output devices to make the information
available for use outside the computer.

3. Memory Unit: This is the rapid access "warehouse" section of the computer. It stores
information that has been entered through the input unit so that the information may be
immediately available for processing. Memory can also hold the information that has already
been processed until the information can be placed on output devices by the output unit.
Memory capacity is referred to as RAM (RANDOM ACCESS MEMORY) capacity.

4. Arithmetic and Logic Unit (ALU): This is the "manufacturing" section of the computer. It
is responsible for performing calculations (addition, subtraction, multiplication, and division).
This unit contains the decision mechanism as well: through this mechanism it compares two
items from the memory unit to determine whether or not they are equal and takes action
accordingly.

5. Central Processing Unit (CPU): This is the "administrative" section of the computer. It is
the computer's coordinator and is responsible for supervising the operation of other sections.
The CPU tells the input unit when information should be read into the memory unit, and tells
the ALU when information from the memory unit should be utilized in calculations, and tells
the output unit when to send the information from the memory unit to certain output devices.

6. Secondary Storage Unit: This is the long-term, high-capacity "warehouse" section of the
computer. Programs and/or data not actively being used by the other units are generally stored
on secondary storage devices.
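
To give a concrete (if simplified) picture of how these units cooperate, the short C program
below touches several of them: the keyboard (input unit), main memory (memory unit), the
ALU's comparison mechanism, and the screen (output unit). The program is only an
illustration written for this lecture, not part of any standard.

#include <stdio.h>

int main(void)
{
    int number;                                  /* held in the memory unit (RAM)        */

    printf("Enter a number: ");                  /* output unit: the screen              */
    if (scanf("%d", &number) != 1)               /* input unit: the keyboard             */
        return 1;

    if (number >= 0)                             /* ALU: comparison / decision mechanism */
        printf("%d is not negative\n", number);  /* output unit: the screen              */
    else
        printf("%d is negative\n", number);

    return 0;
}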

EVOLUTION OF OPERATING SYSTEMS

Computer applications today require a single machine to perform many operations and the
applications may compete for the resources of the machine. This demands a high degree of
coordination. This coordination is handled by system software known as the operating system
(OS).

OS for Batch Jobs

Early computers were capable of performing only one job or task at a time. This form of
computer operation is often called single-user batch processing: the computer runs a single
program at a time while processing data in groups, or batches. In batch processing, jobs are
queued up so that as soon as one completes, the next starts. Users often took their jobs
to the computer operators and came back later to wait for their jobs to be executed;
sometimes the waiting time would be as long as a day. Users had no interaction with the
computer during program execution. This may be acceptable for some applications, but not for all.

As computers became more powerful, it became evident that single-user batch processing
rarely utilized the computer's resources efficiently. It was then realized that many jobs or tasks
could be made to share the resources of the computer to achieve better utilization of those
resources. This is called multiprogramming. Multiprogramming involves the simultaneous
operation of many jobs on the same computer.

Time-sharing is the sharing of a computing resource among many users by means of
multiprogramming and multi-tasking. Time-sharing technology allows a large number of
users to interact concurrently with a single computer. Time-sharing dramatically lowered the
cost of providing computing capability, made it possible for individuals and organizations to
use a computer without owning one, and promoted the interactive use of computers and the
development of new interactive applications.

A personal computer (PC) is any general-purpose computer whose size and capabilities
make it useful for individuals, and which is intended to be operated directly by an end user,
with no intervening computer operator.

A personal computer may be a desktop computer, a laptop, a tablet PC, or a handheld PC (also
called a palmtop). The most common microprocessors in personal computers are x86-
compatible CPUs. Software applications for personal computers include word processing,
spreadsheets, databases, Web browsers and e-mail clients, games, and myriad personal
productivity and special-purpose software. Modern personal computers often have high-speed
or dial-up connections to the Internet, allowing access to the World Wide Web and a wide
range of other resources. A PC may be used at home or may be found in an office. Personal
computers can be connected to a local area network (LAN) either by a cable or wirelessly.

Distributed computing is a field of computer science that studies distributed systems. A
distributed system consists of multiple autonomous computers that communicate through a
computer network. The computers interact with each other in order to achieve a common
goal. A computer program that runs in a distributed system is called a distributed program,
and distributed programming is the process of writing such programs.

Distributed computing also refers to the use of distributed systems to solve computational
problems. In distributed computing, a problem is divided into many tasks, each of which is
solved by one computer.
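
As a minimal sketch of this idea, the C program below divides the work of summing an array
into three tasks. Here everything runs on a single machine for simplicity; in a real distributed
program each task would be sent to a different computer over the network and the partial
results would be collected back. The sizes and values are made up for the example.

#include <stdio.h>

#define N     12
#define TASKS  3                          /* pretend three computers are available */

int main(void)
{
    int  data[N] = {1,2,3,4,5,6,7,8,9,10,11,12};
    int  chunk   = N / TASKS;             /* each task handles one chunk of the data  */
    long total   = 0;

    for (int t = 0; t < TASKS; t++) {     /* in a distributed system, each iteration  */
        long partial = 0;                 /* would run on a separate computer         */
        for (int i = t * chunk; i < (t + 1) * chunk; i++)
            partial += data[i];
        total += partial;                 /* collect the partial results              */
    }

    printf("total = %ld\n", total);       /* prints 78 */
    return 0;
}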

Client-server computing or networking is a distributed application architecture that partitions
tasks or workloads between service providers (servers) and service requesters, called clients.[1]
Often clients and servers operate over a computer network on separate hardware. A server
machine is a high-performance host that runs one or more server programs which share
its resources with clients. A client does not share any of its resources, but requests a server's
content or service function. Clients therefore initiate communication sessions with servers,
which await (listen for) incoming requests.
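
The sketch below shows the client side of this arrangement in C, using the standard POSIX
socket calls (socket, connect, send, recv). The address 127.0.0.1 and port 8080 are placeholders
chosen for the example; a real client would use the address of an actual server that is listening
for requests.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);          /* create a TCP socket */
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port   = htons(8080);                      /* server's port (placeholder)    */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);    /* server's address (placeholder) */

    /* The client initiates the session; the server is waiting (listening) for it. */
    if (connect(sock, (struct sockaddr *)&server, sizeof server) < 0) {
        perror("connect");
        return 1;
    }

    const char *request = "hello server\n";
    send(sock, request, strlen(request), 0);              /* send a request     */

    char reply[256];
    ssize_t n = recv(sock, reply, sizeof reply - 1, 0);   /* wait for the reply */
    if (n > 0) {
        reply[n] = '\0';
        printf("server replied: %s", reply);
    }

    close(sock);
    return 0;
}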

Machine Languages, Assembly Languages, and High-level Languages

Programmers write instructions in various programming languages, some directly
understandable by the computer, and others that require intermediate translation steps. Many
different computer languages exist and are used today. These languages are divided into three
general types:

1. Machine Languages (Machine Codes)

2. Assembly Languages

3. High-level Languages

Machine code or machine language is a system of instructions and data executed directly by
a computer's central processing unit. Machine code may be regarded as a primitive (and
cumbersome) programming language or as the lowest-level representation of a compiled
and/or assembled computer program. Every processor or processor family has its own
machine code instruction set.

Typical instruction set:

a) Arithmetic such as add and subtract

b) Logic instructions such as and, or, and not

c) Data instructions such as move, input, output, load, and store

d) Control flow instructions such as goto, if ... goto, call, and return.

Instructions are patterns of bits that by physical design correspond to different commands to
the machine.
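
For example, on an x86 processor the instruction MOV AL, 61h (used again later in this
lecture) is stored as the two bytes B0 and 61 in hexadecimal, that is, the bit patterns
10110000 01100001. The small C program below simply stores and prints those two bytes; it
does not execute them, it only shows that an instruction is nothing more than a pattern of bits
in memory.

#include <stdio.h>

int main(void)
{
    /* The x86 encoding of "MOV AL, 61h": opcode byte B0, then the constant 61. */
    unsigned char instruction[] = { 0xB0, 0x61 };   /* bits: 10110000 01100001 */

    for (int i = 0; i < 2; i++)
        printf("%02X ", instruction[i]);            /* prints: B0 61 */
    printf("\n");

    return 0;
}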

A bit is the basic unit of information in computing and telecommunications; it is the amount
of information that can be stored by a device or other physical system that can normally exist
in only two distinct states. These may be the two stable positions of an electrical switch, two
distinct voltage or current levels allowed by a circuit, two distinct levels of light intensity, two
directions of magnetization or polarization, etc.

In computing, a bit is defined as a variable or computed quantity that can have only two
possible values. These two values are often interpreted as binary digits and are usually
denoted by the Arabic numerical digits 0 and 1. Indeed, the term "bit" is a contraction of
binary digit. The two values can also be interpreted as logical values (true/false, yes/no),
algebraic signs (+/−), activation states (on/off), or any other two-valued attribute. In several
popular programming languages, numeric 0 is equivalent (or convertible) to logical false, and
1 to true. The correspondence between these values and the physical states of the underlying
storage or device is a matter of convention, and different assignments may be used even
within the same device or program.
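
A small C fragment makes the last point concrete: in C, the value 0 behaves as logical false
and any non-zero value (conventionally 1) behaves as true.

#include <stdio.h>

int main(void)
{
    int flag = 0;                         /* numeric 0 acts as logical false */
    if (flag)
        printf("flag is true\n");
    else
        printf("flag is false\n");        /* this branch runs */

    flag = 1;                             /* numeric 1 acts as logical true  */
    if (flag)
        printf("flag is true\n");         /* this branch runs */

    return 0;
}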

In information theory, one bit is typically defined as the uncertainty of a binary random
variable that is 0 or 1 with equal probability, or the information that is gained when the value
of such a variable becomes known.

In quantum computing, a quantum bit or qubit is a quantum system that can exist in
superposition of two bit values, "true" and "false".

The symbol for bit, as a unit of information, is "bit" or (lowercase) "b"; the latter is
recommended by the IEEE 1541 standard.

The instruction set is thus specific to a class of processors using (much) the same architecture.
Successor or derivative processor designs often include all the instructions of a predecessor
and may add additional instructions. Occasionally a successor design will discontinue or alter
the meaning of some instruction code (typically because it is needed for new purposes),
affecting code compatibility to some extent; even nearly completely compatible processors
may show slightly different behavior for some instructions but this is seldom a problem.
Systems may also differ in other details, such as memory arrangement, operating systems, or
peripheral devices; because a program normally relies on such factors, different systems will
typically not run the same machine code, even when the same type of processor is used.

Assembly languages

Assembly languages are a family of low-level languages for programming computers,
microprocessors, microcontrollers, and other (usually) integrated circuits. They implement a
symbolic representation of the numeric machine codes and other constants needed to program
a particular CPU architecture. This representation is usually defined by the hardware
manufacturer, and is based on abbreviations (called mnemonics) that help the programmer
remember individual instructions, registers, etc. An assembly language is thus specific to a
certain physical or virtual computer architecture (as opposed to most high-level languages,
which are usually portable).

A utility program called an assembler is used to translate assembly language statements into
the target computer's machine code. The assembler performs a more or less isomorphic
translation (a one-to-one mapping) from mnemonic statements into machine instructions and
data. This is in contrast with high-level languages, in which a single statement generally
results in many machine instructions.
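
As a rough illustration (the exact instructions depend on the processor and the compiler, so
the comments below are only a sketch), a single C statement such as x = a + b; typically
expands into several machine instructions:

#include <stdio.h>

int main(void)
{
    int a = 5, b = 7, x;

    x = a + b;    /* roughly:  load a into a register
                               load b into another register
                               add the two registers
                               store the result into x     */

    printf("%d\n", x);   /* prints 12 */
    return 0;
}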

Many sophisticated assemblers offer additional mechanisms to facilitate program
development, control the assembly process, and aid debugging. In particular, most modern
assemblers include a macro facility and are called macro assemblers.

A program written in assembly language consists of a series of instructions (mnemonics) that
correspond to a stream of executable instructions which, when translated by an assembler, can
be loaded into memory and executed. The following assembly language instruction copies the
hexadecimal value 61 into the processor register AL:

MOV AL, 61h

I will briefly discuss binary and hexadecimal representations. The programs that convert these
English-like instructions into their binary representation, so that the machine can understand
them, are called assemblers.
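
As a quick illustration of these representations (using only standard C), the value written 61h
in assembly is 6 × 16 + 1 = 97 in decimal and 0110 0001 in binary:

#include <stdio.h>

int main(void)
{
    int value = 0x61;                  /* hexadecimal 61 = 6*16 + 1 = 97 decimal */

    printf("decimal: %d\n", value);    /* prints 97 */
    printf("hex:     %X\n", value);    /* prints 61 */
    /* binary: 0110 0001 (the same value written bit by bit) */

    return 0;
}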

add al,[170]

This instruction means: take the value at the memory location numbered 170 and add it to the
value stored in the AL register. Assembly language is not a user-friendly language, and it is
cumbersome to write programs in it. Programs written in assembly languages are not portable.

High Level Languages

A high-level programming language is a programming language with strong abstraction
from the details of the computer. In comparison to low-level programming languages, it may
use natural language elements, be easier to use, or be more portable across platforms. Such
languages hide the details of CPU operations such as memory access models and management
of scope.

This greater abstraction and hiding of details is generally intended to make the language user-
friendly, as it includes concepts from the problem domain instead of those of the machine
used. A high-level language isolates the execution semantics of a computer architecture from
the specification of the program, making the process of developing a program simpler and
more understandable with respect to a low-level language. The amount of abstraction
provided defines how "high-level" a programming language is.

Programming languages such as C, FORTRAN, Pascal, BASIC, and Java enable a
programmer to write programs that are more or less independent of a particular type of
computer. Such languages are considered high-level languages because they are closer to
human languages and further from machine languages. The main advantage of high-level
languages over low-level languages is that they are easier to read, write, and maintain.
Ultimately, programs written in a high-level language must be translated into machine
language by a compiler or interpreter.

The first high-level programming languages were designed in the 1950s. Now there are
dozens of different languages, including Ada, Algol, BASIC, COBOL, C, C++, FORTRAN,
LISP, Pascal, and Prolog.

In this course we will extensively study the C language, a high-level programming language.
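
As a first taste of what a C program looks like (the traditional first example, with nothing
course-specific assumed), the program below prints a single line of text; a compiler translates
it into the machine language of the computer it will run on.

#include <stdio.h>

int main(void)
{
    printf("Hello, world!\n");   /* the compiler turns this into machine instructions */
    return 0;
}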
