Name: M. Dawood    Roll No.: CS-1810    Submitted To: Prof. Ihteshaam Dawood
Computers
Computers were originally invented in the 1930s-40s to carry out numerical calculations. They were later gradually developed to process all kinds of data, such as numbers, text, and other types of media. A computer system consists of hardware and software. Hardware is the equipment used to perform the necessary computations. Software consists of programs that control the hardware to process data.
Hardware
Major hardware components of a computer include memory, the processing unit, and input/output devices (monitor, keyboard, mouse, printer, and so on).

Memory is the place where programs and data are stored. It can be imagined as an ordered sequence of storage locations called memory cells. Each cell has a unique address, which is like a serial number of the cell in the memory. The data stored in a memory cell are called the contents of the cell. A program is treated as a special type of data. A memory cell contains a sequence of binary digits, or bits. Each bit is either a 0 or a 1. All kinds of data are eventually represented by sequences of bits. A sequence of 8 bits is usually called a byte, which can represent a character, such as the ones on a keyboard.

A computer has several types of memory. There is a distinction between main memory and secondary memory: the former is faster and smaller, while the latter is cheaper and often removable. At the current time, the former is usually built from silicon chips, while the latter takes the form of hard disks, flash drives, CDs/DVDs, and so on. There are two types of main memory: RAM (random access memory) and ROM (read-only memory). Their major difference is that the contents of RAM can be modified. In the following, "main memory" means RAM.
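As a small illustration of how a byte can represent a character, the following is a minimal Python sketch (any standard Python interpreter will run it):

    # A character such as 'A' is stored as one byte, a sequence of 8 bits.
    ch = 'A'
    code = ord(ch)              # the numeric code of the character: 65
    bits = format(code, '08b')  # the same value as 8 binary digits: '01000001'
    print(ch, code, bits)       # prints: A 65 01000001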
In a computer, most of the operations are performed by a CPU (central processing unit), and some computers have multiple CPUs. At the current time, a CPU fits on a single integrated circuit (IC), or chip. The CPU follows the instructions contained in a program (written in a computer-understandable language). In each step, the CPU fetches (i.e., retrieves) an instruction, interprets its content to decide what to do, and then does it, which may mean moving data from one place to another, or changing data in a certain way. Other common operations include addition, subtraction, multiplication, division, comparison, and so on. A CPU usually executes instructions one after another, but can also jump to another memory cell according to an instruction.
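The fetch-interpret-execute cycle described above can be sketched as a toy simulation in Python. The instruction set here (LOAD, ADD, JUMP, PRINT, HALT) is made up for illustration and does not correspond to any real CPU:

    # memory holds the program; pc is the program counter; acc is an accumulator.
    memory = [
        ('LOAD', 5),   # address 0: put 5 into the accumulator
        ('ADD', 3),    # address 1: add 3 to the accumulator
        ('JUMP', 4),   # address 2: jump to the instruction at address 4
        ('ADD', 100),  # address 3: never executed, because of the jump
        ('PRINT',),    # address 4: display the accumulator
        ('HALT',),     # address 5: stop
    ]
    acc, pc = 0, 0
    while True:
        op, *args = memory[pc]  # fetch the instruction and decode its content
        if op == 'LOAD':
            acc = args[0]
        elif op == 'ADD':
            acc += args[0]
        elif op == 'JUMP':
            pc = args[0]        # jump to another memory cell
            continue
        elif op == 'PRINT':
            print(acc)          # prints 8
        elif op == 'HALT':
            break
        pc += 1                 # otherwise, execute instructions one after another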
A computer uses its input/output (I/O) devices to communicate with human users and other computers. For a human user, the usual input devices are a keyboard and a mouse, and the usual output devices are a monitor (display screen) and a printer. Human-computer interaction (HCI) can happen either in a command-line user interface or in a graphical user interface (GUI).
Software
All software is eventually executed by the CPU in a machine language, in which each line of a program is an instruction to the hardware. Since programs in this language are not easily understandable by a human user, the same program is usually also described in other, more human-readable languages. One type is assembly language, in which the instructions are represented by symbols and numbers. Another type of language is more human-oriented: the "high-level languages", which are closer to mathematical languages and natural languages (such as English), as well as machine-independent.
Typical examples of high-level languages include FORTRAN, ALGOL, COBOL, BASIC, Pascal, LISP,
Prolog, Perl, C, C++, and Java.
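To get a feel for the gap between a high-level language and lower-level instructions, Python's standard dis module can display the bytecode of a small function. Bytecode is an intermediate, assembly-like form rather than true machine language, so this is only an analogy:

    import dis

    def add(a, b):
        return a + b

    # Prints instruction names such as LOAD_FAST
    # (the exact names vary between Python versions).
    dis.dis(add)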
The translation of high-level languages and assembly languages into machine language is
accomplished by special programs: compilers, interpreters, and assemblers.
A compiler translates a source program in a high-level language into an object program in the
machine language. An interpreter interprets and executes a program in a high-level language
line by line. An assembler translates a source program in an assembly language into
an object program in the machine language.
A high-level language usually comes with many ready-made common programs, so the user can include them in programs rather than rewrite them. The program responsible for this is called a "linker": it links the user's object programs with the related "library programs" and produces executable programs.
There are software packages called "integrated development environments" (IDEs) which organize all the related software (e.g., editor, compiler, linker, loader, debugger) together to support the development of software.
When a program in machine language runs, it typically gets some input data from memory, processes them according to a predetermined procedure, then stores some output data into memory and displays some information to the user.
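A minimal sketch of this input-process-output pattern in Python (the task, summing numbers typed by the user, is made up for illustration):

    # Input: read numbers typed by the user.
    line = input('Enter numbers separated by spaces: ')
    numbers = [int(x) for x in line.split()]

    # Process: compute their sum according to a predetermined procedure.
    total = sum(numbers)

    # Output: display some information to the user.
    print('Sum =', total)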
In a computer, one software product occupies a special position: the operating system (OS). Most other software products are application software, which are managed and supported by the OS.
When a computer is turned on, it starts by executing the part of the OS that is stored in ROM, which then loads the rest of the OS from the hard disk and starts it. This process is called "booting".
At the current time, the most widely used operating systems include Unix/Linux, Microsoft Windows, and macOS. It is possible for a computer to have more than one OS stored in its memory, but usually only one can be used at a time.
Application software uses the computer to accomplish a specific task. It is usually purchased on CD or DVD, or downloaded from the Internet, and installed on the computer (so it is stored in memory and known to the OS) before it can be used.
Software development
Software development proceeds through a series of steps, such as those listed below. Very often, steps in the procedure need to be repeated to fix the errors found in the process.
There are many ways to write an algorithm. Some are very informal, some are quite formal and mathematical in nature, and some are quite graphical. The instructions for connecting a DVD player to a television are an algorithm. A mathematical formula such as πr² (the area of a circle of radius r) is a special case of an algorithm. The form is not particularly important as long as it provides a good way to describe and check the logic of the plan.
The development of an algorithm (a plan) is a key step in solving a problem. Once we have an
algorithm, we can translate it into a computer program in some programming language. Our
algorithm development process consists of five major steps.
Step 1: Obtain a description of the problem. This step is much more difficult than it appears.
Step 2: Analyze the problem.
Step 3: Develop a high-level algorithm.
Step 4: Refine the algorithm by adding more detail (a small sketch follows this list).
Step 5: Review the algorithm.
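As one illustration of Steps 3 and 4, consider the made-up task of finding the largest number in a list. The high-level algorithm "scan the list, remembering the largest value seen so far" can be refined into Python code by adding details such as where to start and how to compare:

    def largest(values):
        # Refined detail: decide what to do with empty input.
        assert values, 'the list must not be empty'
        best = values[0]       # refined detail: start from the first element
        for v in values[1:]:
            if v > best:       # refined detail: the comparison made at each step
                best = v
        return best

    print(largest([3, 9, 4, 1]))  # prints 9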
Development of algorithm
Analysis of algorithm
Design of algorithm
An algorithm is a series of instructions, often referred to as a "process," which is to be followed when solving a particular problem. While not restricted to computing by definition, the word is almost invariably associated with computers, since computer-processed algorithms can tackle much larger problems than a human can, and much more quickly. Since modern computing uses algorithms far more frequently than at any other point in human history, a field has grown up around their design, analysis, and refinement. The field of algorithm design requires a strong mathematical background, with computer science degrees being particularly sought-after qualifications. It offers a growing number of highly compensated career options, as the need for more (as well as more sophisticated) algorithms continues to increase.
Conceptual Design
At their simplest level, algorithms are fundamentally just sets of instructions required to
complete a task. The development of algorithms, though they generally weren’t called that, has
been a popular habit and a professional pursuit for all of recorded history. Long before the
dawn of the modern computer age, people established set routines for how they would go
about their daily tasks, often writing down lists of steps to take to accomplish important goals,
reducing the risk of forgetting something important. This, essentially, is what an algorithm is.
Designers take a similar approach to the development of algorithms for computational
purposes: first, they look at a problem. Then, they outline the steps that would be required to
resolve it. Finally, they develop a series of mathematical operations to accomplish those steps.
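For example, take the hypothetical problem "find the average test score". The steps are "add up all the scores, then divide by how many there are", and the mathematical operations are a sum and a division; a Python sketch:

    # Problem: find the average of a list of test scores.
    # Steps: add up all the scores, then divide by how many there are.
    def average(scores):
        return sum(scores) / len(scores)

    print(average([70, 85, 91]))  # prints 82.0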
Testing Algorithms
This section provides an introduction to software testing and the testing of Artificial Intelligence algorithms. It introduces software testing, focusing on a type of testing relevant to algorithms called unit testing; it then provides a specific example of an algorithm with a prepared suite of unit tests, and some rules of thumb for testing algorithms in general.
Software Testing
Software testing, in the field of Software Engineering, is a process in the life-cycle of a software project that verifies that the product or service meets quality expectations and validates that the software meets the requirements specification. Software testing is intended to locate defects in a program, although a given testing method cannot guarantee to locate all defects. As such, it is common for an application to be subjected to a range of testing methodologies throughout the software life-cycle, such as unit testing during development, integration testing once modules and systems are completed, and user acceptance testing to allow the stakeholders to determine whether their needs have been met.
Unit testing is a type of software testing that involves the preparation of well-defined procedural tests of discrete functionality of a program, providing confidence that a module or function behaves as intended. Unit tests are referred to as 'white-box' tests (contrasted with 'black-box' tests) because they are written with full knowledge of the internal structure of the functions and modules under test. Unit tests are typically prepared by the developer who wrote the code under test and are commonly automated, themselves written as small programs that are executed by a unit testing framework (such as JUnit for Java or the Test framework in Ruby). The objective is not to test each path of execution within a unit (called complete-test or complete-code coverage), but instead to focus tests on areas of risk, uncertainty, or criticality. Each test focuses on one aspect of the code (test one thing), and tests are commonly organized into suites by commonality.
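In the same spirit as JUnit for Java, Python's built-in unittest framework can express such tests. A minimal sketch follows; the function under test, is_sorted, is made up for illustration:

    import unittest

    def is_sorted(values):
        # The code under test: True if values are in non-decreasing order.
        return all(a <= b for a, b in zip(values, values[1:]))

    class TestIsSorted(unittest.TestCase):
        # Each test focuses on one aspect of the code (test one thing).
        def test_sorted_list(self):
            self.assertTrue(is_sorted([1, 2, 3]))

        def test_unsorted_list(self):
            self.assertFalse(is_sorted([3, 1, 2]))

    if __name__ == '__main__':
        unittest.main()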
Documentation: The preparation of a suite of tests for a given system provides a type of programming documentation, highlighting the expected behavior of functions and modules and providing examples of how to interact with key components.

Readability: Unit testing encourages a programming style of small modules, clear inputs and outputs, and fewer inter-component dependencies. Code written for ease of testing (testability) may be easier to read and follow.

Regression: Together, the suite of tests can be executed as a regression test of the system. The automation of the tests means that any defects caused by changes to the code can easily be identified. When a defect is found that slipped through, a new test can be written to ensure it will be identified in the future.