WHAT IS SERIAL COMPUTING?

 Traditionally, software has been written for serial computation:
 To be run on a single computer having a single Central Processing Unit (CPU);
 A problem is broken into a discrete series of instructions.
 Instructions are executed one after another.
 Only one instruction may execute at any moment in time.
SERIAL COMPUTING
WHAT IS PARALLEL COMPUTING?

 Parallel computing is defined as the simultaneous use of more than one processor to execute a program (a large task is divided into smaller tasks).
 To be run using multiple CPUs.
 A problem is broken into discrete parts that can be solved concurrently.
 Each part is further broken down into a series of instructions.
 Instructions from each part execute simultaneously on different CPUs.
 Several operations can be performed simultaneously, so the total computation time is reduced.
 For example, with three processors the parallel version has the potential of being 3 times as fast as the sequential machine.
RESOURCES
 The compute resources can include:
 A single computer with multiple processors;
 An arbitrary number of computers connected by a network;
 A combination of both.
 The computational problems are broken apart into discrete pieces of work that can be solved simultaneously.
 It is often difficult to divide a program in such a way that separate CPUs can execute different portions without interfering with each other.
 Dependencies are important to parallel programming because they are one of the primary inhibitors to parallelism, e.g. a write to x followed by a read of x.
 for (i = 0; i < 500; i++)   /* loop-carried dependency */
     a[i] = a[i-1] + 1;
 Load balancing: the work must be distributed evenly among the processors.
 The program has to have instructions to guide it to run in parallel. Since the work is shared or distributed amongst "different" processors, data has to be exchanged now and then.
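The loop dependency above can be sketched briefly (a minimal illustration; the function names are invented for this example):

```python
# Sketch of a loop-carried dependency. Each iteration reads a[i-1],
# which the previous iteration wrote, so the iterations cannot run
# in parallel.
def dependent(n):
    a = [0] * n
    for i in range(1, n):
        a[i] = a[i - 1] + 1   # depends on the previous iteration
    return a

# By contrast, an independent loop: every iteration touches only its
# own element, so the iterations could be distributed across processors.
def independent(n):
    b = [0] * n
    for i in range(n):
        b[i] = i * 2          # no cross-iteration dependency
    return b

print(dependent(5))    # [0, 1, 2, 3, 4]
print(independent(5))  # [0, 2, 4, 6, 8]
```

Only the second loop is a candidate for parallel execution; the first must run serially no matter how many processors are available.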
ARCHITECTURE FOR PARALLEL COMPUTING

 SISD: a single processing unit receives a single stream of instructions that operate on a single stream of data.
 MISD: the same input is subjected to several different operations.
 SIMD: all processors execute the same instruction, each on different data.
 MIMD: all processors execute different instructions, each on different data.
TWO FORMS OF INTERPROCESSOR COMMUNICATION

 With N processors, each having its own individual data stream (i.e. SIMD and MIMD), it is usually necessary to communicate data / results between processors.
 Two types:
 Using a shared-memory parallel computer
 Using a distributed-memory parallel computer (requires distributed-memory software)
SHARED MEMORY

 Multiple processors are connected to multiple memory modules such that each memory location has a single address space throughout the system.
 This solves the interprocessor communication problem but introduces the problem of simultaneous accessing of the same location in the memory.
 Example: x is a shared variable updated by processors P1 and P2.
 1. First P1 reads, then P2.
 2. P2 reads, then P1.
 3. Both read simultaneously but do not execute simultaneously: the value that remains in memory is the one written last.
 4. If both execute simultaneously, the problem of non-determinacy arises, caused by a race condition (two statements in concurrent tasks access the same memory location).
 5. Solved by synchronising the use of shared data (i.e. x = x + 1 and x = x + 2 must not be executed at the same time).
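The synchronisation idea in point 5 can be sketched with a lock (a minimal sketch; the variable names are illustrative, and ordinary OS threads stand in for separate processors):

```python
import threading

# x is the shared variable from the slide; the lock serialises
# the read-modify-write so the two updates cannot interleave.
x = 0
lock = threading.Lock()

def add(amount, times):
    global x
    for _ in range(times):
        with lock:          # only one thread may update x at a time
            x = x + amount  # the critical section from the slide

t1 = threading.Thread(target=add, args=(1, 10000))  # repeatedly x = x + 1
t2 = threading.Thread(target=add, args=(2, 10000))  # repeatedly x = x + 2
t1.start(); t2.start()
t1.join(); t2.join()
print(x)  # 30000 with the lock; without it the result may vary
```

With the lock, the final value is deterministic; removing it reintroduces the race condition of point 4.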
DISTRIBUTED MEMORY

• Independent computers are connected via an interconnection network.

• Each computer has its own memory address space. A processor can only access its own local memory.

• To access a value residing in a different computer, a message is sent to the desired processor via MPI (Message Passing Interface).

• P1: receive (x, P2)    P2: send (x, P1)

• The value of x is explicitly passed from P2 to P1. This is known as message passing.
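The send/receive pattern above can be imitated on one machine with operating-system processes, each having its own address space. This is only an analogy: real MPI code would use an MPI library and launcher, while here `multiprocessing.Pipe` stands in for the interconnection network, and the names `p2` and `exchange` are invented for this sketch.

```python
from multiprocessing import Process, Pipe

def p2(conn):
    x = 42            # value living in P2's local memory
    conn.send(x)      # send (x, P1): explicitly pass x over the network
    conn.close()

def exchange():
    parent_conn, child_conn = Pipe()   # stands in for the interconnect
    proc = Process(target=p2, args=(child_conn,))
    proc.start()
    x = parent_conn.recv()  # receive (x, P2): P1 cannot read P2's memory,
    proc.join()             # it must be sent a message
    return x

if __name__ == "__main__":
    print(exchange())  # 42
```

The key point matches the slide: P1 never touches P2's memory directly; the value arrives only because P2 explicitly sends it.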
ADVANTAGES OF PARALLEL COMPUTING
 Solves problems that require more memory space than a single CPU can provide (the combined machines offer a very large memory).
 Much faster than the fastest serial computer, whose speed depends on increasing the number of transistors on a single chip.
 Much cheaper than the fastest serial computer.
WHAT IS THE DIFFERENCE BETWEEN THREAD & PROCESS?
 The ability of a program to do multiple things simultaneously is implemented through threads (a thread is the basic unit of execution).
 A thread is scheduled by the operating system and executed by the CPU.
 A thread is a portion of a program that the operating system tells the CPU to run: a stream of instructions.
 A thread can be defined as a semi-process with a definite starting point, an execution sequence and a terminating point.
 A process has its own memory area and data, but a thread shares memory and data with the other threads within the program's memory.
 A process/program, therefore, consists of many such threads, each running at the same time within the program and performing a unique task.
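The sharing described above can be shown in a short sketch (illustrative names; a lock guards the shared list while several threads of one process write to it):

```python
import threading

# `shared` lives in the process's memory and is visible to every thread;
# a separate process would have its own copy instead.
shared = []
lock = threading.Lock()

def worker(tag):
    with lock:
        shared.append(tag)   # each thread writes into the same list

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))  # [0, 1, 2, 3]: all four threads saw the same list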
MULTITHREADING
 Multithreading is a program's ability to break itself down into multiple concurrent threads that can be executed separately by the computer.
 Software architects began writing operating systems that supported running pieces of programs, called threads.
 Threads are organized into processes, which are composed of one or more threads.
 Multithreading operating systems made it possible for one thread to run while another was waiting for something to happen.
 Rather than being developed as one long sequence of instructions, programs are broken into logical operating sections.
CONTINUE……………………………
 If the application performs operations that run independently of each other, those operations can be broken up into threads whose execution is scheduled and controlled by the operating system.
 On single-processor systems, these threads are executed sequentially, not concurrently.
 But they give you the illusion that the threads are being executed simultaneously, through the timeslicing of multitasking.
 Large programs that use multithreading often run many more than two threads.
TYPES OF MULTITHREADING

 Functionally decomposed multithreading: the processor switches back and forth between the two threads quickly enough that both appear to occur simultaneously. The program is threaded along lines of functionality (applications).
 Data-decomposed multithreading: multithreaded programs can also be written to execute the same task on parallel threads. Here the threads differ only in the data they process. The program is threaded for throughput performance.
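Data decomposition can be sketched as follows (illustrative names; the same task, summing, runs on two threads that differ only in which slice of the data they receive):

```python
import threading

# Data-decomposed multithreading: identical task, different data.
data = list(range(100))
results = [0, 0]          # one slot per thread

def partial_sum(idx, chunk):
    results[idx] = sum(chunk)     # same code in every thread

half = len(data) // 2
t1 = threading.Thread(target=partial_sum, args=(0, data[:half]))
t2 = threading.Thread(target=partial_sum, args=(1, data[half:]))
t1.start(); t2.start()
t1.join(); t2.join()
total = results[0] + results[1]
print(total)  # 4950, the same as sum(data)
```

A functionally decomposed version would instead give each thread a different job (e.g. one reads input while another computes).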
ADVANTAGES OF MULTITHREADING
 Improved performance and concurrency.
 Multithreading allows you to achieve multitasking in a program. Multitasking is the ability to execute more than one task at the same time.
 Minimized system resource usage.
 Simultaneous access to multiple applications.
 Program structure simplification.
 Better responsiveness to the user: operations that can take a long time can be put in a separate thread.
DISADVANTAGES OF MULTITHREADING
 Overhead for the processor: each time the CPU finishes with a thread, it must write to memory the point it has reached (a stack for every thread, in which the thread's state is stored), because the next time the processor starts that thread it must know where it finished and where to start from.
 The code becomes more complex: using threads makes the code difficult to read and debug.
 Sharing resources among the threads can lead to deadlocks (P1 holds R1 and P2 holds R2, while P1 needs R2 and P2 needs R1 to complete its task, i.e. deadlock).
 Difficulty of writing code.
 Difficulty of debugging.
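The deadlock scenario above can be avoided by always acquiring the resources in the same order; a minimal sketch (invented names) of that lock-ordering rule:

```python
import threading

# r1 and r2 stand for the two resources from the slide. Both tasks
# acquire them in the same fixed order (r1 before r2), so the circular
# wait that causes deadlock can never form.
r1 = threading.Lock()
r2 = threading.Lock()
log = []

def p1():
    with r1:
        with r2:
            log.append("p1 done")

def p2():
    with r1:      # taking r2 first here could deadlock against p1
        with r2:
            log.append("p2 done")

t1 = threading.Thread(target=p1)
t2 = threading.Thread(target=p2)
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(log))  # ['p1 done', 'p2 done']
```

If `p2` instead acquired r2 first and r1 second, each thread could end up holding the lock the other needs, exactly the deadlock the slide describes.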
HYPER-THREADING TECHNOLOGY
 Hyper-Threading Technology (an Intel technology used in the Pentium 4 processor family) boosts performance.
 It allows multiple threads of software applications to run on a single processor at one time, sharing the same core processor resources.
 Hyper-Threading Technology is a form of simultaneous multithreading (SMT): multiple threads execute on a single processor without switching.
 A processor with Hyper-Threading Technology consists of two logical processors, each of which has its own copy of the processor architectural state.
 The logical processors share a single set of physical execution resources.
 Each logical processor can respond to interrupts independently.
 The first logical processor can track one software thread while the second logical processor tracks another software thread simultaneously.
 Because the two threads share one set of execution resources, the second thread can use resources that would be idle if only one thread were executing.
RESOURCE UTILISATION

• When we put a regular processor under 100% load, we never fully utilize 100% of its execution units.

• With a Hyper-Threading-enabled processor, those spare execution units can be used for computing other things.

• In a superscalar processor, half the processor remains unused.

• In the multiprocessing portion of the demonstration, we see a dual-CPU system working on two separate threads.

• In the last, Hyper-Threading-enabled processor, both threads are computed simultaneously, and the CPU's efficiency increases from around 50% to over 90%.

• Dual Hyper-Threading-enabled processors can work on four independent threads at the same time.
ROLE OF OPERATING SYSTEM IN HYPERTHREADING
 Operating systems (including Microsoft Windows and Linux*) divide their workload up into processes and threads that can be independently scheduled and dispatched to run on a processor.
 The operating system also plays a key role in how well Hyper-Threading works: it schedules the logical processors as if they were in a multiprocessing system.
 The OS assigns operations to the independent logical processors, and if it determines that one of the logical CPUs is to remain idle, it issues a HALT command to the free logical processor, thus devoting all of the system resources to the working logical processor.
 An operating system allows you to use your computer without any knowledge of coding; without one, your hardware would not work at all until you wrote your own code for it.
ADVANTAGES OF HYPER-THREADING
 Hyper-Threading has the potential to significantly boost system performance under certain circumstances.
 Improved reaction and response time.
 Allows multiple threads to run simultaneously.
 No performance loss if only one thread is active; increased performance with multiple threads.

DISADVANTAGES
 Increases the complexity of the application.
 Sharing of resources, such as global data, can introduce common parallel programming errors such as storage conflicts and other race conditions. Debugging such problems is difficult, as they are non-deterministic.
 To take advantage of hyper-threading performance, serial execution cannot be used.
 Threads are non-deterministic and involve extra design.
 Threads have increased overhead.
