Unit 1: Algorithmic Complexity


1.1: Definition. Time Cost/Complexity

 ALGORITHM: a structured sequence of instructions to do a certain job.
 A precise statement of how to solve a problem on a computer.
 Our goal: the time complexity of a given algorithm.
Informally: how long does it take to run on a computer?
 It depends on the input data:
 How many values?
 How big are they?


Analysis of algorithms

 We are interested in measuring the efficiency of a given algorithm.
 We take efficiency as a measure of the time (how long?) and/or space (how many bytes?) required by the algorithm to find a solution to a given problem. We are concerned with the former.
 Such efficiency orders the set of algorithms that solve a given problem. We want the most efficient one, i.e., the one with the least cost, as in the sketch below.
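A minimal sketch of that ordering (in Python; the function names are ours, not from the course): two algorithms solving the same problem, summing the integers 1..n, at clearly different costs.

```python
# Two algorithms for the same problem: summing the integers 1..n.
# sum_loop performs n additions (linear cost); sum_formula performs a
# fixed number of operations (constant cost). Names are illustrative.

def sum_loop(n: int) -> int:
    total = 0
    for i in range(1, n + 1):   # one addition per iteration: n steps
        total += i
    return total

def sum_formula(n: int) -> int:
    return n * (n + 1) // 2     # Gauss's formula: a constant number of steps

assert sum_loop(1000) == sum_formula(1000)  # same problem, same answer
```

Both solve the problem; the second is preferable because its cost does not grow with n.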
Time efficiency

 A quantitative value (usually in time units, whether or not they are real ones) that counts the comparisons, basic operations, function calls, … making up the time requirements of an algorithm.
 We are concerned with “a priori” efficiency (clocks measure the true one), i.e., an estimation of the time complexity, as in the sketch below.
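As a hedged illustration of such an “a priori” estimation (the function and the choice of the comparison as basic operation are our assumptions): count the comparisons performed while finding the maximum of a list, instead of reading a clock.

```python
# "A priori" efficiency: count basic operations instead of timing the run.
# Here the counted basic operation is the comparison inside the loop.

def max_with_count(values):
    """Return (maximum, number of comparisons performed)."""
    comparisons = 0
    best = values[0]
    for v in values[1:]:
        comparisons += 1        # one comparison per remaining element
        if v > best:
            best = v
    return best, comparisons

m, c = max_with_count([3, 1, 4, 1, 5, 9, 2, 6])
print(m, c)  # 9 7 -> exactly n - 1 comparisons for n elements
```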
COMPLEXITY ORDER ~ Behaviour of an Algorithm

 An algorithm’s (time) behaviour is defined by its time complexity / efficiency; just “efficiency” from now on (regardless of constants).
 Invariance Principle for algorithms:
“Given an algorithm and a couple of implementations of it, I1 and I2, which take T1(n) and T2(n) seconds to run, there exists a constant c > 0 and a natural n’ such that for all n > n’, T1(n) < c·T2(n) holds.”
Size of a problem

 The variable, parameter or function over which the time complexity of an algorithm has to be computed.
 It is usually related to the number or size of the INPUT data:
 Number of elements to be sorted
 Number of rows, columns or total elements in matrices
 The biggest value to be computed
 …
 It always has to be the same for the algorithms we want to analyse and therefore compare.
Complexity Function I

 It is a mathematical function.
 A function of the size of the problem which measures the time complexity of the algorithm.
 We are concerned with the behaviour in the limit, i.e. when the size of the problem grows.
 This is called asymptotic efficiency.
Complexity Function II

 It can depend on the state of the INPUT data:
 “Finding a value in an ordered vs. a non-ordered set”
 There are three functions (see the sketch below):
 Best case fb(n) (fastest)
 Worst case fw(n) (slowest). The default one.
 Average case fa(n): probabilistic issues have to be taken into account to compute it properly. Too expensive to compute, so in practice fa(n) ~ fw(n).
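A minimal linear-search sketch of the three cases (illustrative code, not from the course): the best case finds the value in the first position, the worst case scans the whole input.

```python
# Linear search: the classic case where the state of the INPUT data matters.
# Best case fb(n): target in the first position -> 1 comparison.
# Worst case fw(n): target last or absent       -> n comparisons.

def linear_search(values, target):
    for i, v in enumerate(values):
        if v == target:
            return i        # best case: i == 0 after a single comparison
    return -1               # worst case: every element was compared

data = [7, 2, 9, 4]
print(linear_search(data, 7))   # 0  -> best case
print(linear_search(data, 5))   # -1 -> worst case, all 4 elements scanned
```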
Complexity Function => BEHAVIOUR

 We are just interested in studying the behaviour of an algorithm.
 Some usual complexity functions (complexity orders) include:
 1, log n, n, n·log n, n², n³, 2ⁿ, 3ⁿ, ...
 We then refer to them as behaviours, ordered as follows (tabulated numerically in the sketch below):
 Ord(1) < Ord(log n) < Ord(n) < Ord(n·log n) <
< Ord(n²) < Ord(n³) < Ord(2ⁿ) < Ord(n·2ⁿ) < Ord(3ⁿ)
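A quick way to see why the ordering holds is to tabulate the functions for a few sizes; the following sketch (purely illustrative) does just that.

```python
# Tabulate the usual complexity functions for a few sizes to see why the
# ordering above holds as n grows (values rounded for readability).
import math

functions = [
    ("1",       lambda n: 1),
    ("log n",   lambda n: math.log2(n)),
    ("n",       lambda n: n),
    ("n log n", lambda n: n * math.log2(n)),
    ("n^2",     lambda n: n ** 2),
    ("n^3",     lambda n: n ** 3),
    ("2^n",     lambda n: 2 ** n),
]

for name, f in functions:
    print(f"{name:8}", [round(f(n)) for n in (10, 20, 40)])
# 2^40 is about 10^12: the exponential row dwarfs all the polynomial ones.
```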
Tractable Problems

 Edmonds’ law:
 Tractable problem: polynomial complexity
 Non-tractable problem: exponential complexity
 Algorithms whose complexity is greater than O(n·log n) are almost useless => better find a “cheaper” one.
 Exponentials are only useful as theoretical examples.
1.2: Asymptotic Complexity Orders
Behaviours: O, Ω, Θ
 These asymptotic orders summarize the behaviour of a given algorithm.
They describe the time taken by the algorithm when the size of the problem grows unbounded.
 This is the key fact to take into account when we want to compare the efficiency of two algorithms solving the same problem.
 They are valid even for mid-sized instances.
Asymptotic Orders
Big O notation
 Given a function f, we want to refer to those functions that grow at most as fast as f does. This set of functions, upper bounded by f, is written O(f).
 Once this bound is known, we can guarantee that the algorithm always runs within the time given by it.
Formally:
 g(n) ∈ O(f(n)) if ∃ c > 0 and n₀ such that g(n) ≤ c·f(n), ∀ n ≥ n₀
Graphically
Big O notation

[Figure: the curve of g(n) stays below c·f(n) for every n ≥ n₀]
Example
Big O notation

If the graph of g is upper bounded by that of f, then g’s efficiency is better than or equal to f’s.
 Constants must be discarded in behaviour issues.
 O(f(n)): the set of all functions which are upper bounded by f (g belongs to this set).
 Example (see the derivation below):
 P(n) = aₘ·nᵐ + ... + a₁·n + a₀ (a polynomial in “n”)
 P(n) ∈ O(nᵐ)
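A sketch of why this holds, choosing the constants of the definition by hand:

```latex
% For n >= 1 we have n^k <= n^m for every k <= m, hence
\[
  |P(n)| \le |a_m|\,n^m + \dots + |a_1|\,n + |a_0|
         \le \bigl(|a_m| + \dots + |a_1| + |a_0|\bigr)\,n^m ,
\]
% so the definition of O is met with c = \sum_k |a_k| and n_0 = 1.
```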
Asymptotic Orders
Properties of Big O notation
1. g(n) ∈ O(g(n))
2. O(c·g(n)) = O(g(n)) (c is a constant)
 Ex: O(2·n²) = O(n²)
3. O(g(n)+h(n)) = max{O(g(n)), O(h(n))}
 Ex: O(n+n²) = O(n²)
4. O(g(n)-h(n)) = max{O(g(n)), O(h(n))}
 Ex: O(n-n²) = O(n²)
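A minimal sketch of how property 3 is applied to straight-line code (the routine is invented for illustration): an O(n) phase followed by an O(n²) phase costs O(n + n²) = O(n²).

```python
# Sum rule in practice: sequential phases add, and the largest one wins.

def has_duplicate_and_total(values):
    total = 0
    for v in values:                  # phase 1: n iterations    -> O(n)
        total += v
    for i in range(len(values)):      # phase 2: ~n^2 / 2 pairs  -> O(n^2)
        for j in range(i + 1, len(values)):
            if values[i] == values[j]:
                return total, True
    return total, False

print(has_duplicate_and_total([1, 2, 3, 2]))  # (8, True); overall O(n^2)
```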
Big O notation

 The big O notation is used when searching for upper bounds (we should be interested in the smallest one) on the behaviour of a complexity function in the worst case.
 Notice:
 If f(n) ∈ O(n) ⇒ f(n) ∈ O(n²); f(n) ∈ O(n³)…
 Immediate, but the smallest of these bounds is always preferable to the rest.
Asymptotic Orders

Lower Bound: Ω Notation


Exact Bound: Θ Notation
