Algorithms in Python
Contents

Part I

1 Numbers
    1.1 Integers
    1.2 Floats
    1.3 Complex Numbers
    1.4 The fractions Module
    1.5 The decimal Module
    1.6 Other Representations
    1.7 Additional Exercises

    4.7 Unit Testing

5 Object-Oriented Design
    5.1 Classes and Objects
    5.2 Principles of OOP
    5.3 Python Design Patterns
    5.4 Additional Exercises

Part II

7 Asymptotic Analysis
    7.1 Complexity Classes
    7.2 Recursion
    7.3 Runtime in Functions

8 Sorting
    8.1 Quadratic Sort
    8.2 Linear Sort
    8.3 Loglinear Sort
    8.4 Comparison Between Sorting Methods
    8.5 Additional Exercises

9 Searching
    9.1 Sequential Search
    9.2 Binary Search
    9.3 Additional Exercises

10 Dynamic Programming
    10.1 Memoization
    10.2 Additional Exercises

Part III

11 Introduction to Graphs
    11.1 Basic Definitions
    11.2 The Neighborhood Function
    11.3 Introduction to Trees

12 Binary Trees
    12.1 Basic Concepts
    12.2 Representing Binary Trees
    12.3 Binary Search Trees
    12.4 Self-Balancing BST
    12.5 Additional Exercises
Part I
Chapter 1
Numbers
When you learn a new language, the first thing you usually do (after our
dear hello world) is to play with some arithmetic operations. Numbers can
be integers, floating-point numbers, or complex numbers. They are usually
given in decimal representation, but they can be represented in other bases
such as binary, hexadecimal, or octal. In this section we will learn how
Python deals with numbers.
1.1 Integers
Python represents integers (positive and negative whole numbers) using the
immutable int type. For immutable objects, there is no difference between
a variable and an object reference.
The size of Python's integers is limited only by the machine's memory, not
by a fixed number of bytes (the range depends on the C or Java compiler
that Python was built with). Usually plain integers are at least 32 bits long
(4 bytes). To see how many bits an integer needs to be represented, starting
in Python 3.1, the int.bit_length() method is available:

>>> (999).bit_length()
10
The int(s, base) function converts a string to an integer, optionally
interpreting it in the given base:

>>> s = '11'
>>> d = int(s)
>>> print(d)
11
>>> b = int(s, 2)
>>> print(b)
3
1.2 Floats

Comparing Floats
We should never compare floats for equality, nor rely on exact results when
subtracting them. The reason for this is that floats are represented in binary
fractions, and there are many numbers that are exact in a decimal base but
not exact in a binary base (for example, the decimal 0.1). Equality tests
should instead be done in terms of some predefined precision. For example,
we can use the same approach that Python's unittest module has with
assertAlmostEqual:

>>> def a(x, y, places=7):
...     return round(abs(x - y), places) == 0
The method as_integer_ratio() gives the integer fractional representation of
a float:

>>> 2.75.as_integer_ratio()
(11, 4)
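Since Python 3.5, the standard library also offers math.isclose() for this
kind of comparison, using a relative tolerance instead of a fixed number of
decimal places; a small sketch:

```python
import math

# math.isclose (Python 3.5+) compares floats with a relative tolerance,
# avoiding a hand-rolled rounding-based equality test
print(math.isclose(0.1 + 0.2, 0.3))    # True: they differ only by rounding error
print(math.isclose(0.1 + 0.2, 0.31))   # False: the difference is genuine
```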
1.3 Complex Numbers
The complex data type is an immutable type that holds a pair of floats:
z = 3 + 4j, with attributes and methods such as z.real, z.imag, and
z.conjugate(). Complex math functions are imported from the cmath
module, which provides complex number versions of most of the trigonometric
and logarithmic functions that are in the math module, plus some complex
number-specific functions such as cmath.phase(), cmath.polar(), and
cmath.rect(), and the constants cmath.pi and cmath.e.
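As a sketch of how these cmath functions fit together (converting a complex
number to polar form and back):

```python
import cmath

z = 3 + 4j
r, phi = cmath.polar(z)       # modulus and phase of z
w = cmath.rect(r, phi)        # convert back to rectangular form

print(r)                      # 5.0, since |3+4j| = sqrt(9+16)
print(abs(w - z) < 1e-9)      # the round trip recovers z (up to float error)
```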
1.4 The fractions Module
Python has the fractions module to deal with rational numbers. For
instance, the following snippet shows the basic methods of this module:4

[general_problems/numbers/testing_floats.py]

from fractions import Fraction

def rounding_floats(number1, places):
    ''' some operations with float() '''
    return round(number1, places)

def float_to_fractions(number):
    return Fraction(*number.as_integer_ratio())

def get_denominator(number1, number2):
    a = Fraction(number1, number2)
    return a.denominator

def get_numerator(number1, number2):
    a = Fraction(number1, number2)
    return a.numerator

def test_testing_floats(module_name='this module'):
    number1 = 1.25
    number2 = 1
    number3 = -1
    number4 = 5/4
    number6 = 6
    assert(rounding_floats(number1, number2) == 1.2)
    assert(rounding_floats(number1*10, number3) == 10)
    assert(float_to_fractions(number1) == number4)
    assert(get_denominator(number2, number6) == number6)
    assert(get_numerator(number2, number6) == number2)
    s = 'Tests in {name} have {con}!'
    print(s.format(name=module_name, con='passed'))

if __name__ == '__main__':
    test_testing_floats()

4
All the code shown in this book includes the directory structure of where
you can find it in my git repository. Also notice that, when you write your
own code, the PEP 8 (Python Enhancement Proposal) guidelines recommend four
spaces per level of indentation, and only spaces (no tabs). This is not
explicit here because of the way LaTeX formats the text.
1.5 The decimal Module

When we need exact decimal floating-point numbers, Python has an additional
immutable type, decimal.Decimal. It can take any integer or even a string
as argument (and, starting from Python 3.1, also floats, with the
decimal.Decimal.from_float() function). This is an efficient alternative
when we do not want to deal with the rounding, equality, and subtraction
problems that floats have:

>>> sum(0.1 for i in range(10)) == 1.0
False
>>> from decimal import Decimal
>>> sum(Decimal("0.1") for i in range(10)) == Decimal("1.0")
True

While the math and cmath modules are not suitable for the decimal module,
its built-in methods, such as decimal.Decimal.exp(), are enough for most
of the problems.
1.6 Other Representations
1.7 Additional Exercises
By swapping all the occurrences of 10 with any other base in our previous
method, we can create a function that converts a decimal number to a number
in another base (2 ≤ base ≤ 10):

[general_problems/numbers/convert_from_decimal.py]

def convert_from_decimal(number, base):
    multiplier, result = 1, 0
    while number > 0:
        result += number % base * multiplier
        multiplier *= 10
        number = number // base
    return result

def test_convert_from_decimal():
    number, base = 9, 2
    assert(convert_from_decimal(number, base) == 1001)
    print('Tests passed!')

if __name__ == '__main__':
    test_convert_from_decimal()
    s = 'Tests in {name} have {con}!'
    print(s.format(name=module_name, con='passed'))

if __name__ == '__main__':
    test_convert_dec_to_any_base_rec()
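The listing for the recursive variant referenced by this test did not survive
extraction; a minimal sketch consistent with the test name (the function name
is taken from the test above, the body is an assumption) might be:

```python
def convert_dec_to_any_base_rec(number, base):
    # base case: a single digit is its own representation
    if number < base:
        return number
    # recurse on the quotient, then append the current digit
    return convert_dec_to_any_base_rec(number // base, base) * 10 + number % base

print(convert_dec_to_any_base_rec(9, 2))   # 1001
```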
import random

def testing_random():
    ''' testing the module random '''
    values = [1, 2, 3, 4]   # example values (the original setup was lost)
    print(random.choice(values))
    print(random.choice(values))
    print(random.choice(values))
    print(random.sample(values, 2))
    print(random.sample(values, 3))

    # shuffle in place
    random.shuffle(values)
    print(values)

    # create random integers
    print(random.randint(0, 10))
    print(random.randint(0, 10))

if __name__ == '__main__':
    testing_random()
Fibonacci Sequences
The module below shows how to find the nth number in a Fibonacci sequence
in three ways: (a) with a recursive O(2^n) runtime; (b) with an iterative
O(n) runtime; and (c) using a formula that gives an O(1) runtime but is not
precise after around the 70th element:

[general_problems/numbers/find_fibonacci_seq.py]

import math

def find_fibonacci_seq_rec(n):
    if n < 2:
        return n
    return find_fibonacci_seq_rec(n - 1) + find_fibonacci_seq_rec(n - 2)

def find_fibonacci_seq_iter(n):
    if n < 2:
        return n
    a, b = 0, 1
    for i in range(n):
        a, b = b, a + b
    return a

def find_fibonacci_seq_form(n):
    sq5 = math.sqrt(5)
    phi = (1 + sq5) / 2
    return int(math.floor(phi ** n / sq5))

def test_find_fib():
    n = 10
    assert(find_fibonacci_seq_rec(n) == 55)
    assert(find_fibonacci_seq_iter(n) == 55)
    assert(find_fibonacci_seq_form(n) == 55)
    print('Tests passed!')

if __name__ == '__main__':
    test_find_fib()
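A fourth option, anticipating the memoization techniques of a later chapter,
is to cache the recursive version; a sketch using the standard library's
functools.lru_cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # each value is computed once, turning the O(2^n) recursion into O(n)
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))   # 55
print(fib(20))   # 6765
```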
Primes
The following program finds whether a number is prime in three ways:
(a) brute force; (b) rejecting all the candidates up to the square root of
the number; and (c) using Fermat's little theorem with probabilistic tests:

[general_problems/numbers/finding_if_prime.py]

import math
import random

def finding_prime(number):
    num = abs(number)
    if num < 2:
        return False
    for x in range(2, num):
        if num % x == 0:
            return False
    return True

def finding_prime_sqrt(number):
    num = abs(number)
    if num < 2:
        return False
    for x in range(2, int(math.sqrt(num)) + 1):
        if num % x == 0:
            return False
    return True

def finding_prime_fermat(number):
    if number <= 102:
        for a in range(2, number):
            if pow(a, number - 1, number) != 1:
                return False
        return True
    else:
        for i in range(100):
            a = random.randint(2, number - 1)
            if pow(a, number - 1, number) != 1:
                return False
        return True

def test_finding_prime():
    number1 = 17
    number2 = 20
    assert(finding_prime(number1) == True)
    assert(finding_prime(number2) == False)
    assert(finding_prime_sqrt(number1) == True)
    assert(finding_prime_sqrt(number2) == False)
    assert(finding_prime_fermat(number1) == True)
    assert(finding_prime_fermat(number2) == False)
    print('Tests passed!')

if __name__ == '__main__':
    test_finding_prime()
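When we need all primes up to a bound, rather than a single primality test,
the classic sieve of Eratosthenes is a natural companion to the methods
above; a sketch:

```python
def sieve_primes(n):
    """ Return all primes <= n using the sieve of Eratosthenes. """
    if n < 2:
        return []
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # cross out every multiple of p, starting at p*p
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

print(sieve_primes(20))   # [2, 3, 5, 7, 11, 13, 17, 19]
```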
import sys

if __name__ == '__main__':
    if len(sys.argv) < 2:
        print("Usage: generate_prime.py number")
        sys.exit()
    else:
        number = int(sys.argv[1])
        print(generate_prime(number))
import numpy as np

def testing_numpy():
    ''' tests some features of NumPy arrays (example values) '''
    ax = np.array([1, 2, 3])
    ay = np.array([3, 4, 5])
    print(np.cos(ax))
    print(ax - ay)
    print(np.where(ax < 2, ax, 10))

    m = np.matrix([ax, ay, ax])
    print(m)
    print(m.T)

    grid1 = np.zeros(shape=(10, 10), dtype=float)
    grid2 = np.ones(shape=(10, 10), dtype=float)
    print(grid1)
    print(grid2)
    print(grid1[1] + 10)
    print(grid2[:, 2] * 2)

if __name__ == '__main__':
    testing_numpy()
NumPy arrays are also much more efficient than Python's lists, as we can
see in the benchmark tests below:

[general_problems/numbers/testing_numpy_speed.py]

import numpy
import time

def trad_version():
    t1 = time.time()
    X = range(10000000)
    Y = range(10000000)
    Z = []
    for i in range(len(X)):
        Z.append(X[i] + Y[i])
    return time.time() - t1

def numpy_version():
    t1 = time.time()
    X = numpy.arange(10000000)
    Y = numpy.arange(10000000)
    Z = X + Y
    return time.time() - t1

if __name__ == '__main__':
    print(trad_version())
    print(numpy_version())
Results:
3.23564291
0.0714290142059
Chapter 2
Mutability
Another property that any data type holds is mutability. Numbers are
obviously immutable; however, when it comes to sequence types, we can have
mutable types too. For instance, tuples, strings, and bytes are immutable,
while lists and byte arrays are mutable. Immutable types are more efficient
than mutable ones, and some collection data types2 can only work with
immutable data types.
Since any variable is an object reference in Python, copying mutable
objects can be tricky. When you say a = b you are actually pointing a to
where b points. Note that the idioms below make shallow copies; for nested
structures, use copy.deepcopy() from the standard copy module.
To make a copy of a list:

>>> newList = myList[:]
>>> newList2 = list(myList2)

To make a copy of a set (which we will see in the next chapter), use:

>>> people = {"Buffy", "Angel", "Giles"}
>>> slayers = people.copy()

2
Collection data types are the subject of the next chapter; they include,
for example, sets and dictionaries.
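The difference between a shallow copy and a deep copy (via the standard
copy module) shows up with nested mutable objects; a sketch:

```python
import copy

q = [2, 3]
p = [1, q, 4]

shallow = p[:]              # new outer list, but the inner list q is shared
deep = copy.deepcopy(p)     # recursively copies nested objects as well

q.append("buffy")
print(shallow[1])           # [2, 3, 'buffy'] -- still shares q
print(deep[1])              # [2, 3] -- unaffected by the mutation
```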
2.1 Strings
Python represents strings, i.e. sequences of characters, using the
immutable str type. In Python, all objects have two output forms: string
forms are designed to be human-readable, while representational forms are
designed to produce an output that, if fed to a Python interpreter,
reproduces the represented object. In the future, when we write our own
classes, it will be important to define the string representation of our
objects.
Unicode Strings
Python's Unicode encoding is used to include special characters in a
string (for example, whitespace). Starting from Python 3, all strings are
Unicode, not just plain bytes. To create a Unicode string, we use the
u prefix:

>>> u'Goodbye\u0020World!'
'Goodbye World!'

In the example above, the escape sequence indicates the Unicode character
with the ordinal value 0x0020 (a space). It is also useful to remember
that plain ASCII characters fit in a single byte, while a Unicode code
point may need more than one byte to be represented.
From Python 3.1 it is possible to omit field names, in which case Python
will in effect put them in for us, using numbers starting from 0. For
example:

>>> "{} {} {}".format("Python", "can", "count")
'Python can count'

However, using the operator + would allow a more concise style here. The
format method allows three conversion specifiers: s to force string form,
r to force representational form, and a to force representational form
using only ASCII characters:

>>> import decimal
>>> "{0} {0!s} {0!r} {0!a}".format(decimal.Decimal("99.9"))
"99.9 99.9 Decimal('99.9') Decimal('99.9')"
We can use split() to write our own method for erasing spaces from
strings:

>>> def erase_space_from_string(string):
...     s1 = string.split(" ")
...     s2 = "".join(s1)
...     return s2
The program below uses strip() to list every word and the number of times
it occurs, in alphabetical order, for some file:

[general_problems/strings/count_unique_words.py]

import string
import sys

def count_unique_word():
    words = {}  # create an empty dictionary
    strip = string.whitespace + string.punctuation + string.digits + "\"'"
    for filename in sys.argv[1:]:
        with open(filename) as file:
            for line in file:
                for word in line.lower().split():
                    word = word.strip(strip)
                    if len(word) > 2:
                        words[word] = words.get(word, 0) + 1
    for word in sorted(words):
        print("{0} occurs {1} times.".format(word, words[word]))
Similar methods are lstrip(), which returns a copy of the string with all
whitespace at the beginning of the string stripped away, and rstrip(),
which returns a copy of the string with all whitespace at the end of the
string stripped away.
Adaptations of the previous methods are rfind(string), which returns the
index within the string of the last (rightmost) occurrence of string, and
rindex(string), which does the same but raises an error if the substring
cannot be found.

The count(t, start, end) Method:
Returns the number of occurrences of the string t in the string s:

>>> slayer = "Buffy is Buffy is Buffy"
>>> slayer.count("Buffy", 0, -1)
2
>>> slayer.count("Buffy")
3
2.2 Tuples
>>> t = 1, 5, 7
>>> t.index(5)
1
Tuple Unpacking
In Python, any iterable can be unpacked using the sequence unpacking
operator, *. When used with two or more variables on the left-hand side of
an assignment, one of which is preceded by *, items are assigned to the
variables, with all those left over assigned to the starred variable:

>>> x, *y = (1, 2, 3, 4)
>>> x
1
>>> y
[2, 3, 4]
Named Tuples
Python's collections package contains a sequence data type called named
tuple. This behaves just like the built-in tuple, with the same performance
characteristics, but it also carries the ability to refer to items in the
tuple by name as well as by index position. This allows the creation of
aggregates of data items:

>>> import collections
>>> MonsterTuple = collections.namedtuple("Monsters", "name age power")
>>> monster = MonsterTuple("Vampire", 230, "immortal")
>>> monster
Monsters(name='Vampire', age=230, power='immortal')
[general_problems/tuples/namedtuple_example.py]

from collections import namedtuple

def namedtuple_example():
    ''' show an example for named tuples:
    >>> namedtuple_example()
    slayer
    '''
    sunnydale = namedtuple('Character', 'name job')  # body reconstructed
    buffy = sunnydale('Buffy', 'slayer')
    print(buffy.job)

if __name__ == '__main__':
    namedtuple_example()
2.3 Lists
In computer science, arrays are a very simple data structure where elements
are sequentially stored in contiguous memory, and linked lists are
structures where several separate nodes link to each other. Iterating over
the contents of the data structure is equally efficient for both kinds, but
directly accessing an element at a given index has O(1) (complexity)
runtime5 in an array, while it is O(n) in a linked list with n nodes (where
you would have to traverse the list from the beginning). Furthermore, in a
linked list, once you know where you want to insert something, insertion is
O(1), no matter how many elements the list has. For arrays, an insertion
would have to move all elements that are to the right of the insertion
point, or move all the elements to a larger array if needed, and is
therefore O(n).
In Python, the closest object to an array is a list, which is a dynamically
resized array and has nothing to do with linked lists. Why mention linked
lists? Linked lists are a very important abstract data structure (we will
see more about them in a following chapter) and it is fundamental to
understand what makes them so different from arrays (or Python's lists)
for when we need to select the right data structure for a specific problem.

5
The Big-O notation is a key to understanding algorithms! We will learn
more about this in the following chapters and use the concept extensively
in our studies. For now just keep in mind that O(1) < O(n) < O(n^2), etc.
>>> q = [2, 3]
>>> p = [1, q, 4]
>>> p[1].append("buffy")
>>> p
[1, [2, 3, 'buffy'], 4]
>>> q
[2, 3, 'buffy']
For insertion and removal, lists perform best (O(1)) when items are added
or removed at the end, using the methods append() and pop(), respectively.
The worst performance (O(n)) occurs when we perform operations that need to
search for items in the list, for example, using remove() or index(), or
using in for membership testing.6
If fast searching or membership testing is required, a collection type such
as a set or a dictionary may be a more suitable choice (as we will see in
the next chapter). Alternatively, lists can provide fast searching if they
are kept in order by being sorted (we will see searching methods that
perform in O(log n) for sorted sequences, in particular binary search, in
the following chapters).
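The standard library's bisect module implements exactly this idea for
sorted lists; a sketch:

```python
import bisect

a = [-1, 4, 5, 7, 10]            # the list must already be sorted
i = bisect.bisect_left(a, 7)     # O(log n) binary search for the position of 7
print(i)                         # 3

bisect.insort(a, 6)              # insert while keeping the list sorted
print(a)                         # [-1, 4, 5, 6, 7, 10]
```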
>>> people.remove("Buffy")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: list.remove(x): x not in list
>>> a = [-1, 4, 5, 7, 10]
>>> del a[0]
>>> a
[4, 5, 7, 10]
>>> del a[2:3]
>>> a
[4, 5, 10]
>>> del a  # also used to delete an entire variable
Garbage is memory occupied by objects that are no longer referenced, and
garbage collection is a form of automatic memory management, freeing the
memory occupied by the garbage.
List Unpacking
Similar to tuple unpacking:

>>> x, *y = [1, 2, 3, 4]
>>> x
1
>>> y
[2, 3, 4]
Python also has a related concept called starred arguments, which can be
used when passing arguments to a function:

>>> def multiply(*args):
...     result = 1
...     for x in args:
...         result *= x
...     return result
>>> multiply(2, 3, 4)
24
>>> numbers = [2, 3, 4]
>>> multiply(*numbers)
24
List Comprehensions
A list comprehension is an expression and loop (with an optional condition)
enclosed in brackets:8

[item for item in iterable]
[expression for item in iterable]
[expression for item in iterable if condition]

8
The Google Python Style guide endorses list comprehensions and generator
expressions, saying that they provide a concise and efficient way to create
lists and iterators without resorting to the use of map(), filter(), or
lambda.
>>> import math
>>> d = [round(math.pi, i) for i in range(1, 6)]
>>> d
[3.1, 3.14, 3.142, 3.1416, 3.14159]
>>> words = 'Buffy is awesome and a vampire slayer'.split()
>>> e = [[w.upper(), w.lower(), len(w)] for w in words]
>>> for i in e:
...     print(i)
...
['BUFFY', 'buffy', 5]
['IS', 'is', 2]
['AWESOME', 'awesome', 7]
['AND', 'and', 3]
['A', 'a', 1]
['VAMPIRE', 'vampire', 7]
['SLAYER', 'slayer', 6]
def triple_generator():   # the wrapper name is illustrative
    return ((x, y, z)
            for x in range(5)
            for y in range(5)
            if x != y
            for z in range(5)
            if y != z)
Big-O Efficiency of Python List Operations
Operation           Big-O Efficiency
index []            O(1)
index assignment    O(1)
append              O(1)
pop()               O(1)
pop(i)              O(n)
insert(i, item)     O(n)
del operator        O(n)
iteration           O(n)
contains (in)       O(n)
get slice [x:y]     O(k)
del slice           O(n)
set slice           O(n+k)
reverse             O(n)
concatenate         O(k)
sort                O(n log n)
multiply            O(nk)

2.4 Bytes and Byte Arrays
Python provides two data types for handling raw bytes: bytes, which is
immutable, and bytearray, which is mutable. Both types hold a sequence of
zero or more 8-bit unsigned integers in the range 0...255. The bytes type
is very similar to the string type, and bytearray provides mutating
methods similar to lists.
Chapter 3
3.1 Sets

In Python, a set is an unordered collection data type that is iterable,
mutable, and has no duplicate elements. Sets are used for membership
testing and eliminating duplicate entries. Sets have O(1) insertion, so the
runtime of union is O(m + n). For intersection, it is only necessary to
traverse the smaller set, so the runtime is O(n).1

1
Python's collections package has support for ordered sets. This data type
enforces some predefined comparison for its members.
Frozen Sets
Frozen sets are immutable objects that only support methods and operators that produce a result without affecting the frozen set or sets to which
they are applied.
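Because frozen sets are hashable, they can do something ordinary sets
cannot: serve as members of other sets or as dictionary keys. A sketch:

```python
vowels = frozenset("aeiou")
print("a" in vowels)             # membership testing works as usual

# two frozensets with the same members are equal, so duplicates collapse
pairs = {frozenset({1, 2}), frozenset({2, 1})}
print(len(pairs))                # 1

# mutating methods such as add() are simply absent
print(hasattr(vowels, "add"))    # False
```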
if __name__ == '__main__':
    test_sets_operations_with_lists()
3.2 Dictionaries

Dictionaries in Python are implemented using hash tables. A hash function
computes an integer value from an arbitrary object in constant time, which
can be used as an index into an array:

>>> hash(42)
42
>>> hash("hello")
355070280260770553
def usual_dict(dict_data):
    newdata = {}
    for k, v in dict_data:
        if k not in newdata:
            newdata[k] = []
        newdata[k].append(v)
    return newdata

def setdefault_dict(dict_data):
    newdata = {}
    for k, v in dict_data:
        newdata.setdefault(k, []).append(v)
    return newdata

def test_setdef(module_name='this module'):
    # example data (the original listing was truncated)
    dict_data = (('key1', 'value1'),
                 ('key1', 'value2'),
                 ('key2', 'value3'),
                 ('key2', 'value4'),
                 ('key2', 'value5'),)
    print(usual_dict(dict_data))
    print(setdefault_dict(dict_data))
    s = 'Tests in {name} have {con}!'
    print(s.format(name=module_name, con='passed'))

if __name__ == '__main__':
    test_setdef()
""" There
10000,
30000,
50000,
70000,
90000,
results are:
0.192,
0.002
0.600,
0.002
1.000,
0.002
1.348,
0.002
1.755,
0.002
3.2. DICTIONARIES
110000,
130000,
150000,
170000,
190000,
210000,
230000,
250000,
270000,
290000,
310000,
2.194,
2.635,
2.951,
3.405,
3.743,
4.142,
4.577,
4.797,
5.371,
5.690,
5.977,
53
0.002
0.002
0.002
0.002
0.002
0.002
0.002
0.002
0.002
0.002
0.002
So we can see the linear tile for lists, and constant for dict!
Big-O Efficiency of Python Dictionary Operations
Operation
Big-O Efficiency
copy
O(n)
get item
O(1)
set item
O(1)
delete item
O(1)
contains (in)
O(1)
iteration
O(n)
"""
Dictionaries also support reverse iteration using reversed(). In addition,
it is good to note that the Google Python Style guide advises that default
iterators should be used for types that support them:

[Good] for key in adict: ...
       if key not in adict: ...
[Bad]  for key in adict.keys(): ...
       if key not in adict.keys(): ...

A long chain of if/elif statements dispatching on an action can also be
reduced to a dictionary of functions:

functions = dict(a=add_to_dict, e=edit_dict,...)
functions[actions](db)
3.3 Default Dictionaries

Default dictionaries are an additional unordered mapping type provided by
Python's collections.defaultdict. They have all the operators and methods
that a built-in dictionary provides, but they also gracefully handle
missing keys:

[general_examples/dicts/defaultdict_example.py]
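The listing referenced above did not survive extraction; a minimal sketch of
the kind of example it likely contained (the grouping data is illustrative):

```python
from collections import defaultdict

def defaultdict_example():
    pairs = [('a', 1), ('b', 2), ('a', 3)]   # illustrative data
    d = defaultdict(list)                     # missing keys default to a new list
    for key, value in pairs:
        d[key].append(value)                  # no KeyError on first access
    return dict(d)

print(defaultdict_example())   # {'a': [1, 3], 'b': [2]}
```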
Ordered Dictionaries
Ordered dictionaries are an ordered mapping type provided by Python's
collections.OrderedDict. They have all the methods and properties of a
built-in dict, but in addition they store items in the insertion order:

[general_examples/dicts/OrderedDict_example.py]

from collections import OrderedDict

def OrderedDict_example():
    pairs = [('a', 1), ('b', 2), ('c', 3)]

    d1 = {}
    for key, value in pairs:
        if key not in d1:
            d1[key] = []
        d1[key].append(value)
    for key in d1:
        print(key, d1[key])

    d2 = OrderedDict(pairs)
    for key in d2:
        print(key, d2[key])

if __name__ == '__main__':
    OrderedDict_example()

"""
a [1]
c [3]
b [2]
a 1
b 2
c 3
"""
Counter Dictionaries
A specialised Counter type (a dict subclass for counting hashable objects)
is provided by Python's collections.Counter:3

[general_examples/dicts/Counter_example.py]

from collections import Counter

def Counter_example():
    ''' show some examples for Counter: it is a dictionary that maps
        the items to the number of occurrences '''
    seq1 = [1, 2, 3, 5, 1, 2, 5, 5, 2, 5, 1, 4]
    seq_counts = Counter(seq1)
    print(seq_counts)

    # we can increment manually or use the update() method
    seq2 = [1, 2, 3]
    seq_counts.update(seq2)
    print(seq_counts)

    seq3 = [1, 4, 3]
    for key in seq3:
        seq_counts[key] += 1
    print(seq_counts)

    # also, we can use set operations such as a-b or a+b
    seq_counts_2 = Counter(seq3)
    print(seq_counts_2)
    print(seq_counts + seq_counts_2)
    print(seq_counts - seq_counts_2)

if __name__ == '__main__':
    Counter_example()

"""
Counter({5: 4, 1: 3, 2: 3, 3: 1, 4: 1})
Counter({1: 4, 2: 4, 5: 4, 3: 2, 4: 1})
Counter({1: 5, 2: 4, 5: 4, 3: 3, 4: 2})
Counter({1: 1, 3: 1, 4: 1})
Counter({1: 6, 2: 4, 5: 4, 3: 4, 4: 3})
Counter({1: 4, 2: 4, 5: 4, 3: 2, 4: 1})
"""

3
In computer science, FIFO means first-in, first-out. Python's lists append
and pop items at the end, so they are LIFO: last-in, first-out.
3.4 Additional Exercises

if __name__ == '__main__':
    test_find_top_N_recurring_words()
Anagrams
The following program finds whether two words are anagrams. Since sets do
not count occurrences, and sorting a list is O(n log n), hash tables can be
the best solution in this case. The procedure we use is: we scan the first
string and add all the character occurrences. Then we scan the second
string, decreasing all the character occurrences. In the end, if all the
entries are zero, the strings are anagrams:

[general_problems/dicts/verify_two_strings_are_anagrams.py]

import string

def verify_two_strings_are_anagrams(str1, str2):
    ana_table = {key: 0 for key in string.ascii_lowercase}
    for i in str1:
        ana_table[i] += 1
    for i in str2:
        ana_table[i] -= 1
    # verify whether all the entries are 0
    return len(set(ana_table.values())) < 2

def test_verify_two_strings_are_anagrams():
    str1 = 'marina'
    str2 = 'aniram'
    assert(verify_two_strings_are_anagrams(str1, str2) == True)
    str1 = 'google'
    str2 = 'gouglo'
    assert(verify_two_strings_are_anagrams(str1, str2) == False)
    print('Tests passed!')

if __name__ == '__main__':
    test_verify_two_strings_are_anagrams()
Another way to find whether two words are anagrams is to use the properties
of hashing functions, where different character counts are expected to give
different hash values. In the following program, ord() returns an integer
representing the Unicode code point of the character when the argument is a
unicode object, or the value of the byte when the argument is an 8-bit
string:

[general_problems/dicts/find_anagram_hash_function.py]

def hash_func(astring, tablesize):
    sum = 0
    for pos in range(len(astring)):
        sum = sum + ord(astring[pos])
    return sum % tablesize

def find_anagram_hash_function(word1, word2):
    tablesize = 11
    return hash_func(word1, tablesize) == hash_func(word2, tablesize)
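Note that this hash comparison is only a necessary condition: unrelated
words can collide and be wrongly reported as anagrams. For example (the
words here are chosen to collide):

```python
def hash_func(astring, tablesize):
    # the same additive hash as above
    return sum(ord(ch) for ch in astring) % tablesize

# "ad" and "bc" are not anagrams, yet 97+100 == 98+99 == 197,
# so the hash-based check cannot tell them apart
print(hash_func("ad", 11) == hash_func("bc", 11))   # True
```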
Sums of Paths
The following program uses two different dictionary containers to determine
the number of ways two dice can sum to a certain value:

[general_problems/dicts/find_dice_probabilities.py]

from collections import Counter, defaultdict

def find_dice_probabilities(S, n_faces=6):
    if S > 2*n_faces or S < 2:
        return None
    cdict = Counter()
    ddict = defaultdict(list)
    for dice1 in range(1, n_faces+1):
        for dice2 in range(1, n_faces+1):
            t = [dice1, dice2]
            cdict[dice1+dice2] += 1
            ddict[dice1+dice2].append(t)
    return [cdict[S], ddict[S]]

def test_find_dice_probabilities(module_name='this module'):
    n_faces = 6
    S = 5
    results = find_dice_probabilities(S, n_faces)
    print(results)
    assert(results[0] == len(results[1]))

if __name__ == '__main__':
    test_find_dice_probabilities()
Finding Duplicates
The program below uses dictionaries to find and delete all the duplicated
characters in a string:

[general_problems/dicts/delete_duplicate_char_str.py]

import string

def delete_unique_word(str1):
    table_c = {key: 0 for key in string.ascii_lowercase}
    for i in str1:
        table_c[i] += 1
    # keep only the characters that occur exactly once
    result = ''
    for ch in str1:
        if table_c[ch] == 1:
            result += ch
    return result

def test_delete_unique_word():
    str1 = "google"
    assert(delete_unique_word(str1) == 'le')
    print('Tests passed!')

if __name__ == '__main__':
    test_delete_unique_word()
Chapter 4
Modules in Python
In Python, functions are defined with the built-in keyword def. When def
is executed, a function object is created together with its object
reference. If we do not define a return value, Python automatically returns
None (as in C, we call a function that does not return a value a
procedure).
An activation record is created every time we invoke a method: information
is put on the stack to support invocation. Activation records are processed
in the following order:

Activation Records
1. the actual parameters of the method are pushed onto the stack,
2. the return address is pushed onto the stack,
3. the top-of-stack index is incremented by the total amount required by
the local variables within the method,
4. a jump to the method is made.

The process of unwinding an activation record happens in the following
order:
1. the top-of-stack index is decremented by the total amount of memory
consumed,
The __init__.py file
In the simplest case, __init__.py can just be an empty file, but it can
also execute initialization code for the package or set the __all__
variable:

__all__ = ["file1", ...]

The statement from module import * then imports every object in the module,
except those whose names begin with an underscore, or, if the module has a
global __all__ variable, only the names listed in it.
The __name__ Variable
When the Python interpreter reads a source file, it sets the __name__
variable. If the file is imported as a module, the code guarded by
if __name__ == '__main__': will not be executed. On the other hand, if we
run the .py file directly, Python sets __name__ to '__main__', and every
instruction following the above statement will be executed.
The variables sys.ps1 and sys.ps2 define the strings used as primary and
secondary prompts. The variable sys.argv allows us to use the arguments
passed on the command line inside our programs:

import sys

def main():
    # print command line arguments
    for arg in sys.argv[1:]:
        print(arg)

if __name__ == "__main__":
    main()
The built-in method dir() is used to find which names a module defines (all
types of names: variables, modules, functions). It returns a sorted list of
strings:

>>> import sys
>>> dir(sys)
['__name__', 'argv', 'builtin_module_names', 'copyright', 'exit',
 'maxint', 'modules', 'path', 'ps1', 'ps2', 'setprofile', 'settrace',
 'stderr', 'stdin', 'stdout', 'version']

It does not list the names of built-in functions and variables. Still, we
can see that dir() is useful to find all the methods or attributes of an
object.
4.2 Control Flow

if
The if statement substitutes for the switch or case statements of other
languages:1

>>> x = int(input("Please enter a number: "))
>>> if x < 0:
...     x = 0
...     print("Negative changed to zero")
... elif x == 0:
...     print("Zero")
... elif x == 1:
...     print("Single")
... else:
...     print("More")

1
Note that colons are used with else, elif, and in any other place where a
suite is to follow.
for
The for statement in Python differs from the one in C or Pascal. Rather
than always iterating over an arithmetic progression of numbers (as in
Pascal), or giving the user the ability to define both the iteration step and
halting condition (as in C), Python's for statement iterates over the items
of any sequence (e.g., a list or a string), in the order that they appear in
the sequence:
>>> a = ["buffy", "willow", "xander", "giles"]
>>> for i in range(len(a)):
...     print(a[i])
buffy
willow
xander
giles
The Google Python Style guide sets the following rules for using implicit
False in Python:

• Never use == or != to compare singletons, such as the built-in variable
None. Use is or is not instead.

• Beware of writing if x: when you really mean if x is not None.

[Good]
if not users: print("no users")
if foo == 0: self.handle_zero()
if i % 10 == 0: self.handle_multiple_of_ten()

[Bad]
if len(users) == 0: print("no users")
if foo is not None and not foo: self.handle_zero()
if not i % 10: self.handle_multiple_of_ten()
Generators are very robust and efficient and they should be considered every
time you deal with a function that returns a sequence or creates a loop. For
example, the following program implements a Fibonacci sequence using the
iterator paradigm:

def fib_generator():
    a, b = 0, 1
    while True:
        yield b
        a, b = b, a + b

if __name__ == "__main__":
    fib = fib_generator()
    print(next(fib))
    print(next(fib))
    print(next(fib))
    print(next(fib))
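Because the generator is an ordinary iterator, we can also slice it lazily with itertools.islice instead of calling next() repeatedly (a sketch reusing the same fib_generator):

```python
import itertools

def fib_generator():
    a, b = 0, 1
    while True:
        yield b
        a, b = b, a + b

# take the first ten Fibonacci numbers without building an infinite list
first_ten = list(itertools.islice(fib_generator(), 10))
print(first_ten)  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```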
The built-in function range() generates arithmetic progressions, for
example:

>>> list(range(11))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> list(range(4, 10))
[4, 5, 6, 7, 8, 9]
>>> list(range(0, 10, 3))
[0, 3, 6, 9]
import collections
minus_one_dict = collections.defaultdict(lambda: -1)
point_zero_dict = collections.defaultdict(lambda: (0, 0))
message_dict = collections.defaultdict(lambda: "No message")

4.3
File Handling
File handling is very easy in Python. For example, the program below reads
a file and deletes all of its blank lines:

[general_problems/modules/remove_blank_lines.py]

import os
import sys

def read_data(filename):
    lines = []
    fh = None
    try:
        fh = open(filename)
        for line in fh:
            if line.strip():
                lines.append(line)
    except (IOError, OSError) as err:
        print(err)
    finally:
        if fh is not None:
            fh.close()
    return lines

def write_data(lines, filename):
    fh = None
    try:
        fh = open(filename, "w")
        fh.writelines(lines)
    except (IOError, OSError) as err:
        print(err)
    finally:
        if fh is not None:
            fh.close()
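In modern Python, the open/try/finally pattern above is usually written with a context manager, which closes the file automatically. A sketch (the filename is illustrative):

```python
def read_nonblank_lines(filename):
    # 'with' closes the file automatically, even if an error occurs
    with open(filename) as fh:
        return [line for line in fh if line.strip()]

# demo: write a small file, then read back only its non-blank lines
with open("demo.txt", "w") as fh:
    fh.write("first\n\nsecond\n   \nthird\n")

print(read_nonblank_lines("demo.txt"))  # ['first\n', 'second\n', 'third\n']
```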
If read() is called with no argument, the entire contents of the file will
be read and returned. If the end of the file has been reached, read() will
return an empty string:

>>> f.read()
'This is the entire file.\n'
>>> f.read()
''
[general_problems/files/change_ext_file.py]

import os
import sys
import shutil

def change_file_ext():
    if len(sys.argv) < 3:
        print("Usage: change_ext.py filename.old_ext new_ext")
        sys.exit()
    name = os.path.splitext(sys.argv[1])[0] + "." + sys.argv[2]
    print(name)
    try:
        shutil.copyfile(sys.argv[1], name)
    except OSError as err:
        print(err)

if __name__ == "__main__":
    change_file_ext()
[general_problems/files/export_pickle.py]

import os
import sys
import gzip
import pickle

def export_pickle(data, filename="test.dat", compress=False):
    fh = None
    try:
        if compress:
            fh = gzip.open(filename, "wb")  # write binary
        else:
            fh = open(filename, "wb")  # compact binary pickle format
        pickle.dump(data, fh, pickle.HIGHEST_PROTOCOL)
        return True
    except (EnvironmentError, pickle.PicklingError) as err:
        print("{0}: export error: {1}".format(
            os.path.basename(sys.argv[0]), err))
        return False
    finally:
        if fh is not None:
            fh.close()

def test_export_pickle():
    mydict = {"a": 1, "b": 2, "c": 3}
    export_pickle(mydict)

if __name__ == "__main__":
    test_export_pickle()
In general, booleans, numbers, and strings can be pickled, as can instances
of classes and built-in collection types (provided they contain only picklable
objects, i.e., their __dict__ is picklable).
Reading Data with Pickle
The example below shows how to read pickled data:

[general_problems/files/import.py]

import pickle

def import_pickle(filename):
    fh = None
    try:
        fh = open(filename, "rb")
        return pickle.load(fh)
    except (EnvironmentError, pickle.UnpicklingError) as err:
        print(err)
        return None
    finally:
        if fh is not None:
            fh.close()

def test_import_pickle():
    pkl_file = "test.dat"
    mydict = import_pickle(pkl_file)
    print(mydict)

if __name__ == "__main__":
    test_import_pickle()
4.4
Multiprocessing and Threading
The queue.Queue class handles all the locking internally: we can rely
on it to serialize accesses, meaning that only one thread at a time has access
to the data (FIFO). The program will not terminate while it has any
non-daemon threads running. This can be a problem: once the worker
threads have done their work they are finished, but they are technically
still running. The solution is to turn the workers into daemon threads; the
program then exits as soon as only daemon threads remain. The method
queue.Queue.join() blocks until all items in the queue have been processed.
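A minimal sketch of this pattern with queue.Queue and daemon worker threads (the worker logic, doubling each item, is illustrative):

```python
import queue
import threading

results = []

def worker(q):
    while True:
        item = q.get()
        results.append(item * 2)  # "process" the item
        q.task_done()

q = queue.Queue()
for _ in range(2):
    # daemon threads will not keep the program alive at exit
    t = threading.Thread(target=worker, args=(q,), daemon=True)
    t.start()

for item in range(5):
    q.put(item)

q.join()  # blocks until every enqueued item is marked done
print(sorted(results))  # [0, 2, 4, 6, 8]
```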
4.5
Handling Exceptions
When an exception is raised and not handled, Python outputs a traceback
along with the exception's error message. A traceback (sometimes called a
backtrace) is a list of all the calls made from the point where the unhandled
exception occurred back to the top of the call stack.
We can handle predictable exceptions with the try-except-finally
paradigm:

try:
    try_suite
except exception1 as variable1:
    exception_suite1
...
except exceptionN as variableN:
    exception_suiteN

If the statements in the try block's suite all execute without raising an
exception, the except blocks are skipped. If an exception is raised inside
the try block, control is immediately passed to the suite corresponding to
the first matching exception. This means that any statements in the suite
that follow the one that caused the exception will not be executed.
The raise statement allows the programmer to force a specified exception
to occur:

import sys

try:
    f = open("myfile.txt")
    s = f.readline()
    i = int(s.strip())
except IOError as err:
    print("I/O error({0}): {1}".format(err.errno, err.strerror))
except ValueError:
    print("Could not convert data to an integer.")
except:
    print("Unexpected error:", sys.exc_info()[0])
    raise
class Error(Exception):
pass
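Such a user-defined exception is raised and caught like any built-in one; a sketch (the InputError subclass and the message are illustrative):

```python
class Error(Exception):
    """Base class for exceptions in this module."""
    pass

class InputError(Error):
    """Raised when an input value is invalid."""
    def __init__(self, message):
        self.message = message

try:
    raise InputError("negative radius")
except InputError as err:
    caught = err.message

print(caught)  # negative radius
```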
4.6
Debugging Code
Interactive Running:
If you have some code in a source file and you want to explore it interactively,
you can run Python with the -i switch, like this: python -i example.py.
pdb:
The debugger pdb can be used from the command line:

$ python3 -m pdb program.py
Alternatively, we can set a breakpoint directly inside the program:

import pdb
pdb.set_trace()
To perform the inspection, type: s for step, p for print, and c for continue.
Profiling
If a program runs very slowly or consumes far more memory than we expect,
the problem is most often due to our choice of algorithms or data structures
or due to some inefficient implementation. Some performance verification is
useful, though:

• prefer tuples to lists for read-only data;

• use generators rather than large lists or tuples for iteration;

• when creating large strings out of small strings, instead of concatenating
the small ones, accumulate them all in a list and join the list of strings
at the end. A good example is given by the Google Python Style guide:
[Good]
items = ["<table>"]
for last_name, first_name in employee_list:
    items.append("<tr><td>%s, %s</td></tr>" % (last_name, first_name))
items.append("</table>")
employee_table = "".join(items)

[Bad]
employee_table = "<table>"
for last_name, first_name in employee_list:
    employee_table += "<tr><td>%s, %s</td></tr>" % (last_name, first_name)
employee_table += "</table>"
import cProfile
cProfile.run('main()')
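Besides cProfile, the standard library's timeit module is handy for timing small snippets (a sketch; the snippet and repetition count are arbitrary):

```python
import timeit

# total time, in seconds, for 1000 runs of joining 100 small strings
t = timeit.timeit("''.join(str(n) for n in range(100))", number=1000)
print(t)
```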
4.7
Unit Testing
It is good practice to write tests for individual functions, classes, and
methods, to ensure they behave as expected. Python's standard library
provides two unit-testing modules: doctest and unittest. There are also
third-party testing tools such as nose and py.test.

doctest

Use it when writing the tests inside the modules' and functions' docstrings.
Then just add three lines at the end:

if __name__ == "__main__":
    import doctest
    doctest.testmod()
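A minimal sketch of the doctest workflow (the square function is illustrative): the REPL-style lines in the docstring are executed and compared against the expected output when the module is run.

```python
def square(x):
    """Return x squared.

    >>> square(3)
    9
    >>> square(-2)
    4
    """
    return x * x

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # silent when all docstring tests pass
```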
Test Nomenclature
Test fixtures The code necessary to set up a test (for example, creating
an input file for testing and deleting it afterwards).

Test cases The basic unit of testing.

Test suites Collections of test cases, created by subclassing unittest.TestCase,
where each test method has a name beginning with "test".
Chapter 5
Object-Oriented Design
Suppose we want to define an object in Python to represent a circle. We
could remember Python's collections module and create a named tuple for
our circle:

>>> import collections
>>> Circle = collections.namedtuple("Circle", "x y radius")
>>> Circle
<class '__main__.Circle'>
>>> circle = Circle(13, 84, 9)
>>> circle
Circle(x=13, y=84, radius=9)
However, many things are missing here. First, there is no guarantee that
anyone who uses our circle data will not type an invalid input value, such
as a negative number for the radius. Second, how could we associate with
our circle the operations that properly belong to it, such as its area or
perimeter?
For the first problem, we can see that the inability to validate when
creating an object is a really bad aspect of taking a purely procedural
approach to programming. Even if we decided to handle invalid circle
inputs with many exception handlers, we would still have a data container
that is not intrinsically made and validated for its real purpose. Imagine
now if we had chosen a list instead of the named tuple: how would we handle
the fact that lists carry ordering behavior that makes no sense for a circle?
It is clear from the example above that we need a way to create an
object that has only the properties that we expect it to have. In other
words, we want to package data and restrict its methods. That is what
object-oriented programming allows you to do: to create your own custom
types.
5.1
Classes and Objects
Classes are the way we gather special predefined data and methods together.
We use them by creating objects, which are instances of a particular class.
The simplest form of a class in Python looks like the following snippet:

class ClassName:
    <statement-1>
    ...
    <statement-N>

>>> x = ClassName()  # class instantiation
Class Instantiation
Class instantiation uses function notation to create objects in a known
initial state. The instantiation operation creates an empty object that has
individuality. However, multiple names (in multiple scopes) can be bound
to the same object (also known as aliasing). In Python, when an object is
created, first the special method __new__() is called (the constructor) and
then __init__() initializes it.
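The order of these two hooks can be observed directly (a sketch; the Point class is illustrative):

```python
class Point:
    def __new__(cls, *args):
        print("__new__ creates the object")
        return super().__new__(cls)

    def __init__(self, x, y):
        print("__init__ initializes it")
        self.x, self.y = x, y

p = Point(3, 4)  # prints __new__'s message first, then __init__'s
```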
Attributes
Objects have the attributes of their classes, which are methods and data.
Method attributes are functions whose first argument is the instance on
which they are called to operate (which in Python is conventionally called
self). Attributes are any names following a dot. References to names in
modules are attribute references: in the expression modname.funcname,
modname is a module object and funcname is one of its attributes. Attributes
may be read-only or writable. Writable attributes may be deleted with the
del statement.
Namespaces
A namespace is a mapping from names to objects. Most namespaces are
currently implemented as Python dictionaries. Examples of namespaces
are: the set of built-in names, the global names in a module, and the local
names in a function invocation. The statements executed by the top-level
invocation of the interpreter, either read from a script file or typed
interactively, are considered part of a module called __main__, so they have
their own global namespace.
Scope
A scope is a textual region of a Python program where a namespace is
directly accessible. Although scopes are determined statically, they are used
dynamically. Scopes are determined textually: the global scope of a function
defined in a module is that module's namespace. When a class definition is
entered, a new namespace is created and used as the local scope.
5.2
Principles of OOP
Specialization
Specialization (or inheritance) is the procedure of creating a new class that
inherits all the attributes of its super class (also called the base class). Any
method can be overridden (reimplemented) in a subclass (in Python, all
methods are virtual). Inheritance is described as an is-a relationship.
Furthermore, the Google Python Style Guide advises that if a class
inherits from no other base classes, we should explicitly inherit it from
Python's highest class, object:

class OuterClass(object):

    class InnerClass(object):
        pass
Polymorphism
Polymorphism (or dynamic method binding) is the principle whereby
methods can be redefined inside subclasses. In other words, if we have an
object of a subclass and we call a method that is also defined in the
superclass, Python will use the method defined in the subclass. If we need
to recover the superclass's method, we can easily call it using the built-in
super().
For example, all instances of a custom class are hashable by default in
Python. This means that hash() can be called on them, allowing them to
be used as dictionary keys and stored in sets.
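A small sketch of overriding a method and calling back into the superclass with super() (the Animal/Dog names are illustrative, not from the text):

```python
class Animal:
    def speak(self):
        return "..."

class Dog(Animal):
    def speak(self):
        # override, but reuse the superclass's result via super()
        return super().speak() + " woof"

print(Dog().speak())  # ... woof
```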
Aggregation
Aggregation (or composition) defines the process whereby a class includes
one or more instance variables that are of other classes' types. It is a has-a
relationship. In Python, every class uses inheritance (all classes are custom
classes derived from the object base class), and most use aggregation, since
most classes have instance variables of various types.
class Circle(Point):
    def __init__(self, radius, x=0, y=0):
        super().__init__(x, y)  # creates/initializes the Point part
        self.radius = radius

    def edge_distance_from_origin(self):
        return abs(self.distance_from_origin() - self.radius)

    def area(self):
        return math.pi * (self.radius ** 2)

    def circumference(self):
        return 2 * math.pi * self.radius

    def __eq__(self, other):
        # compare radius first to avoid infinite recursion
        return self.radius == other.radius and super().__eq__(other)

    def __repr__(self):
        return "circle ({0.radius!r}, {0.x!r})".format(self)

    def __str__(self):
        return repr(self)
>>> import ShapeClass as shape
>>> a = shape.Point(3, 4)
>>> a
point (3, 4)
>>> repr(a)
'point (3, 4)'
>>> str(a)
'(3, 4)'
>>> a.distance_from_origin()
5.0
>>> c = shape.Circle(3, 2, 1)
>>> c
circle (3, 2)
>>> repr(c)
'circle (3, 2)'
>>> str(c)
'circle (3, 2)'
>>> c.circumference()
18.84955592153876
>>> c.edge_distance_from_origin()
0.7639320225002102
5.3
Python Design Patterns

Decorator Pattern
Decorators (also known as the @ notation) are a tool to elegantly specify
some transformation on functions and methods. The decorator pattern
allows us to wrap an object that provides core functionality with other
objects that alter that functionality. For example, the snippet below was
copied from the Google Python Style guide:

class C(object):
    def method(self):
        ...
    method = my_decorator(method)

can be written as:

class C(object):
    @my_decorator
    def method(self):
        ...
@benchmark
def random_tree(n):
    temp = [n for n in range(n)]
    for i in range(n + 1):
        temp[random.choice(temp)] = random.choice(temp)
    return temp

if __name__ == "__main__":
    random_tree(10000)

"""
python3 do_benchmark.py
random_tree 0.04999999999999999
"""
Observer Pattern
The observer pattern is useful when we want a core object that maintains
certain values and some observers that, for example, keep serialized copies
of it. This can be implemented with the @property decorator, placed before
a function's def. It controls attribute access, for example, making an
attribute read-only. Properties are used for accessing or setting data in
place of simple accessors or setters:

@property
def radius(self):
    return self.__radius
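A fuller sketch with both a getter and a validating setter (the Circle class here is illustrative):

```python
class Circle:
    def __init__(self, radius):
        self.__radius = radius

    @property
    def radius(self):
        return self.__radius

    @radius.setter
    def radius(self, value):
        # validation happens on every assignment
        if value < 0:
            raise ValueError("radius must be non-negative")
        self.__radius = value

c = Circle(3)
c.radius = 5      # goes through the setter
print(c.radius)   # 5
```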
Singleton Pattern
A class follows the singleton pattern if it allows exactly one instance of
a certain object to exist. Since Python does not have private constructors,
we use the __new__ class method to ensure that only one instance is ever
created. When we override it, we first check whether our singleton instance
was already created; if not, we create it with a superclass call:

>>> class SinEx:
...     _sing = None
...     def __new__(cls, *args, **kwargs):
...         if not cls._sing:
...             cls._sing = super().__new__(cls)
...         return cls._sing
>>> x = SinEx()
>>> x
<__main__.SinEx object at 0xb72d680c>
>>> y = SinEx()
>>> x == y
True
>>> y
<__main__.SinEx object at 0xb72d680c>
The two objects compare equal and live at the same address, so they are
the same object.
5.4
Additional Exercises
def __getitem__(self, key):
    return self.get(key)

def __setitem__(self, key, data):
    self.put(key, data)
if __name__ == "__main__":
    test_HashTable()
Part II
Chapter 6
6.1
Stacks
A stack is a linear data structure that can be accessed for either storing
or retrieving only at one of its ends (which we will refer to as the top). In
other words, access to a stack's elements is restricted; stacks are an example
of a last-in-first-out (LIFO) structure. You can think of a stack as a huge
pile of books on your desk. Stacks need to have the following operations
running in O(1):

push Insert an item at the top of the stack.

pop Remove an item from the top of the stack.
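A list-based stack meeting these requirements might be sketched as follows (append() and pop() at the end of a Python list are O(1)):

```python
class Stack:
    """A stack implemented on top of a Python list."""
    def __init__(self):
        self.items = []

    def isEmpty(self):
        return self.items == []

    def push(self, item):
        self.items.append(item)   # O(1): add at the top

    def pop(self):
        return self.items.pop()   # O(1): remove from the top

    def peek(self):
        return self.items[-1]

    def size(self):
        return len(self.items)

s = Stack()
s.push("a")
s.push("b")
print(s.pop())   # b
print(s.peek())  # a
```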
We will use a similar Node class in many examples in the rest of these notes.
def main():
    stack = StackwithNodes()
    stack.push(1)
    stack.push(2)
    stack.push(3)
    print(stack.size())
    print(stack.peek())
    print(stack.pop())
    print(stack.peek())

if __name__ == "__main__":
    main()
6.2
Queues
A queue, unlike a stack, is a structure where the first enqueued element
(at the back) will be the first one to be dequeued (when it reaches the
front), i.e., a queue is a first-in-first-out (FIFO) structure. You can think
of a queue as a line of people waiting for a roller-coaster ride. Access to a
queue's elements is also restricted, and queues should have the following
operations running in O(1):
enqueue Insert an item at the back of the queue.
dequeue Remove an item from the front of the queue.
peek/front Retrieve an item at the front of the queue without removing
it.
empty/size Check whether the queue is empty or give its size.
The example below shows a class for a queue in Python:
[adt/queues/queue.py]
class Queue(object):
    """A class for a queue."""
    def __init__(self):
        self.items = []

    def isEmpty(self):
        return self.items == []

    def enqueue(self, item):
        self.items.insert(0, item)

    def dequeue(self):
        return self.items.pop()

    def size(self):
        return len(self.items)

    def peek(self):
        if not self.isEmpty():
            return self.items[-1]
        else:
            raise Exception("Queue is empty.")

def main():
    queue = Queue()
    queue.enqueue(1)
    queue.enqueue(2)
    queue.enqueue(3)
    print(queue.size())
    print(queue.peek())
    print(queue.dequeue())
    print(queue.peek())

if __name__ == "__main__":
    main()
However, we have learned that the method insert() for lists in Python
is very inefficient (remember, lists only operate in O(1) when we append
or pop at/from their end, because otherwise all of the other elements have
to be shifted in memory). We can be smarter than that and write an
efficient queue using two stacks (two lists) instead of one:
[adt/queues/queue_from_two_stacks.py]
class Queue(object):
    """An example of a queue implemented from 2 stacks."""
    def __init__(self):
        self.in_stack = []
        self.out_stack = []

    def enqueue(self, item):
        return self.in_stack.append(item)

    def dequeue(self):
        if self.out_stack:
            return self.out_stack.pop()
        while self.in_stack:
            self.out_stack.append(self.in_stack.pop())
        if not self.out_stack:
            raise Exception("Queue empty!")
        return self.out_stack.pop()

    def size(self):
        return len(self.in_stack) + len(self.out_stack)

    def peek(self):
        if self.out_stack:
            return self.out_stack[-1]
        while self.in_stack:
            self.out_stack.append(self.in_stack.pop())
        if self.out_stack:
            return self.out_stack[-1]
        else:
            return None

def main():
    queue = Queue()
    queue.enqueue(1)
    queue.enqueue(2)
    queue.enqueue(3)
    print(queue.size())
    print(queue.peek())
    print(queue.dequeue())
    print(queue.peek())

if __name__ == "__main__":
    main()
class LinkedQueue(object):
    """A queue acting as a container for nodes (objects) that are
    inserted and removed according to FIFO."""
    def __init__(self):
        self.front = None
        self.back = None

    def isEmpty(self):
        return self.front is None

    def dequeue(self):
        if self.front:
            value = self.front.value
            self.front = self.front.next
            return value
        raise Exception("Queue is empty, cannot dequeue.")

    def enqueue(self, value):
        node = Node(value)
        if self.front:
            self.back.next = node
        else:
            self.front = node
        self.back = node
        return True
    def size(self):
        node = self.front
        num_nodes = 0
        while node:
            num_nodes += 1
            node = node.next
        return num_nodes

    def peek(self):
        return self.front.value

def main():
    queue = LinkedQueue()
    queue.enqueue(1)
    queue.enqueue(2)
    queue.enqueue(3)
    print(queue.size())
    print(queue.peek())
    print(queue.dequeue())
    print(queue.peek())

if __name__ == "__main__":
    main()
6.3
Deques
class Deque(object):
    def __init__(self):
        self.items = []

    def isEmpty(self):
        return self.items == []
    def addFront(self, item):
        self.items.append(item)

    def addRear(self, item):
        self.items.insert(0, item)

    def removeFront(self):
        return self.items.pop()

    def removeRear(self):
        return self.items.pop(0)

    def size(self):
        return len(self.items)

    def __repr__(self):
        return "{}".format(self.items)

def main():
    dq = Deque()
    dq.addFront(1)
    dq.addFront(2)
    dq.addFront(3)
    dq.addRear(40)
    dq.addRear(50)
    print(dq.size())
    print(dq)

if __name__ == "__main__":
    main()
Note that we can also specify the size of our deque: for example, we could
have written q = deque(maxlen=4) in the example above. Another
interesting method for deques is rotate(n), which rotates the deque n steps
to the right or, if n is negative, to the left.
Interestingly, deques in Python are based on a doubly linked list,2 not
on dynamic arrays. This means that operations such as inserting an item
at either end are fast (O(1)), but arbitrary index access can be slow (O(n)).
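A short illustration of maxlen and rotate(n) (the values are arbitrary):

```python
from collections import deque

q = deque(range(5), maxlen=5)  # deque([0, 1, 2, 3, 4], maxlen=5)
q.rotate(2)                    # rotate 2 steps to the right
print(q)                       # deque([3, 4, 0, 1, 2], maxlen=5)
q.rotate(-2)                   # rotate back to the left
print(q)                       # deque([0, 1, 2, 3, 4], maxlen=5)
q.append(5)                    # maxlen discards the leftmost item
print(q)                       # deque([1, 2, 3, 4, 5], maxlen=5)
```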
6.4
Heaps
Conceptually, a heap is a binary tree in which each node is smaller (larger)
than its children. We will learn about trees in the next chapters, but we
should already keep in mind that, in a balanced tree, we can repair the
structure after a modification in O(log n) time. Heaps are generally useful
for applications that repeatedly access the smallest (largest) element in a
list: a min-(max-)heap lets you find the smallest (largest) element in O(1)
and extract/add/replace it in O(log n).
2 Linked lists are another abstract data structure that we will learn about at the end
of this chapter. "Doubly" here means that their nodes have links both to the next and
to the previous node.
>>> import heapq
>>> list1 = [4, 6, 8, 1]
>>> heapq.heapify(list1)
>>> list1
[1, 4, 8, 6]

The method heapq.heappop(heap) is used to pop and return the smallest
item from the heap:

>>> list1
[1, 4, 8, 6]
>>> heapq.heappop(list1)
1
>>> list1
[4, 6, 8]
def test_Heapify():
    l1 = [3, 2, 5, 1, 7, 8, 2]
    h = Heapify(l1)
    assert(h.extract_max() == 8)
    print("Tests Passed!")

if __name__ == "__main__":
    test_Heapify()
class Item:
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return "Item({!r})".format(self.name)

def test_PriorityQueue():
    """push and pop are both O(log n)"""
    q = PriorityQueue()
    q.push(Item("test1"), 1)
    q.push(Item("test2"), 4)
    q.push(Item("test3"), 3)
    assert(str(q.pop()) == "Item('test2')")
    print("Tests passed!".center(20, "*"))

if __name__ == "__main__":
    test_PriorityQueue()
6.5
Linked Lists
A linked list is like a stack (elements added at the head) or a queue
(elements added at the tail), except that access is not restricted to the ends:
we can reach any node in the structure by following the chain of references.
In general, a linked list is simply a linear list of nodes, each containing a
value and a pointer (a reference) to the next node; the last node's reference
points to None:

>>> class Node(object):
...     def __init__(self, value=None, next=None):
...         self.value = value
...         self.next = next
We can adapt this Node class to accept some get and set methods:

class Node(object):
    def __init__(self, value):
        self.value = value
        self.next = None

    def getData(self):
        return self.value

    def getNext(self):
        return self.next

    def setData(self, newdata):
        self.value = newdata

    def setNext(self, newnext):
        self.next = newnext
def main():
    ll = LinkList()
    print(ll.length)
    ll.addNode(1)
    ll.addNode(2)
    ll.addNode(3)
    print(ll.length)
    ll.printList()
    ll.deleteNode(4)
    ll.printList()
    print(ll.length)

if __name__ == "__main__":
    main()
        ll.addNode(i)
        ll.addNode(i + 1)
    print("Linked List with duplicates:")
    ll.printList()
    print("Length before deleting duplicates is:", ll.length)
    ll.removeDupl_no_buf()
    ll.printList()
    print("Length after deleting duplicates is:", ll.length)

if __name__ == "__main__":
    main()
Linked lists have a dynamic size at runtime, and they are good when you
have an unknown number of items to store. Insertion at the head is O(1),
but deletion and searching can be O(n), because locating an element in a
linked list is done by a slow sequential search. Traversing backward and
sorting a linked list are even worse, both being O(n^2). A good trick to
delete a node i in O(1) is to copy the data from node i + 1 into node i and
then delete node i + 1.
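The trick can be sketched directly with a minimal Node class (matching the node definition earlier in the chapter); note that it cannot delete the tail node, since there is no successor to copy from:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def delete_node(node):
    """Delete 'node' (not the tail) in O(1) by copying its successor."""
    node.value = node.next.value
    node.next = node.next.next

# build 1 -> 2 -> 3, then delete the node holding 2
head = Node(1, Node(2, Node(3)))
delete_node(head.next)

values = []
n = head
while n:
    values.append(n.value)
    n = n.next
print(values)  # [1, 3]
```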
6.6
Additional Exercises
Stacks
Stacks are very useful when data has to be stored and retrieved in reverse
order. In the example below we use our Stack class to reverse a string:

[adt/stacks/reverse_string_with_stack.py]

import sys
import stack

def reverse_string_with_stack(str1):
    s = stack.Stack()
    revStr = ""
    for c in str1:
        s.push(c)
    while not s.isEmpty():
        revStr += s.pop()
    return revStr

def test_reverse_string_with_stack():
    str1 = "Buffy is a Slayer!"
    assert(reverse_string_with_stack(str1) == "!reyalS a si yffuB")
    print("Tests passed!")

if __name__ == "__main__":
    test_reverse_string_with_stack()
        if s.isEmpty():
            balanced = False
        else:
            s.pop()
        index = index + 1
    if balanced and s.isEmpty():
        return True
    else:
        return False

def test_balance_par_str_with_stack(module_name="this module"):
    print(balance_par_str_with_stack("((()))"))
    print(balance_par_str_with_stack("(()"))
    s = "Tests in {name} have {con}!"
    print(s.format(name=module_name, con="passed"))

if __name__ == "__main__":
    test_balance_par_str_with_stack()
if __name__ == "__main__":
    test_dec2bin_with_stack()
The following example implements a stack that has O(1) minimum lookup:

[adt/stacks/stack_with_min.py]

class NodeWithMin(object):
    def __init__(self, value, minimum):
        self.value = value
        self.minimum = minimum

    def __repr__(self):
        return str(self.value)

    def min(self):
        return self.minimum

class Stack(list):
    def push(self, value):
        if len(self) > 0:
            last = self[-1]
            minimum = self._find_minimum(value, last)
        else:
            minimum = value
        self.append(NodeWithMin(value, minimum))

    def _find_minimum(self, value, last_value):
        if value < last_value.minimum:
            return value
        return last_value.minimum

    def min(self):
        # the top node always carries the current minimum
        if len(self) > 0:
            return self[-1].minimum
        return None

def main():
    stack = Stack()
    stack.push(1)
    stack.push(2)
    stack.push(3)
    node = stack.pop()
    print(node.minimum)
    stack.push(0)
    stack.push(4)
    node = stack.pop()
    print(node.min())
    print(stack.min())
    print(stack)

if __name__ == "__main__":
    main()
def main():
    stack = SetOfStacks()
    stack.push(1)
    stack.push(2)
    stack.push(3)
    stack.push(4)
    stack.push(5)
    stack.push(6)
    print(stack)
    stack.pop()
    stack.pop()
    stack.pop()
    print(stack)

if __name__ == "__main__":
    main()
Queues
The example below uses the concept of a queue to rotate an array by a
given number n (moving the last n items to the front):3

[adt/queues/rotating_array.py]

def rotating_array(seq, n):
    myqueue = []
    for i in range(n):
        myqueue.append(seq.pop())
    myqueue.reverse()
    return myqueue + seq
3 We could get the same effect using collections.deque with the method rotate(n).
if __name__ == "__main__":
    test_rotating_array()
Deques
A nice application of a double-ended queue is verifying whether a string is
a palindrome:

[adt/queues/palindrome_checker_with_deque.py]

import sys
import string
import collections

def palindrome_checker_with_deque(str1):
    d = collections.deque()
    eq = True
    strip = string.whitespace + string.punctuation
    for s in str1.lower():
        if s not in strip:
            d.append(s)
    while len(d) > 1 and eq:
        first = d.pop()
        last = d.popleft()
        if first != last:
            eq = False
    return eq

def test_palindrome_checker_with_deque():
    str1 = "Madam I'm Adam"
    str2 = "Buffy is a Slayer"
    assert(palindrome_checker_with_deque(str1) == True)
    assert(palindrome_checker_with_deque(str2) == False)
    print("Tests passed!")

if __name__ == "__main__":
    test_palindrome_checker_with_deque()
[adt/heap/find_N_largest_smallest_items_seq.py]

import heapq

def find_N_largest_items_seq(seq, N):
    return heapq.nlargest(N, seq)

def find_N_smallest_items_seq(seq, N):
    return heapq.nsmallest(N, seq)

def find_smallest_items_seq_heap(seq):
    """Find the smallest item in a sequence using heapify first.
    heap[0] is always the smallest item."""
    heapq.heapify(seq)
    return heapq.heappop(seq)

def find_smallest_items_seq(seq):
    """If it is only one item, min() is faster."""
    return min(seq)

def find_N_smallest_items_seq_sorted(seq, N):
    """If N ~ len(seq), it is better to sort instead."""
    return sorted(seq)[:N]

def find_N_largest_items_seq_sorted(seq, N):
    """If N ~ len(seq), it is better to sort instead."""
    return sorted(seq)[len(seq) - N:]

def test_find_N_largest_smallest_items_seq(module_name="this module"):
    seq = [1, 3, 2, 8, 6, 10, 9]
    N = 3
    assert(find_N_largest_items_seq(seq, N) == [10, 9, 8])
    assert(find_N_largest_items_seq_sorted(seq, N) == [8, 9, 10])
    assert(find_N_smallest_items_seq(seq, N) == [1, 2, 3])
    assert(find_N_smallest_items_seq_sorted(seq, N) == [1, 2, 3])
    assert(find_smallest_items_seq(seq) == 1)
    assert(find_smallest_items_seq_heap(seq) == 1)
    s = "Tests in {name} have {con}!"
    print(s.format(name=module_name, con="passed"))

if __name__ == "__main__":
    test_find_N_largest_smallest_items_seq()
Linked List
The following example implements a linked-list class using stack methods:

[adt/linked_lists/linked_list_from_stack.py]

class Node(object):
    def __init__(self, data=None, next=None):
        self.data = data
        self.next = next

    def setnext(self, next):
        self.next = next

4 Note that the result would not be sorted if we just added both lists.
    def __str__(self):
        return "%s" % self.data

class LinkedListStack(object):
    def __init__(self, max=0):
        self.max = max
        self.head = None
        self.z = None
        self.size = 0

    def push(self, data):
        if self.size == 0:
            self.head = Node(data)
        else:
            head = self.head
            node = Node(data=data)
            self.head = node
            node.setnext(head)
        self.size += 1

    def pop(self):
        value = self.head.data
        self.head = self.head.next
        self.size -= 1
        return value

    def isEmpty(self):
        return self.size == 0

    def __str__(self):
        d = ""
        if self.isEmpty():
            return ""
        temp = self.head
        d += "%s\n" % temp
        while temp.next != None:
            temp = temp.next
            d += "%s\n" % temp
        return d

def test_ll_from_stack():
    ll = LinkedListStack(max=20)
    ll.push("1")
    ll.push("2")
    ll.push("3")
    ll.push("4")
    print(ll)
    ll.pop()
    print(ll)

if __name__ == "__main__":
    test_ll_from_stack()
Chapter 7
Asymptotic Analysis
Asymptotic analysis is a method to describe the limiting behavior and the
performance of algorithms when applied to very large input datasets. To
understand why asymptotic analysis is important, suppose you have to sort
a billion numbers (n = 10^9)1 on a common desktop computer. Suppose
this computer has a CPU clock of 1 GHz, which roughly means that it
executes 10^9 processor cycles (or operations) per second.2 Then, for an
algorithm that has a runtime of O(n^2), the sorting would need about 10^18
operations in the worst case, i.e., roughly a billion seconds, which is more
than 30 years!
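The arithmetic behind this estimate can be checked in a few lines:

```python
n = 10**9               # one billion numbers
ops = n**2              # an O(n^2) algorithm, worst case
seconds = ops / 10**9   # at 10^9 operations per second
years = seconds / (60 * 60 * 24 * 365)
print(years)            # roughly 31.7 years
```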
Another way to visualize the importance of asymptotic analysis is to
look directly at the functions' behavior. In Fig. 7 we have many classes of
functions plotted together, and it is clear that when n increases, the number
of operations for any polynomial or exponential algorithm is infeasible.
7.1
Complexity Classes
1 Remember that for memory, gigabyte means 1024^3 = 2^30 bytes, and for storage it
means 1000^3 = 10^9 bytes. Also, integers usually take 2 or 4 bytes each. However, for
this example we simplify this by saying that a number takes 1 byte.
2 In this exercise we are not considering other factors that would make the processing
slower, such as RAM latency, cache copy operations, etc.
P
The complexity class of decision problems that can be solved on a
deterministic Turing machine in polynomial time (in the worst case). If we
can turn a problem into a decision problem, the result would belong to P.

NP

The complexity class of decision problems that can be solved on a
non-deterministic Turing machine (NTM) in polynomial time. In other
words, it includes all decision problems whose yes instances can be solved
in polynomial time with an NTM. A problem is called complete if all
problems in the class can be reduced to it. Therefore, the subclass called
NP-complete (NPC) contains the hardest problems in all of NP.
Any problem that is at least as hard (as determined by polynomial-time
reduction) as any problem in NP, but that need not itself be in NP, is
called NP-hard. For example, finding the shortest route through a graph
that visits every node exactly once is NP-hard.
[Figure: the P = NP? question, an Euler diagram of the classes P, NP, co-NP, and NPC]
The class co-NP is the class of the complements of NP problems: for every yes answer we have the no, and vice versa. If NP is truly asymmetric, then these two classes are different, although there is overlap between them, because all of P lies in their intersection: both the yes and the no instances in P can be solved in polynomial time with an NTM.
What would happen if an NP-complete problem were found in the intersection of NP and co-NP? First, it would mean that all of NP would be inside co-NP, so we would have shown NP = co-NP and the asymmetry would disappear. Second, since all of P is in this intersection, we would have P = NP. If P = NP, we could solve any (decision) problem that had a practical (verifiable) solution.
However, it is (strongly) believed that NP and co-NP are different. For instance, no polynomial-time solution to the problem of factoring numbers has been found, and this problem is in both NP and co-NP.
7.2
Recursion
Recursive Relations
To describe the running time of recursive functions, we use recurrence relations:
T(n) = a T(g(n)) + f(n),
where a represents the number of recursive calls, g(n) is the size of each
subproblem to be solved recursively, and f (n) is any extra work done in the
function. The following table shows examples of recursive relations:
T(n) = T(n-1) + 1      O(n)         Processing a sequence
T(n) = T(n-1) + n      O(n^2)       Handshake problem
T(n) = 2T(n-1) + 1     O(2^n)       Towers of Hanoi
T(n) = T(n/2) + 1      O(ln n)      Binary search
T(n) = T(n/2) + n      O(n)         Randomized select
T(n) = 2T(n/2) + 1     O(n)         Tree traversal
T(n) = 2T(n/2) + n     O(n ln n)    Sort by divide and conquer
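These closed forms can be sanity-checked by evaluating the recurrences directly (a small sketch; integer halving stands in for n/2, and the base case is T(1) = 1):

```python
def solve(n, a, shrink, f):
    """Evaluate T(n) = a * T(shrink(n)) + f(n), with T(1) = 1."""
    if n <= 1:
        return 1
    return a * solve(shrink(n), a, shrink, f) + f(n)

# Binary search, T(n) = T(n/2) + 1: grows like log2(n).
print(solve(1024, 1, lambda n: n // 2, lambda n: 1))  # 11, i.e. log2(1024) + 1

# Towers of Hanoi, T(n) = 2T(n-1) + 1: grows like 2^n.
print(solve(10, 2, lambda n: n - 1, lambda n: 1))     # 1023, i.e. 2**10 - 1
```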
7.3
Runtime in Functions
We are now ready to estimate algorithm runtimes. First of all, if the algorithm has no recursive calls, we only need to analyse its data structures and flow blocks: complexities of code blocks executed one after the other are added, and complexities of nested loops are multiplied.
If the algorithm has recursive calls, we can use the recurrence relations from the previous section to find the runtime. When we write a recurrence relation for a function, we must write two equations, one for the general case and one for the base case (which should be O(1), so that T(1) = 1). Keeping this in mind, let us look at the algorithm that finds the nth element of the Fibonacci sequence, which is known to be exponential:
[general_problems/numbers/find_fibonacci_seq.py]
def find_fibonacci_seq_rec(n):
    if n < 2: return n
    return (find_fibonacci_seq_rec(n - 1) +
            find_fibonacci_seq_rec(n - 2))
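To see why caching subproblems matters (a preview of the memoization technique of Chapter 10), a sketch using the standard functools cache makes the same recursion linear:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def find_fibonacci_memo(n):
    # Each value of n is now computed only once: O(n) instead of O(2^n).
    if n < 2:
        return n
    return find_fibonacci_memo(n - 1) + find_fibonacci_memo(n - 2)

print(find_fibonacci_memo(100))  # 354224848179261915075, instantly
```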
O(1)        constant
O(ln n)     log
O(n)        linear
O(n ln n)   loglinear
O(n^2)      quadratic
O(n^k)      polynomial
O(k^n)      exponential
O(n!)       factorial
Chapter 8
Sorting
The simplest way of sorting a group of items is to start by removing the smallest item from the group and putting it first, then removing the next smallest and putting it next, and so on. This is clearly an O(n^2) algorithm, so we need to find better solutions. In this chapter we will look at many examples of sorting algorithms and analyse their characteristics and runtimes.
An in-place sort does not use any additional memory to do the sorting (for example, it swaps elements within the array). A stable sort preserves the relative order of otherwise identical data elements (for example, if two data elements have identical values, the one that was ahead of the other stays ahead). In any comparison sort problem, a key is the value (or values) that determines the sorting order. A comparison sort requires only that there is a way to determine whether a key is less than, equal to, or greater than another key. Most sorting algorithms are comparison sorts, and the worst-case running time for such sorts can be no better than O(n ln n).
8.1
Quadratic Sort
Insertion Sort
Insertion sort is a simple sorting algorithm with a best-case runtime of O(n) and average- and worst-case runtimes of O(n^2). It sorts by repeatedly inserting the next unsorted element into an initial sorted segment of the array. For small data sets, or when the list is already (mostly) sorted, it can be preferable to more advanced algorithms such as merge sort or quicksort (it is a good way to add new elements to a presorted list):
[sorting/insertion_sort.py]
def insertion_sort(seq):
    for i in range(1, len(seq)):
        j = i
        while j > 0 and seq[j-1] > seq[j]:
            seq[j-1], seq[j] = seq[j], seq[j-1]
            j -= 1
    return seq

def insertion_sort_rec(seq, i=None):
    if i is None: i = len(seq) - 1
    if i == 0: return seq
    insertion_sort_rec(seq, i-1)
    j = i
    while j > 0 and seq[j-1] > seq[j]:
        seq[j-1], seq[j] = seq[j], seq[j-1]
        j -= 1
    return seq

def test_insertion_sort():
    seq = [3, 5, 2, 6, 8, 1, 0, 3, 5, 6, 2, 5, 4, 1, 5, 3]
    assert(insertion_sort(seq) == sorted(seq))
    assert(insertion_sort_rec(seq) == sorted(seq))
    print('Tests passed!')

if __name__ == '__main__':
    test_insertion_sort()
Selection Sort
Selection sort is based on finding the smallest (or largest) element in a list and exchanging it into its final position, then finding the next one, and so on, until the end is reached. Even when the list is already sorted, it is O(n^2) (and it is not stable):
[sorting/selection_sort.py]
def selection_sort(seq):
    for i in range(len(seq) - 1, 0, -1):
        max_j = i
        for j in range(max_j):
            if seq[j] > seq[max_j]:
                max_j = j
        seq[i], seq[max_j] = seq[max_j], seq[i]
    return seq

def test_selection_sort():
    seq = [3, 5, 2, 6, 8, 1, 0, 3, 5, 6, 2]
    assert(selection_sort(seq) == sorted(seq))
    print('Tests passed!')

if __name__ == '__main__':
    test_selection_sort()
Gnome Sort
Gnome sort works by moving forward to find a misplaced value and then
moving backward to place it in the right position:
[sorting/gnome_sort.py]
def gnome_sort(seq):
    i = 0
    while i < len(seq):
        if i == 0 or seq[i-1] <= seq[i]:
            i += 1
        else:
            seq[i], seq[i-1] = seq[i-1], seq[i]
            i -= 1
    return seq

def test_gnome_sort():
    seq = [3, 5, 2, 6, 8, 1, 0, 3, 5, 6, 2, 5, 4, 1, 5, 3]
    assert(gnome_sort(seq) == sorted(seq))
    print('Tests passed!')

if __name__ == '__main__':
    test_gnome_sort()
8.2
Linear Sort
Count Sort
Counting sort sorts integers with a small value range by counting occurrences and using the cumulative counts to place the numbers directly in the result, updating the counts as it goes.
There is a loglinear limit on how fast you can sort if all you know about your data is that the items are greater or less than each other. However, if you can also count occurrences of the values, sorting becomes linear in time, O(n + k):
[sorting/count_sort.py]
from collections import defaultdict

def count_sort_dict(a):
    b, c = [], defaultdict(list)
    for x in a:
        c[x].append(x)
    for k in range(min(c), max(c) + 1):
        b.extend(c[k])
    return b

def test_count_sort():
    seq = [3, 5, 2, 6, 8, 1, 0, 3, 5, 6, 2, 5, 4, 1, 5, 3]
    assert(count_sort_dict(seq) == sorted(seq))
    print('Tests passed!')

if __name__ == '__main__':
    test_count_sort()
If several values have the same key, they keep their original order with respect to each other, so the algorithm is stable.
8.3
Loglinear Sort
Merge Sort
Merge sort divides the list in half to create two unsorted lists, which are sorted and merged by continually calling the merge-sort algorithm, until lists of size 1 are reached. The algorithm is stable, as well as fast for large data sets. However, since it is not in-place, it requires much more memory than many other algorithms: the space complexity is O(n) for arrays and O(ln n) for linked lists.² The best, average, and worst case times are all O(n ln n).
Merge sort is a good choice when the data set is too large to fit into memory. The subsets can be written to disk in separate files until they are small enough to be sorted in memory; the merging is then easy, and involves just reading single elements at a time from each file and writing them to the final file in the correct order:
[sorting/merge_sort.py]
# O(ln n) levels of recursive splitting
def merge_sort(seq):
    if len(seq) < 2: return seq
    mid = len(seq)//2
    left, right = None, None
    if seq[:mid]: left = merge_sort(seq[:mid])
    if seq[mid:]: right = merge_sort(seq[mid:])
    return merge_n(left, right)

# O(2n)
def merge_2n(left, right):
    if not left or not right:
        return left or right
    result = []
¹ Timsort, a hybrid sorting algorithm derived from merge sort and insertion sort, was invented by Tim Peters for Python.
² Never ever consider sorting a linked list, though; it is probably the worst option you have in terms of runtime complexity.
    while left and right:
        if left[-1] >= right[-1]:
            result.append(left.pop())
        else:
            result.append(right.pop())
    result.reverse()
    return (left or right) + result
# O(n)
def merge_n(left, right):
    if not left or not right:
        return left or right
    result = []
    i, j = 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    if i < len(left): result += left[i:]
    elif j < len(right): result += right[j:]
    return result
def test_merge_sort():
    seq = [3, 5, 2, 6, 8, 1, 0, 3, 5, 6, 2]
    assert(merge_sort(seq) == sorted(seq))
    print('Tests passed!')

if __name__ == '__main__':
    test_merge_sort()
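The external-merge pattern described above can be sketched with heapq.merge from the standard library, which lazily merges any number of already-sorted iterables; here small in-memory lists stand in for the sorted runs that would live in temporary files on disk:

```python
import heapq

def external_merge(sorted_runs):
    """Merge any number of already-sorted iterables (for example,
    streams of lines read back from temporary files) into one
    sorted stream."""
    return list(heapq.merge(*sorted_runs))

# Each "run" stands in for a chunk that was sorted in memory
# and written out to its own file.
runs = [[1, 4, 9], [2, 3, 10], [0, 5, 6]]
print(external_merge(runs))  # [0, 1, 2, 3, 4, 5, 6, 9, 10]
```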
Quick Sort
Quick sort works by choosing a pivot and partitioning the array so that the elements that are smaller than the pivot go to the left. Then it recursively sorts the left and right parts.
The choice of the pivot value is key to the performance. It can be shown that choosing the value in the middle of the set is the best choice for already-sorted data and no worse than most other choices for random unsorted data.
The worst case is O(n^2), occurring in the rare cases when partitioning keeps producing a region of n - 1 elements (for example, when the pivot is the minimum value). The best case produces two n/2-sized lists; this and the average case are both O(n ln n). The algorithm is not stable.
[sorting/quick_sort.py]
def quick_sort(seq):
    if len(seq) < 2: return seq
    mid = len(seq)//2
    pi = seq[mid]
    seq = seq[:mid] + seq[mid+1:]
    lo = [x for x in seq if x <= pi]
    hi = [x for x in seq if x > pi]
    return quick_sort(lo) + [pi] + quick_sort(hi)

def test_quick_sort():
    seq = [3, 5, 2, 6, 8, 1, 0, 3, 5, 6, 2]
    assert(quick_sort(seq) == sorted(seq))
    print('Tests passed!')

if __name__ == '__main__':
    test_quick_sort()
Heap Sort
Heap sort is similar to selection sort, except that the unsorted region is a heap, so finding the largest element n times gives a loglinear runtime.
In a heap, for every node other than the root, the value of the node is at least (at most) the value of its parent. Thus, the smallest (largest) element is stored at the root, and the subtrees rooted at a node contain larger (smaller) values than the node itself.
Although insertion is only O(1), the performance of validating (restoring the heap order) is O(ln n), and searching (traversing) is O(n). In Python, a heap sort can be implemented by pushing all values onto a heap and then popping off the smallest values one at a time:
[sorting/heap_sort1.py]
import heapq

def heap_sort1(seq):
    """heap sort with Python's heapq"""
    h = []
    for value in seq:
        heapq.heappush(h, value)
    return [heapq.heappop(h) for i in range(len(h))]

def test_heap_sort1():
    seq = [3, 5, 2, 6, 8, 1, 0, 3, 5, 6, 2]
    assert(heap_sort1(seq) == sorted(seq))
    print('Tests passed!')

if __name__ == '__main__':
    test_heap_sort1()
If we decide to use the Heap class that we built in the previous chapters, we can write a heap sort simply as:
[sorting/heap_sort2.py]
from heap import Heap

def heap_sort2(seq):
    heap = Heap(seq)
    res = []
    for i in range(len(seq)):
        res.insert(0, heap.extract_max())
    return res

def test_heap_sort2():
    seq = [3, 5, 2, 6, 8, 1, 0, 3, 5, 6, 2]
    assert(heap_sort2(seq) == sorted(seq))
    print('Tests passed!')

if __name__ == '__main__':
    test_heap_sort2()
8.4

8.5
Additional Exercises
Quadratic Sort
The following program implements bubble sort, a very inefficient sorting algorithm:
[searching/bubble_sort.py]
def bubble_sort(seq):
    size = len(seq) - 1
    for num in range(size, 0, -1):
        for i in range(num):
            if seq[i] > seq[i+1]:
                seq[i], seq[i+1] = seq[i+1], seq[i]
    return seq

def test_bubble_sort(module_name='this module'):
    seq = [4, 5, 2, 1, 6, 2, 7, 10, 13, 8]
    assert(bubble_sort(seq) == sorted(seq))
    s = 'Tests in {name} have {con}!'
    print(s.format(name=module_name, con='passed'))

if __name__ == '__main__':
    test_bubble_sort()
Linear Sort
The example below shows a simple counting sort for people's ages:
def counting_sort_age(A):
    oldestAge = 100
    timesOfAge = [0] * oldestAge
    ageCountSet = set()
    B = []
    for i in A:
        timesOfAge[i] += 1
        ageCountSet.add(i)
    for j in sorted(ageCountSet):  # iterate the ages in increasing order
        count = timesOfAge[j]
        while count > 0:
            B.append(j)
            count -= 1
    return B
The example below uses the quick sort partition step (quickselect) to find the k-th largest element in a sequence:
[sorting/find_k_largest_seq_quicksort.py]
import random

def swap(A, x, y):
    A[x], A[y] = A[y], A[x]

def qselect(A, k, left=None, right=None):
    left = left or 0
    right = right or len(A) - 1
    pivot = random.randint(left, right)
    pivotVal = A[pivot]
    swap(A, pivot, right)
    swapIndex, i = left, left
    while i <= right - 1:
        if A[i] < pivotVal:
            swap(A, i, swapIndex)
            swapIndex += 1
        i += 1
    swap(A, right, swapIndex)
    rank = len(A) - swapIndex
    if k == rank:
        return A[swapIndex]
    elif k < rank:
        return qselect(A, k, left=swapIndex+1, right=right)
    else:
        return qselect(A, k, left=left, right=swapIndex-1)
Chapter 9
Searching
The most common searching algorithms are the sequential search and the binary search. If the input array is not sorted, or if the input elements are accommodated in dynamic containers (such as linked lists), the search has to be sequential. If the input is a sorted array, the binary search algorithm is the best choice. If we are allowed to use auxiliary memory, a hash table can help the search: with it, a value can be located in O(1) time by its key.
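The trade-offs can be seen side by side; a small sketch contrasting a sequential scan, a binary search via the bisect module, and a hash-based set:

```python
from bisect import bisect_left

data = [8, 3, 15, 1, 12, 7]

# Sequential search: O(n), works on unsorted data.
found_seq = 12 in data

# Binary search: O(ln n) lookups, but the data must be sorted first.
s = sorted(data)
i = bisect_left(s, 12)
found_bin = i < len(s) and s[i] == 12

# Hash table: O(1) expected lookup, at the cost of extra memory.
found_hash = 12 in set(data)

print(found_seq, found_bin, found_hash)  # True True True
```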
9.1
Sequential Search
[searching/sequential_search.py]
def sequential_search(seq, n):
    for item in seq:
        if item == n: return True
    return False

def test_sequential_search(module_name='this module'):
    seq = [1, 2, 4, 5, 6, 8, 10]
    n1, n2 = 10, 7
    assert(sequential_search(seq, n1) == True)
    assert(sequential_search(seq, n2) == False)
    s = 'Tests in {name} have {con}!'
    print(s.format(name=module_name, con='passed'))

if __name__ == '__main__':
    test_sequential_search()
Now, if we sort the sequence first, we can improve the sequential search so that, when the item is not present, it has the same runtime as when the item is present:
[searching/ordered_sequential_search.py]
def ordered_sequential_search(seq, n):
    for item in seq:
        if item > n: return False
        if item == n: return True
    return False

def test_ordered_sequential_search(module_name='this module'):
    seq = [1, 2, 4, 5, 6, 8, 10]
    n1 = 10
    n2 = 7
    assert(ordered_sequential_search(seq, n1) == True)
    assert(ordered_sequential_search(seq, n2) == False)
    s = 'Tests in {name} have {con}!'
    print(s.format(name=module_name, con='passed'))

if __name__ == '__main__':
    test_ordered_sequential_search()
9.2
Binary Search
A binary search finds the position of a specified input value (the key) within a sorted array. In each step, the algorithm compares the search key with the key of the middle element of the array. If the keys match, the item's index (position) is returned. Otherwise, if the search key is less than the middle element's key, the algorithm repeats the process on the left subarray; if the search key is greater, on the right subarray. Each step halves the search space, so the algorithm runs in O(ln n) time.
Python's bisect module provides this kind of search for sorted sequences. Note that the module returns the index after the key, which is where you should place the new value; other available functions are bisect_right and bisect_left.
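A short illustration of the bisect module on a sorted sequence:

```python
from bisect import bisect_left, bisect_right, insort

seq = [1, 2, 4, 4, 6, 8]

print(bisect_left(seq, 4))   # 2: position of the first 4
print(bisect_right(seq, 4))  # 4: position just after the last 4
print(bisect_right(seq, 5))  # 4: where a missing key would be inserted

insort(seq, 5)  # insert while keeping the list sorted
print(seq)      # [1, 2, 4, 4, 5, 6, 8]
```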
9.3
Additional Exercises
Searching in a Matrix
The following module searches for an entry in a matrix whose rows and columns are sorted. In this case, every row is increasingly sorted from left to right, and every column is increasingly sorted from top to bottom. The runtime is linear, O(m + n):
[general_problems/numbers/search_entry_matrix.py]
def find_elem_matrix_bool(m1, value):
    found = False
    row = 0
    col = len(m1[0]) - 1
    while row < len(m1) and col >= 0:
        if m1[row][col] == value:
            found = True
            break
        elif m1[row][col] > value:
            col -= 1
        else:
            row += 1
    return found

def test_find_elem_matrix_bool(module_name='this module'):
    m1 = [[1,2,8,9], [2,4,9,12], [4,7,10,13], [6,8,11,15]]
    assert(find_elem_matrix_bool(m1, 8) == True)
    assert(find_elem_matrix_bool(m1, 3) == False)
    m2 = [[0]]
    assert(find_elem_matrix_bool(m2, 0) == True)
    s = 'Tests in {name} have {con}!'
    print(s.format(name=module_name, con='passed'))

if __name__ == '__main__':
    test_find_elem_matrix_bool()
[searching/searching_in_a_matrix.py]
import numpy
def searching_in_a_matrix(m1, value):
rows = len(m1)
cols = len(m1[0])
lo = 0
hi = rows*cols
while lo < hi:
mid = (lo + hi)//2
row = mid//cols
col = mid%cols
v = m1[row][col]
if v == value: return True
elif v > value: hi = mid
else: lo = mid+1
return False
def test_searching_in_a_matrix():
a = [[1,3,5],[7,9,11],[13,15,17]]
b = numpy.array([(1,2),(3,4)])
assert(searching_in_a_matrix(a, 13) == True)
assert(searching_in_a_matrix(a, 14) == False)
assert(searching_in_a_matrix(b, 3) == True)
assert(searching_in_a_matrix(b, 5) == False)
print(Tests passed!)
if __name__ == __main__:
test_searching_in_a_matrix()
Unimodal Arrays
An array is unimodal if it consists of an increasing sequence followed by a decreasing sequence. The example below shows how to find the local maximum of such an array using binary search:
[searching/find_max_unimodal_array.py]
def find_max_unimodal_array(A):
    if len(A) <= 2: return None
    left = 0
    right = len(A) - 1
    while right > left + 1:
        mid = (left + right)//2
        if A[mid] > A[mid-1] and A[mid] > A[mid+1]:
            return A[mid]
        elif A[mid] > A[mid-1] and A[mid] < A[mid+1]:
            left = mid
        else:
            right = mid
    return None

def test_find_max_unimodal_array():
    seq = [1, 2, 5, 6, 7, 10, 12, 9, 8, 7, 6]
    assert(find_max_unimodal_array(seq) == 12)
    print('Tests passed!')

if __name__ == '__main__':
    test_find_max_unimodal_array()
Intersection of Arrays
The snippet below shows three ways to perform the intersection of two sorted arrays. The simplest way is to use sets; however, this will not preserve the ordering. The second example uses an adaptation of merge sort, and the third example is suitable when one of the arrays is much larger than the other; in that case binary search is the best option:
[searching/intersection_two_arrays.py]
def intersection_two_arrays_sets(seq1, seq2):
    """find the intersection of two arrays using set properties"""
    set1 = set(seq1)
    set2 = set(seq2)
    return set1.intersection(set2)  # same as set1 & set2
Chapter 10
Dynamic Programming
Dynamic programming is used to simplify a complicated problem by breaking it down into simpler subproblems by means of recursion. If a problem has an optimal substructure and overlapping subproblems, it may be solved by dynamic programming.
Optimal substructure means that the solution to a given optimization problem can be obtained from a combination of optimal solutions to its subproblems. The first step in applying dynamic programming is to check whether the problem exhibits such optimal substructure. The second step is to exploit the overlapping subproblems: each subproblem is solved only once and its solution is stored to be retrieved later. At each step a choice is made that depends on the solutions to subproblems, typically in a bottom-up manner, from smaller subproblems to larger ones.
10.1
Memoization
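The exercises below rely on a memo decorator; a minimal sketch of such a helper (the book's own version may differ in its details):

```python
from functools import wraps

def memo(func):
    """Cache the result of each distinct tuple of arguments."""
    cache = {}
    @wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper

@memo
def fib(n):
    # With the cache, each n is computed once: linear, not exponential.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025
```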
10.2
Additional Exercises
from itertools import combinations
from bisect import bisect

def naive_longest_inc_subseq(seq):
    """exponential solution to the longest increasing subsequence problem"""
    for length in range(len(seq), 0, -1):
        for sub in combinations(seq, length):
            if list(sub) == sorted(sub):
                return len(sub)

def longest_inc_subseq1(seq):
    """iterative solution for the longest increasing subsequence problem"""
    end = []
    for val in seq:
        idx = bisect(end, val)
        if idx == len(end): end.append(val)
        else: end[idx] = val
    return len(end)

def longest_inc_subseq2(seq):
    """another iterative algorithm for the longest increasing subsequence problem"""
¹ See other versions of this problem at the end of the chapter about lists in Python.
def memoized_longest_inc_subseq(seq):
    """memoized recursive solution to the longest increasing subsequence problem"""
    @memo
    def L(cur):
        res = 1
        for pre in range(cur):
            if seq[pre] <= seq[cur]:
                res = max(res, 1 + L(pre))
        return res
    return max(L(i) for i in range(len(seq)))

@benchmark
def test_naive_longest_inc_subseq():
    print(naive_longest_inc_subseq(s1))

@benchmark
def test_longest_inc_subseq1():
    print(longest_inc_subseq1(s1))

@benchmark
def test_longest_inc_subseq2():
    print(longest_inc_subseq2(s1))

@benchmark
def test_memoized_longest_inc_subseq():
    print(memoized_longest_inc_subseq(s1))

if __name__ == '__main__':
    from random import randrange
    s1 = [randrange(100) for i in range(40)]
    print(s1)
    test_naive_longest_inc_subseq()
    test_longest_inc_subseq1()
    test_longest_inc_subseq2()
    test_memoized_longest_inc_subseq()
Part III
Chapter 11
Introduction to Graphs
11.1
Basic Definitions
Degree of a Node
The number of edges incident on a node is called its degree. Nodes of degree zero are called isolated. For directed graphs, we can split this number into the in-degree (incoming edges, from parents) and the out-degree (outgoing edges, to children).
Paths, Walks, and Cycles
A path in G is a subgraph where the edges connect the nodes in a sequence,
without revisiting any node. In a directed graph, a path has to follow the
directions of the edges.
A walk is an alternating sequence of nodes and edges that allows nodes
and edges to be visited multiple times.
A cycle is like a path except that the last edge links the last node to the
first.
Length of a Path
The length of a path or walk is the value given by its edge count.
Weight of an Edge
Associating weights with each edge in G gives us a weighted graph. The
weight of a path or cycle is the sum of its edge weights. So, for unweighted
graphs, it is simply the number of edges.
Planar Graphs
A graph that can be drawn in the plane without crossing edges is called planar. Such a drawing divides the plane into regions (faces), which are areas bounded by the edges. Euler's formula for connected planar graphs says that V - E + F = 2, where V, E, F are the numbers of nodes, edges, and regions, respectively.
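As a quick check of the formula: a triangle has V = 3 and E = 3, and its drawing has two regions (the inner face plus the unbounded outer one):

```python
def euler_check(v, e, f):
    # Euler's formula for a connected planar graph: V - E + F = 2
    return v - e + f == 2

print(euler_check(3, 3, 2))  # triangle: True
print(euler_check(4, 6, 4))  # K4 drawn without crossings: True
```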
Graph Traversal
A traversal is a walk through all the connected components of a graph. The main difference between graph traversals is the ordering of the to-do list of the unvisited nodes that have already been discovered.
11.2
Adjacency Lists
In an adjacency list, for each node we have access to a list (or set, or some other container or iterable) of its neighbors. Supposing we have n nodes, each adjacency (or neighbor) list is just a list of such node numbers. We place the lists in a main list of size n, indexable by the node numbers, where the order of the neighbors is usually arbitrary.
Using Sets as Adjacency Lists:
We can use Python's set type to implement adjacency lists:
>>> a,b,c,d,e,f = range(6) # nodes
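A sketch of such a set-based adjacency list; the neighbor sets here mirror the keys of the weighted-dictionary example shown further below:

```python
a, b, c, d, e, f = range(6)  # nodes

# Each position holds the set of neighbors of that node
# (assumed here to match the dictionary version of this graph).
N = [{b, c, d, f}, {a, d, f}, {a, b, d, e},
     {a, e}, {a, b, c}, {b, c, d, e}]

print(b in N[a])  # membership test -> True
print(len(N[f]))  # degree of f    -> 4
```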
Deleting objects from the middle of a Python list is O(n), but deleting from the end is only O(1). If the order of the neighbors is not important, you can delete an arbitrary neighbor in O(1) time by swapping it with the last item in the list and then calling pop().
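A sketch of that swap-and-pop trick (note that it does not preserve the order of the remaining neighbors):

```python
def remove_neighbor(neighbors, i):
    """Delete neighbors[i] in O(1) by overwriting it with the last
    element and popping the end; the list order is not preserved."""
    neighbors[i] = neighbors[-1]
    neighbors.pop()
    return neighbors

print(remove_neighbor([2, 5, 7, 9], 1))  # [2, 9, 7]
```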
Using Dictionaries as Adjacency Lists:
Finally, we can use dictionaries as adjacency lists. In this case, the neighbors are the keys, and we can associate each of them with some extra value, such as an edge weight:
>>> a, b, c, d, e, f = range(6)  # nodes
>>> N = [{b:2, c:1, d:4, f:1}, {a:4, d:1, f:4}, {a:1, b:1, d:2, e:4},
...      {a:3, e:2}, {a:3, b:4, c:1}, {b:1, c:2, d:4, e:3}]
>>> b in N[a] # membership
True
>>> len(N[f]) # degree
4
Adjacency Matrices
In an adjacency matrix, instead of listing all the neighbors for each node, we have one row with one position for each possible neighbor, filled with truth values (here 0 for False and 1 for True). The simplest implementation of an adjacency matrix is given by nested lists. Note that the diagonal is always 0, since a node is not its own neighbor:
>>> a, b, c, d, e, f = range(6)  # nodes
>>> N = [[0,1,1,1,0,1], [1,0,0,1,0,1], [1,1,0,1,1,0],
...      [1,0,0,0,1,0], [1,1,1,0,0,0], [0,1,1,1,1,0]]
>>> N[a][b] # membership
1
>>> N[a][e]
0
>>> sum(N[f]) # degree
4
11.3
Introduction to Trees
Representing Trees
The simplest way of representing a tree is with nested lists:
>>> T = ['a', ['b', ['d', 'f']], ['c', ['e', 'g']]]
>>> T[0]
'a'
>>> T[1][0]
'b'
>>> T[1][1][0]
'd'
>>> T[1][1][1]
'f'
>>> T[2][0]
'c'
>>> T[2][1][1]
'g'
In the next chapter we will learn how to improve this class, adding many features and methods that a tree can hold. For now, it is useful to keep in mind that when we are prototyping data structures such as trees, we should always be able to come up with a flexible class that can take arbitrary attributes in its constructor. The following program implements what is referred to as a bunch class, a generic tool that is a specialization of Python's dict class and that lets you create and set arbitrary attributes on the fly:
[trees/simple_trees/bunchclass.py]
class BunchClass(dict):
    def __init__(self, *args, **kwds):
        super(BunchClass, self).__init__(*args, **kwds)
        self.__dict__ = self

def main():
    """
    {'right': {'right': 'Xander', 'left': 'Willow'},
     'left': {'right': 'Angel', 'left': 'Buffy'}}
    """
    bc = BunchClass  # notice the absence of ()
    tree = bc(left=bc(left="Buffy", right="Angel"),
              right=bc(left="Willow", right="Xander"))
    print(tree)

if __name__ == '__main__':
    main()
In the example above, the function arguments *args and **kwds can hold an arbitrary number of positional arguments and an arbitrary number of keyword arguments, respectively.
Chapter 12
Binary Trees
12.1
Basic Concepts
Binary trees are tree data structures in which each node has at most two child nodes: the left and the right. Child nodes may contain references to their parents. The root of a tree (the ancestor of all nodes) can be stored either inside or outside the tree structure.
Binary trees can be seen as a way of passing an initial number n of tokens down the tree, meaning that at any level the tokens held by the nodes across that level sum to n. The degree of every node is at most two. Suppose that an arbitrary rooted tree has m internal nodes, that each internal node has exactly two children, and that the tree has n leaves. Counting the edges by parent (2m) and by child ((n + m) - 1, since every node but the root is a child) gives

2m = (n + m) - 1, so m = n - 1,

i.e., the number of internal nodes is one less than the number of leaves; more generally, a tree with n nodes has exactly n - 1 branches (edges).
12.2
Representing Binary Trees
The simplest (and silliest) way to represent a binary tree is using Python's lists. The following module constructs a list with a root and two empty sublists for the children. To add a left subtree to the root of a tree, we insert a new list into the second position of the root list. Note that this algorithm is not very efficient, due to the restrictions that Python's lists have on inserting and popping in the middle:
Figure 12.1: The height (h) and width (number of leaves) of a (perfectly
balanced) binary tree.
[trees/binary_trees/BT_lists.py]
def BinaryTreeList(r):
    return [r, [], []]

def insertLeft(root, newBranch):
    t = root.pop(1)
    if len(t) > 1:
        root.insert(1, [newBranch, t, []])
    else:
        root.insert(1, [newBranch, [], []])
    return root

def insertRight(root, newBranch):
    t = root.pop(2)
    if len(t) > 1:
        root.insert(2, [newBranch, [], t])
    else:
        root.insert(2, [newBranch, [], []])
    return root

def getRootVal(root):
    return root[0]

def setRootVal(root, newVal):
    root[0] = newVal

def getLeftChild(root):
    return root[1]

def getRightChild(root):
    return root[2]

def main():
    """
    3
    [5, [4, [], []], []]
    [7, [], [6, [], []]]
    """
    r = BinaryTreeList(3)
    insertLeft(r, 4)
    insertLeft(r, 5)
    insertRight(r, 6)
    insertRight(r, 7)
    print(getRootVal(r))
    print(getLeftChild(r))
    print(getRightChild(r))

if __name__ == '__main__':
    main()
However, this method is not very practical when we have many branches (or at least it needs many improvements, for example in how it manages the creation of new lists and how it displays or searches for elements).
A more natural way to handle binary trees is (again) to represent them as a collection of nodes. A simple node in a binary tree should carry attributes for a value and for the left and right children, and it can have a method to identify leaves:
[trees/binary_trees/BT.py]
class BT(object):
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

    def is_leaf(self):
        return not self.left and not self.right

def tests_BT():
    """
          1
       2     3
      4 5   6 7
    """
    tree = BT(1)
    tree.insert_left(2)
    tree.insert_right(3)
    tree.left().insert_left(4)
    tree.left().insert_right(5)
    tree.right().insert_left(6)
    tree.right().insert_right(7)
    print(tree.right().right())
    tree.right().right().value(8)
    print(tree.right().right())
    assert(tree.right().is_leaf() == False)
    assert(tree.right().right().is_leaf() == True)
    print("Tests Passed!")

if __name__ == '__main__':
    tests_BT()
12.3
Binary Search Trees
A binary search tree (BST) is a node-based binary tree data structure with the following properties:
1. The left subtree of a node contains only nodes with keys less than the node's key.
2. The right subtree of a node contains only nodes with keys greater than the node's key.
3. Both the left and right subtrees must also be binary search trees.
4. There must be no duplicate nodes.
If the binary search tree is balanced, the following operations are O(ln n): (i) finding a node with a given value (lookup), (ii) finding the node with the maximum or minimum value, and (iii) insertion or deletion of a node.
def main():
    """
          4
       2     6
      1 3   5 7
    """
    tree = BST()
    tree.insert(4)
    tree.insert(2)
    tree.insert(6)
    tree.insert(1)
    tree.insert(3)
    tree.insert(7)
    tree.insert(5)
    print(tree.get_right())
    print(tree.get_right().get_left())
    print(tree.get_right().get_right())
    print(tree.get_left())
    print(tree.get_left().get_left())
    print(tree.get_left().get_right())
    assert(tree.find(30) == None)

if __name__ == '__main__':
    main()
There are many other ways in which a tree can be created. We could, for instance, use two classes: one simply for the nodes, and a second one that controls these nodes. This is not much different from the previous example (and at the end of this chapter we will see a third, hybrid, example of these two):
[trees/binary_trees/BST_with_Nodes.py]
class Node(object):
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

    def __repr__(self):
        return '{}'.format(self.value)

class BSTwithNodes(object):
    def __init__(self):
        self.root = None

    def insert(self, value):
        if not self.root:
            self.root = Node(value)
        else:
            current = self.root
            while True:
                if value < current.value:
                    if current.left:
                        current = current.left
                    else:
                        current.left = Node(value)
                        break
                elif value > current.value:
                    if current.right:
                        current = current.right
                    else:
                        current.right = Node(value)
                        break
                else:
                    break  # duplicate values are ignored

def main():
    """
    BST:
          4
       2     6
      1 3   5 7
    """
    tree = BSTwithNodes()
    l1 = [4, 2, 6, 1, 3, 7, 5]
    for i in l1: tree.insert(i)
    print(tree.root)
    print(tree.root.right)
    print(tree.root.right.left)
    print(tree.root.right.right)
    print(tree.root.left)
    print(tree.root.left.left)
    print(tree.root.left.right)

if __name__ == '__main__':
    main()
12.4
Self-Balancing BST
A balanced tree is a tree in which the difference between the heights of the subtrees of every node is at most one. A self-balancing binary search tree is any node-based binary search tree that automatically keeps itself balanced. By applying a balance condition, we ensure that the worst-case runtime of the common tree operations will be at most O(ln n).
Balancing Factor of a Tree
A balancing factor can be attributed to each internal node of a tree, being the difference between the heights of its left and right subtrees. There are many balancing methods for trees, but they are usually based on two operations:
- Node splitting (and merging): nodes are not allowed to have more than two children, so when a node becomes overfull, it splits into two subnodes.
- Node rotations: the process of switching edges. If x is the parent of y, we make y the parent of x, and x must take over one of the children of y.
AVL Trees
An AVL tree is a binary search tree with a self-balancing condition: the difference between the heights of the left and right subtrees of any node cannot be more than one.
To implement an AVL tree, we can start by adding a self-balancing method to our BST classes, called every time we add a new node to the tree. The method works by continuously checking the height of the tree, which is stored as a new attribute:
def height(node):
    if node is None:
        return -1
    else:
        return node.height


def update_height(node):
    node.height = max(height(node.left), height(node.right)) + 1
Now we can go ahead and implement the rebalancing method for our
tree. The method checks whether the difference between the new heights
of the right and left subtrees is at most 1. If this is not true, the method
performs the rotations:
def rebalance(self, node):
    # Nodes are assumed to carry 'parent' and 'height' attributes.
    while node is not None:
        update_height(node)
        if height(node.left) >= 2 + height(node.right):
            if height(node.left.left) >= height(node.left.right):
                self.right_rotate(node)
            else:
                self.left_rotate(node.left)
                self.right_rotate(node)
        elif height(node.right) >= 2 + height(node.left):
            if height(node.right.right) >= height(node.right.left):
                self.left_rotate(node)
            else:
                self.right_rotate(node.right)
                self.left_rotate(node)
        node = node.parent
We are now ready to write the entire AVL tree class! In the following
code we have used our old BST class as a superclass, together with the
methods we have described above. In addition, two methods for traversals
were used; we will explain them in more detail in the next chapter. For now,
it is good to keep the example in mind, and to know that this AVL tree indeed
supports insert, find, and delete-min operations in O(log n) time:
[trees/binary_trees/avl.py]
from BST_with_Nodes import BSTwithNodes, Node


class AVL(BSTwithNodes):
    # Note: the rotations assume each node also carries 'parent' and
    # 'height' attributes, maintained on insertion.
    def __init__(self):
        self.root = None

    def left_rotate(self, x):
        y = x.right
        y.parent = x.parent
        if y.parent is None:
            self.root = y
        else:
            if y.parent.left is x:
                y.parent.left = y
            elif y.parent.right is x:
                y.parent.right = y
        x.right = y.left
        if x.right is not None:
            x.right.parent = x
        y.left = x
        x.parent = y
        update_height(x)
        update_height(y)

    def right_rotate(self, x):
        y = x.left
        y.parent = x.parent
        if y.parent is None:
            self.root = y
        else:
            if y.parent.left is x:
                y.parent.left = y
            elif y.parent.right is x:
                y.parent.right = y
        x.left = y.right
        if x.left is not None:
            x.left.parent = x
        y.right = x
        x.parent = y
        update_height(x)
        update_height(y)
Red-black Trees
Red-black trees are an evolution of binary search trees that aims to keep the
tree balanced without affecting the complexity of the primitive operations.
This is done by coloring each node in the tree either red or black and
preserving a set of properties that guarantees that the deepest path in the
tree is no longer than twice the shortest one.
Red-black trees have the following properties:
• Every node is colored either red or black.
• All leaf (nil) nodes are colored black; if a node's child is missing,
then we assume that it has a nil child in that place, and this nil
child is always colored black.
• Both children of a red node must be black nodes.
• Every path from a node n to a descendant leaf has the same number
of black nodes (not counting node n). We call this number the black
height of n.
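These properties can be checked mechanically. Below is a small sketch (not from the book; RBNode and check_rb are hypothetical names) that walks a hand-built tree, treats missing children as black nil leaves, and verifies the red-child and black-height rules:

```python
class RBNode(object):
    def __init__(self, value, color, left=None, right=None):
        self.value = value
        self.color = color  # 'red' or 'black'
        self.left = left
        self.right = right


def check_rb(node):
    """Return the black height of the subtree, or raise if a property fails."""
    if node is None:
        return 1  # nil leaves count as one black node
    if node.color == 'red':
        # Both children of a red node must be black (nil counts as black).
        for child in (node.left, node.right):
            if child is not None and child.color == 'red':
                raise ValueError('red node with a red child')
    left_bh = check_rb(node.left)
    right_bh = check_rb(node.right)
    if left_bh != right_bh:
        raise ValueError('unequal black heights')
    return left_bh + (1 if node.color == 'black' else 0)


# A small valid red-black tree: a black root with two red children.
root = RBNode(2, 'black', RBNode(1, 'red'), RBNode(3, 'red'))
print(check_rb(root))  # 2 (the black root plus the nil leaves on every path)
```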
Binary Heaps
Binary heaps are complete balanced binary trees. The heap property makes
it easier to maintain the structure, i.e., the balance of the tree. There is no
need to modify the structure of the tree by splitting or rotating nodes in a
heap: the only operation needed is swapping parent and child nodes.
In a binary heap, the root (the smallest or largest element) is always
found in h[0]. Considering a node at index i:
• the parent index is (i - 1)//2;
• the left child index is 2*i + 1 and the right child index is 2*i + 2.
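This index arithmetic can be checked against Python's own list-based min-heap, the standard-library heapq module (not the book's code):

```python
import heapq

h = []
for value in [9, 4, 7, 1, 3]:
    heapq.heappush(h, value)

print(h[0])  # 1 -- the smallest element is always at the root, h[0]

# For every node i > 0, the parent at (i - 1) // 2 is no larger than h[i]:
for i in range(1, len(h)):
    assert h[(i - 1) // 2] <= h[i]
```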
12.5 Additional Exercises
[Diagram: an unbalanced tree whose nodes 3, 5, 7, and 9 sit on successive levels, from level 0 down to level 4.]
    def __repr__(self):
        """Private method for this class' string representation"""
        return "{}".format(self.item)

    def _getDFTpreOrder(self, node):
        """Traversal pre-order, O(n)"""
        if node:
            if node.item: self.traversal.append(node.item)
            self._getDFTpreOrder(node.left)
            self._getDFTpreOrder(node.right)
        return self

    def _printDFTpreOrder(self, noderoot):
        """Fill the pre-order traversal array"""
        self.traversal = []
        self._getDFTpreOrder(noderoot)
        return self.traversal

    def _getDFTinOrder(self, node):
        """Traversal in-order, O(n)"""
        if node:
            self._getDFTinOrder(node.left)
            if node.item: self.traversal.append(node.item)
            self._getDFTinOrder(node.right)
class BinaryTree(object):
    """
    >>> bt = BinaryTree()
    >>> for i in range(1, 10): bt.addNode(i)
    >>> bt.hasNode(7)
    True
    >>> bt.hasNode(12)
    False
    >>> bt.printTree()
    [1, 2, 4, 6, 8, 9, 7, 5, 3]
    >>> bt.printTree("pre")
    [1, 2, 4, 6, 8, 9, 7, 5, 3]
    >>> bt.printTree("bft")
    [1, 2, 3, 4, 5, 6, 7, 8, 9]
    >>> bt.printTree("post")
    [8, 9, 6, 7, 4, 5, 2, 3, 1]
    >>> bt.printTree("in")
    [8, 6, 9, 4, 7, 2, 5, 1, 3]
    """
    def __init__(self):
        """Constructor for the Binary Tree, which is a container of Nodes"""
        self.root = None
    def __repr__(self):
        """Private method for this class' string representation"""
        return "{}".format(self.item)

    def printTree(self, order="pre"):
        """Print the tree in the chosen order"""
        if self.root:
            if order == "pre": return self.root._printDFTpreOrder(self.root)
            elif order == "in": return self.root._printDFTinOrder(self.root)
            elif order == "post": return self.root._printDFTpostOrder(self.root)
            elif order == "bft": return self.root._printBFT(self.root)
        else:
            raise Exception("Tree is empty")

    def hasNode(self, value):
        """Verify whether the node is in the tree"""
        return bool(self.root._findNode(value))

    def isLeaf(self, value):
        """Return True if the node is a leaf"""
        node = self.root._searchForNode(value)
        return node._isLeaf()

    def getNodeLevel(self, item):
        """Return the level of the node, best O(1), worst O(n)"""
        node = self.root._searchForNode(item)
        if node: return node.level
        else: raise Exception("Node not found")

    def getSizeTree(self):
        """Return how many nodes are in the tree, O(n)"""
        return len(self.root._printDFTpreOrder(self.root))
def isRoot(self, value):
200
201
202
if __name__ == "__main__":
    import doctest
    doctest.testmod()
[Diagram: an example BST containing 9, 5, 8, 6, and 10, with nodes on levels 0 through 3.]
class NodeBST(NodeBT):
    def _addNextNode(self, value, level_here=1):
        """Aux for self.addNode(value): for a BST, best O(1), worst O(log n)"""
        self.traversal = []
        new_node = NodeBST(value, level_here)
        if not self.item:
            self.item = new_node
        elif value < self.item:
            self.left = (self.left and
                         self.left._addNextNode(value, level_here + 1)
                         or new_node)
        else:
            self.right = (self.right and
                          self.right._addNextNode(value, level_here + 1)
                          or new_node)
        return self
class BinarySearchTree(BinaryTree):
Chapter 13

13.1 Depth-First Search
Postorder: Visit a node after traversing all its subtrees (left right root):

def postorder(root):
    if root is not None:
        yield from postorder(root.left)
        yield from postorder(root.right)
        yield root.value

Inorder: Visit a node after traversing its left subtree but before the right
subtree (left root right):

def inorder(root):
    if root is not None:
        yield from inorder(root.left)
        yield root.value
        yield from inorder(root.right)
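For completeness, the preorder case (root left right) follows the same generator pattern. Note that when a traversal is written as a generator, recursive calls must be delegated with yield from, otherwise their values are silently discarded. A self-contained sketch (the simple Node class here is a stand-in for the book's node classes):

```python
class Node(object):
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right


def preorder(root):
    """Visit a node before traversing its subtrees (root left right)."""
    if root is not None:
        yield root.value
        yield from preorder(root.left)
        yield from preorder(root.right)


tree = Node(2, Node(1), Node(3))
print(list(preorder(tree)))  # [2, 1, 3]
```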
13.2 Breadth-First Search
Traditionally, BFSs are implemented using a list to store the values
of the visited nodes and a FIFO queue to store those nodes that have
yet to be visited. The total runtime is also O(V + E).
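As a sketch of that description (not the book's code; it uses collections.deque from the standard library as the FIFO queue, and the Node class is a stand-in for the book's node classes):

```python
from collections import deque


class Node(object):
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right


def bfs(root):
    """Return node values in breadth-first (level) order, O(n) for a tree."""
    visited = []
    queue = deque([root] if root else [])
    while queue:
        node = queue.popleft()      # FIFO: oldest discovered node first
        visited.append(node.value)
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return visited


tree = Node(4, Node(2, Node(1), Node(3)), Node(6, Node(5), Node(7)))
print(bfs(tree))  # [4, 2, 6, 1, 3, 5, 7]
```

A deque gives O(1) popleft; popping from the front of a plain list, as in the class below, costs O(n) per operation.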
13.3
There are many ways we could write traversals. In the following code we
use the BST-with-nodes class, defined in the last chapter, to implement
each of the traversal algorithms. For the DFS cases, we have also tested two
different methods:
[trees/traversals/BST_with_Nodes_traversal.py]
from BST_with_Nodes import BSTwithNodes, Node


class BSTTraversal(BSTwithNodes):
    def __init__(self):
        self.root = None
        self.nodes_BFS = []
        self.nodes_DFS_pre = []
        self.nodes_DFS_post = []
        self.nodes_DFS_in = []

    def BFS(self):
        self.root.level = 0
        queue = [self.root]
        current_level = self.root.level
        while len(queue) > 0:
            current_node = queue.pop(0)
            if current_node.level > current_level:
                current_level += 1
            self.nodes_BFS.append(current_node.value)
            if current_node.left:
                current_node.left.level = current_level + 1
                queue.append(current_node.left)
            if current_node.right:
                current_node.right.level = current_level + 1
                queue.append(current_node.right)
13.4 Additional Exercises
    def preorder2(self):
        self.nodes = []
        current = self.bst
        stack = []
        while len(stack) > 0 or current is not None:
            if current is not None:
                self.nodes.append(current.value)
                stack.append(current)
                current = current.left
            else:
                current = stack.pop()
                current = current.right
        return self.nodes
def main():
    """
    BST:
            10
        5        15
      1   6   11    50
    """
    t = TranversalBST()
    t.insert(10)
    t.insert(5)
    t.insert(15)
    t.insert(1)
    t.insert(6)
    t.insert(11)
    t.insert(50)
    print(t.preorder())
    print(t.preorder2())
    print(t.inorder())


if __name__ == "__main__":
    main()
        50
            60
                70
                    80
    """
    t = BSTwithExtra()
    l1 = [10, 5, 15, 1, 6, 11, 50, 60, 70, 80]
    for i in l1:
        t.insert(i)
    print(t.inorder())
    print(t.preorder())
    assert(t.get_max_depth() == 5)
    assert(t.get_min_depth() == 2)
    assert(t.is_balanced() == 3)
    assert(t.get_inorder(10) == 3)
    assert(t.get_preorder(10) == 0)
    """
    1
    2
    4 5
    3
    6
    """
    t2 = BSTwithExtra()
    l2 = [1, 2, 3, 4, 5, 6, 7, 8]
    for i in l2:
        t2.insert(i)
    print(t2.inorder())
    print(t2.preorder())
    assert(t2.is_balanced() == 0)
    print("Tests Passed!")


if __name__ == "__main__":
    main()
Ancestor in a BST
The example below finds the lowest-level common ancestor of two nodes in
a binary search tree, given the tree's preorder path:
[trees/traversals/BST_ancestor.py]
from BST_traversal import TranversalBST


def find_ancestor(path, low_value, high_value):
    """Find the lowest common ancestor in a BST"""
    while path:
        current_value = path[0]
        if current_value < low_value:
            try:
                path = path[2:]
            except:
                return current_value
        elif current_value > high_value:
            try:
                path = path[1:]
            except:
                return current_value
        elif low_value <= current_value <= high_value:
            return current_value
    return None
def test_find_ancestor():
    path = [10, 5, 1, 6, 15, 11, 50]  # preorder path of the example BST
    assert(find_ancestor(path, 1, 6) == 5)
    assert(find_ancestor(path, 1, 11) == 10)
    assert(find_ancestor(path, 11, 50) == 15)
    assert(find_ancestor(path, 5, 15) == 10)