Data Structures &
Algorithms using
Kotlin
Second Edition

By Hemant Jain

Data Structures & Algorithms using Kotlin
Hemant Jain

Copyright © Hemant Jain, 2022. All Rights Reserved.

Hemant Jain asserts the moral right to be identified as the author of this work.

All rights reserved. No part of this publication may be reproduced, stored in or


introduced into a retrieval system, or transmitted, in any form, or by any means
(electrical, mechanical, photocopying, recording or otherwise) without the prior written
permission of the author, except in the case of very brief quotations embodied in
critical reviews and certain other non-commercial uses permitted by copyright law. Any
person who does any unauthorised act in relation to this publication may be liable to
criminal prosecution and civil claims for damages.
ACKNOWLEDGMENT

The author is very grateful to GOD ALMIGHTY for his grace and blessing.

I would like to express profound gratitude to my family and friends for their invaluable
encouragement, supervision and useful suggestions throughout the writing of this book.
Their support and continuous guidance enabled me to complete this work successfully.

Hemant Jain
Table of Contents
CHAPTER 0: ABOUT THIS BOOK...............................................................................................................................1
CHAPTER 1: ALGORITHMS ANALYSIS.......................................................................................................................2
CHAPTER 2: APPROACH TO SOLVE PROBLEMS......................................................................................................24
CHAPTER 3: ABSTRACT DATA TYPE........................................................................................................................29
CHAPTER 4: SORTING............................................................................................................................................43
CHAPTER 5: SEARCHING........................................................................................................................................82
CHAPTER 6: LINKED LIST......................................................................................................................................153
CHAPTER 7: STACK...............................................................................................................................................206
CHAPTER 8: QUEUE.............................................................................................................................................243
CHAPTER 9: TREE.................................................................................................................................................267
CHAPTER 10: PRIORITY QUEUE / HEAPS..............................................................................................................346
CHAPTER 11: HASH TABLE...................................................................................................................................381
CHAPTER 12: GRAPHS..........................................................................................................................................396
CHAPTER 13: STRING ALGORITHMS....................................................................................................................447
CHAPTER 14: ALGORITHM DESIGN TECHNIQUES................................................................................................467
CHAPTER 15: BRUTE FORCE ALGORITHM............................................................................................................470
CHAPTER 16: GREEDY ALGORITHM.....................................................................................................................475
CHAPTER 17: DIVIDE AND CONQUER..................................................................................................................492
CHAPTER 18: DYNAMIC PROGRAMMING............................................................................................................503
CHAPTER 19: BACKTRACKING..............................................................................................................................541
CHAPTER 20: COMPLEXITY THEORY.....................................................................................................................550
APPENDIX............................................................................................................................................................559
Table of Contents
CHAPTER 0: ABOUT THIS BOOK...............................................................................................................................1
WHAT THIS BOOK IS ABOUT........................................................................................................................................1
PREREQUISITES.........................................................................................................................................................1
WHO SHOULD TAKE THIS COURSE ?..............................................................................................................................1
CODE DOWNLOADS...................................................................................................................................................1
END....................................................................................................................................................................... 1
CHAPTER 1: ALGORITHMS ANALYSIS.......................................................................................................................2
INTRODUCTION........................................................................................................................................................2
ASYMPTOTIC ANALYSIS OR ASYMPTOTIC NOTATIONS.......................................................................................................2
BIG-O NOTATION.....................................................................................................................................................2
OMEGA-Ω NOTATION...............................................................................................................................................3
THETA-Θ NOTATION.................................................................................................................................................4
COMPLEXITY ANALYSIS OF ALGORITHMS........................................................................................................................4
GROWTH OF FUNCTIONS............................................................................................................................................5
DERIVING AN ALGORITHM'S RUNTIME FUNCTION...........................................................................................................7
TIME COMPLEXITY EXAMPLES.....................................................................................................................................7
RECURSIVE FUNCTION.............................................................................................................................................13
COMPLEXITIES FROM RECURRENCE RELATION..............................................................................................................18
MASTER THEOREM.................................................................................................................................................21
EXERCISE...............................................................................................................................................................23
CHAPTER 2: APPROACH TO SOLVE PROBLEMS......................................................................................................24
INTRODUCTION.......................................................................................................................................................24
CONSTRAINTS........................................................................................................................................................24
IDEA GENERATION..................................................................................................................................................25
COMPLEXITIES........................................................................................................................................................26
CODING................................................................................................................................................................27
TESTING................................................................................................................................................................27
EXAMPLE..............................................................................................................................................................28
SUMMARY.............................................................................................................................................................28
CHAPTER 3: ABSTRACT DATA TYPE........................................................................................................................29
ABSTRACT DATA TYPE (ADT)...................................................................................................................................29
DATA-STRUCTURE...................................................................................................................................................29
KOTLIN COLLECTION FRAMEWORK.............................................................................................................................30
ARRAY..................................................................................................................................................................30
LINKED LIST...........................................................................................................................................................31
STACK..................................................................................................................................................................33
QUEUE.................................................................................................................................................................34
TREE.................................................................................................................................................................... 35
BINARY SEARCH TREE (BST).....................................................................................................................................35
HEAP / PRIORITY QUEUE.........................................................................................................................................37
HASH TABLE..........................................................................................................................................................39
ENDNOTE..............................................................................................................................................................42
CHAPTER 4: SORTING............................................................................................................................................43
INTRODUCTION.......................................................................................................................................................43
TYPES OF SORTING..................................................................................................................................................43
COMPARISON FUNCTION..........................................................................................................................................43
BUBBLE SORT........................................................................................................................................................44
INSERTION SORT.....................................................................................................................................................46
SELECTION SORT.....................................................................................................................................................47
MERGE SORT.........................................................................................................................................................49
PROS AND CONS OF MERGE-SORT ALGORITHM............................................................................................................51
EXTERNAL SORT (EXTERNAL MERGE-SORT).................................................................................................................51
QUICK SORT..........................................................................................................................................................52
QUICK SELECT........................................................................................................................................................54
COUNTING SORT....................................................................................................................................................56
RADIX SORT...........................................................................................................................................................57
HEAP SORT...........................................................................................................................................................59
TREE SORTING.......................................................................................................................................................59
SHELL SORT...........................................................................................................................................................59
BUCKET SORT........................................................................................................................................................61
GENERALISED BUCKET SORT.....................................................................................................................................62
STABLE SORTING....................................................................................................................................................63
COMPARISONS OF THE VARIOUS SORTING ALGORITHMS..................................................................................................63
SELECTION OF BEST SORTING ALGORITHM...................................................................................................................64
PROBLEMS BASED ON SORTING..................................................................................................................................65
EXERCISE...............................................................................................................................................................80
CHAPTER 5: SEARCHING........................................................................................................................................82
INTRODUCTION.......................................................................................................................................................82
WHY SEARCHING?..................................................................................................................................................82
DIFFERENT SEARCHING ALGORITHMS..........................................................................................................................82
LINEAR SEARCH OR SEQUENTIAL SEARCH – UNSORTED INPUT.........................................................................................82
LINEAR SEARCH – SORTED........................................................................................................................................83
BINARY SEARCH......................................................................................................................................................84
BINARY SEARCH IMPLEMENTATION USING RECURSION....................................................................................................85
FIBONACCI SEARCH.................................................................................................................................................86
HOW IS SORTING USEFUL IN SELECTION ALGORITHMS?..................................................................................................87
PROBLEMS IN SEARCHING........................................................................................................................................88
EXERCISE.............................................................................................................................................................151
CHAPTER 6: LINKED LIST......................................................................................................................................153
INTRODUCTION.....................................................................................................................................................153
LINKED LIST.........................................................................................................................................................153
TYPES OF LINKED LIST............................................................................................................................................153
SINGLY LINKED LIST...............................................................................................................................................154
DOUBLY LINKED LIST.............................................................................................................................................178
CIRCULAR LINKED LIST...........................................................................................................................................189
DOUBLY CIRCULAR LIST..........................................................................................................................................197
SKIP LIST.............................................................................................................................................................201
USES OF LINKED LIST.............................................................................................................................................203
EXERCISE.............................................................................................................................................................204
CHAPTER 7: STACK...............................................................................................................................................206
INTRODUCTION.....................................................................................................................................................206
THE STACK ABSTRACT DATA TYPE............................................................................................................................206
STACK USING ARRAY..............................................................................................................................................207
STACK USING ARRAY WITH MEMORY MANAGEMENT....................................................................................................209
STACK USING LINKED LIST.......................................................................................................................................210
SYSTEM STACK AND FUNCTION CALLS.......................................................................................................................213
PROBLEMS IN STACK.............................................................................................................................................214
USES OF STACK....................................................................................................................................................241
EXERCISE.............................................................................................................................................................241
CHAPTER 8: QUEUE.............................................................................................................................................243
INTRODUCTION.....................................................................................................................................................243
THE QUEUE ABSTRACT DATA TYPE..........................................................................................................................243
QUEUE USING ARRAY............................................................................................................................................244
QUEUE USING CIRCULAR LINKED LIST.......................................................................................................................245
PROBLEMS IN QUEUE............................................................................................................................................248
USES OF QUEUE...................................................................................................................................................265
EXERCISE.............................................................................................................................................................265
CHAPTER 9: TREE.................................................................................................................................................267
INTRODUCTION.....................................................................................................................................................267
TERMINOLOGY IN TREE..........................................................................................................................................267
BINARY TREE.......................................................................................................................................................269
TYPES OF BINARY TREES.........................................................................................................................................270
PROBLEMS IN BINARY TREE....................................................................................................................................272
BINARY SEARCH TREE (BST)...................................................................................................................................292
PROBLEMS IN BINARY SEARCH TREE (BST)................................................................................................................292
SEGMENT TREE....................................................................................................................................................306
BINARY INDEX TREE / FENWICK TREE.......................................................................................................................310
AVL TREE...........................................................................................................................................................313
RED-BLACK TREE..................................................................................................................................................319
SPLAY TREE.........................................................................................................................................................331
B TREE...............................................................................................................................................................336
B+ TREE.............................................................................................................................................................341
THREADED BINARY TREE........................................................................................................................................342
USES OF TREES.....................................................................................................................................................343
EXERCISE.............................................................................................................................................................343
CHAPTER 10: PRIORITY QUEUE / HEAPS..............................................................................................................346
INTRODUCTION.....................................................................................................................................................346
TYPES OF HEAP....................................................................................................................................................347
HEAP ADT OPERATIONS........................................................................................................................................348
OPERATION ON HEAP............................................................................................................................................348
HEAP SORT.........................................................................................................................................................358
PROBLEMS IN HEAP..............................................................................................................................................363
BINOMIAL HEAP...................................................................................................................................................369
FIBONACCI HEAP..................................................................................................................................................374
USES OF HEAP.....................................................................................................................................................378
EXERCISE.............................................................................................................................................................379
CHAPTER 11: HASH TABLE...................................................................................................................................381
INTRODUCTION.....................................................................................................................................................381
HASH-TABLE........................................................................................................................................................381
HASHING WITH OPEN ADDRESSING..........................................................................................................................383
HASHING WITH SEPARATE CHAINING.........................................................................................................................387
SET IMPLEMENTATION OF KOTLIN COLLECTIONS.........................................................................................................389
DICTIONARY IMPLEMENTATION IN KOTLIN COLLECTION................................................................................................389
PROBLEMS IN HASHING.........................................................................................................................................390
USES OF HASH-TABLE...........................................................................................................................................393
EXERCISE.............................................................................................................................................................393
CHAPTER 12: GRAPHS..........................................................................................................................................396
INTRODUCTION.....................................................................................................................................................396
GRAPH TERMINOLOGY...........................................................................................................................................397
GRAPH REPRESENTATION.......................................................................................................................................401
GRAPH TRAVERSALS..............................................................................................................................................404
DEPTH FIRST TRAVERSAL........................................................................................................................................405
BREADTH FIRST TRAVERSAL....................................................................................................................................407
DFS & BFS BASED PROBLEMS................................................................................................................................410
MINIMUM SPANNING TREE (MST)..........................................................................................................................426
EULER PATH AND EULER CIRCUIT.............................................................................................................................431
SHORTEST PATH ALGORITHMS IN GRAPH..................................................................................................................433
HAMILTONIAN PATH.............................................................................................................................................441
HAMILTONIAN CIRCUIT..........................................................................................................................................443
TRAVELLING SALESMAN PROBLEM (TSP)..................................................................................................................444
USES OF GRAPH ALGORITHMS.................................................................................................................................446
EXERCISE.............................................................................................................................................................446
CHAPTER 13: STRING ALGORITHMS....................................................................................................................447
INTRODUCTION.....................................................................................................................................................447
STRING MATCHING...............................................................................................................................................447
DICTIONARY / SYMBOL TABLE.................................................................................................................................451
PROBLEMS IN STRING............................................................................................................................................458
EXERCISE.............................................................................................................................................................465
CHAPTER 14: ALGORITHM DESIGN TECHNIQUES................................................................................................467
INTRODUCTION.....................................................................................................................................................467
BRUTE FORCE ALGORITHM.....................................................................................................................................467
GREEDY ALGORITHM.............................................................................................................................................468
DIVIDE AND CONQUER...........................................................................................................................................468
DYNAMIC PROGRAMMING......................................................................................................................................468
BACKTRACKING....................................................................................................................................................469
CONCLUSION.......................................................................................................................................................469
CHAPTER 15: BRUTE FORCE ALGORITHM............................................................................................................470
INTRODUCTION.....................................................................................................................................................470
PROBLEMS IN BRUTE FORCE ALGORITHM..................................................................................................................470
CONCLUSION.......................................................................................................................................................474
CHAPTER 16: GREEDY ALGORITHM.....................................................................................................................475
INTRODUCTION.....................................................................................................................................................475
PROBLEMS ON GREEDY ALGORITHM.........................................................................................................................475
CONVEX-HULL PROBLEM........................................................................................................................................489
CONCLUSION.......................................................................................................................................................491
CHAPTER 17: DIVIDE AND CONQUER..................................................................................................................492
INTRODUCTION.....................................................................................................................................................492
GENERAL DIVIDE & CONQUER RECURRENCE..............................................................................................................493
PROBLEMS ON DIVIDE & CONQUER ALGORITHM........................................................................................................493
STRASSEN’S MATRIX MULTIPLICATION......................................................................................................................499
EXERCISE.............................................................................................................................................................501
CHAPTER 18: DYNAMIC PROGRAMMING............................................................................................................503
INTRODUCTION.....................................................................................................................................................503
DYNAMIC PROGRAMMING METHODS.......................................................................................................................504
TABULATION (BOTTOM-UP)...................................................................................................................................504
MEMOIZATION (TOP-DOWN).................................................................................................................................505
DYNAMIC PROGRAMMING PATTERNS.......................................................................................................................506
TYPE 1 : MAXIMUM / MINIMUM COST TO REACH SOME STATE PROBLEMS......................................................................508
TYPE 2 : DISTINCT WAYS TO REACH SOME STATE PROBLEMS.........................................................................................513
TYPE 3 : MERGING INTERVALS TO GET OPTIMAL SOLUTION PROBLEMS............................................................................517
TYPE 4 : SUBSEQUENCE, SUBSTRINGS AND STRING MANIPULATION PROBLEMS................................................................525
TYPE 5 : STATE SELECTION KIND OF PROBLEMS..........................................................................................................534
EXERCISE.............................................................................................................................................................539
CHAPTER 19: BACKTRACKING..............................................................................................................................541
INTRODUCTION.....................................................................................................................................................541
PROBLEMS ON BACKTRACKING ALGORITHM...............................................................................................................542
EXERCISE.............................................................................................................................................................549
CHAPTER 20: COMPLEXITY THEORY.....................................................................................................................550
INTRODUCTION.....................................................................................................................................................550
DECISION PROBLEM..............................................................................................................................................550
COMPLEXITY CLASSES............................................................................................................................................550
CLASS P PROBLEMS..............................................................................................................................................551
CLASS NP PROBLEMS............................................................................................................................................552
REDUCTION.........................................................................................................................................................553
NP HARD PROBLEMS............................................................................................................................................554
NP-COMPLETE PROBLEMS.....................................................................................................................................554
CLASS CO-NP PROBLEMS.......................................................................................................................................558
END NOTE..........................................................................................................................................................558
APPENDIX............................................................................................................................................................559
CHAPTER 0: ABOUT THIS BOOK

What this book is about


This book will take you through basic to advanced data structures and algorithms. Data structures
define how data is arranged in memory for fast and efficient access. An algorithm is a finite set of
unambiguous instructions to solve problems by manipulating various data structures.

Designing efficient algorithms is an important skill that all software companies look for. Most
interviews at software companies focus on knowledge of data structures and algorithms. Apart from
knowing a programming language, you also need a good command of these key computer-science
fundamentals, not only to crack the interview but also to excel in your job as a software engineer.

Prerequisites
You should have a working knowledge of the Kotlin programming language. You do not need to be an
expert in Kotlin, but you should be familiar with the concepts of classes, functions, references, and
recursion.

Who should take this course ?


If you are planning to pursue a career in the Kotlin language, to get better at it, or to apply for a job,
this book is for you. If you put a reasonable amount of time into this book by reading the various
concepts and solving the problems on data structures, you can become an expert in data structures
and algorithms.

Code downloads
You can download the code of the solved examples in the book from the author's GitHub repositories at
https://GitHub.com/Hemant-Jain-Author/. There the author has solved the examples in various
programming languages such as C, C++, C#, Java, Python, JavaScript, Swift, GoLang, Ruby, etc.

End
It is highly recommended that you read each problem statement and try to solve the problem yourself;
only then should you look into the solution to understand the approach taken in this book. Practising
more and more problems will sharpen your thinking, and you will be able to handle unseen problems
in an interview. We recommend that you practise all the problems given in this book, then solve more
problems from online resources like www.topcoder.com, www.careercup.com, https://leetcode.com/ etc.

CHAPTER 1: ALGORITHMS ANALYSIS

Introduction
An Algorithm is a finite set of unambiguous steps or instructions to solve a given problem. Knowledge
of algorithms helps us to get desired results faster by applying the appropriate algorithm. We learn by
experience. With experience, it becomes easy to solve new problems. By looking into various problem-
solving algorithms or techniques, we begin to develop a pattern that will help us in solving similar
problems.

The properties of an algorithm are:


1. It takes zero or more inputs.
2. It should produce one or more outputs.
3. It should be Deterministic: it produces the same output if the same input is provided again.
4. It should be Correct: it should be able to process all valid inputs and provide the correct output.
5. It should Terminate in a finite time.
6. It should be Efficient: it should solve the problem using a reasonable amount of time and space.

The complexity of an algorithm is the amount of Time or Space required by the algorithm to process
the input and produce the output.

There are two types of Complexity:


1. First is Time-Complexity: how much time is required by an algorithm to produce the output for an
input of size n.
Time-Complexity is represented by the function T(n), the time required versus the input size n.
2. Second is Space-Complexity: how much memory an algorithm is going to consume to produce the
output for an input of size n.
Space-Complexity is represented by the function S(n), the memory used versus the input size n.

Asymptotic Analysis or Asymptotic Notations


Calculating the running time of any algorithm in mathematical units of computation is known as
Asymptotic Analysis. The efficiency of algorithms is calculated using asymptotic analysis, independent
of the given data set or programming language.

In most cases, we are interested in the order of growth of the algorithm instead of the exact time
required for running an algorithm. This time is also known as Asymptotic Running Time.

Big-O Notation
Definition: “f(n) is big-O of g(n)”, or f(n) = O(g(n)), if there are two positive constants c and n1 such that
f(n) ≤ c g(n) for all n ≥ n1.


In other words, c g(n) is an upper bound for f(n) for all n ≥ n1. The function f(n) grows no faster than
c g(n). For a sufficiently large value of the input n, c g(n) will always be greater than or equal to f(n).

Example: n^2 + n = O(n^2)
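One possible way to choose the constants of the definition for this example (an illustration, not the only valid choice):

n^2 + n ≤ n^2 + n^2 = 2 n^2 for all n ≥ 1

So with c = 2 and n1 = 1, f(n) = n^2 + n satisfies f(n) ≤ c g(n) for all n ≥ n1, which shows n^2 + n = O(n^2).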

Omega-Ω Notation
Definition: “f(n) is omega of g(n)”, or f(n) = Ω(g(n)), if there are two positive constants c and n1 such that
c g(n) ≤ f(n) for all n ≥ n1.
In other words, c g(n) is a lower bound for f(n). The function f(n) grows at least as fast as c g(n).

Example: Find the relationship between f(n) = c^n and g(n) = n^c, where c is a constant greater than 1.
An exponential function eventually grows faster than any fixed power of n, so:

f(n) = Ω(g(n))


Theta-Θ Notation
Definition: “f(n) is theta of g(n)”, or f(n) = Θ(g(n)), if there are three positive constants c1, c2 and n1 such
that c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n1.

The function g(n) is an asymptotically tight bound on f(n). The function f(n) grows at the same rate as g(n).

Example: n^3 + n^2 + n = Θ(n^3)
Example: n^2 + n = Θ(n^2)

Find the relationship between f(n) = 2n^2 + n and g(n) = n^2:
f(n) = O(g(n))
f(n) = Θ(g(n))
f(n) = Ω(g(n))
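One possible choice of constants that verifies these relationships (an illustration, not the only choice):

2 n^2 ≤ 2n^2 + n ≤ 3 n^2 for all n ≥ 1

With c1 = 2, c2 = 3 and n1 = 1 the definition of Θ is satisfied; the right-hand inequality gives the O bound and the left-hand inequality gives the Ω bound.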

Complexity analysis of algorithms


The complexity of an algorithm is analysed in three categories:
➢ Worst-Case Complexity: The worst-case complexity represents the maximum number of steps
required to execute an algorithm. It gives us the upper bound of an algorithm. Usually, we
use this complexity to judge algorithm performance.
➢ Best-Case Complexity: The best-case complexity represents the minimum number of steps
required to execute an algorithm. It gives us the lower bound of an algorithm.
➢ Average-Case Complexity: The average-case complexity represents the average number of steps
required to execute an algorithm. We take the average of the steps executed over all possible
inputs to calculate the average-case complexity.

Note: Worst-case complexity is used to guarantee the maximum time in which a particular algorithm
will finish. It is the most important time complexity. If the type of complexity is not mentioned,
always assume worst-case time complexity.
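As a quick illustration of the three cases, consider a simple linear search (a hypothetical helper written only for this explanation; linear search itself is covered in the Searching chapter):

// Searches for key in arr and returns its index, or -1 if it is not found.
fun linearSearch(arr: IntArray, key: Int): Int {
    for (i in arr.indices) {
        if (arr[i] == key) {
            return i // Best case: key is at index 0, only one comparison, O(1).
        }
    }
    return -1 // Worst case: key is absent, all n elements are compared, O(n).
}

// Average case: if the key is equally likely to be at any position,
// about n/2 comparisons are made on average, which is still O(n).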


Growth of functions
Let’s look at the growth rates of various functions. The size of the input is n.

Constant Time, O(1)


An algorithm is said to run in constant time if the output is produced in constant time, regardless of the
input size.

Examples:
1 Accessing the nth element of an array.
2 Push and pop operations of a stack.
3 Add and remove operations of a queue.
4 Accessing an element of a Hash-Table.

Linear Time, O(n)


An algorithm is said to run in linear time if the execution time of the algorithm is directly proportional to
the input size.

Examples:
1 Array operations like search element, find min, find max etc.
2 Linked list operations like traversal, find min, find max etc.

Note: If we need to traverse all the nodes of a data structure for some task, then the complexity cannot
be less than O(n).

Logarithmic Time, O(log(n))


An algorithm is said to run in logarithmic time if the execution time of the algorithm is proportional to
the logarithm of the input size. In each step of such an algorithm, a significant portion (e.g., half) of
the input is pruned without being traversed.

An example is the Binary search algorithm. We will read about this algorithm in this book.
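Binary search is covered in detail in the Searching chapter; the minimal sketch below (written only for this discussion, with an illustrative function name) shows why the work is logarithmic: every iteration discards half of the remaining range.

// Minimal binary search sketch over a sorted IntArray.
fun binarySearchSketch(arr: IntArray, key: Int): Boolean {
    var low = 0
    var high = arr.size - 1
    while (low <= high) {
        val mid = low + (high - low) / 2
        when {
            arr[mid] == key -> return true
            arr[mid] < key -> low = mid + 1 // Discard the left half.
            else -> high = mid - 1 // Discard the right half.
        }
    }
    return false
}

// The range shrinks from n to n/2 to n/4 and so on, so at most about
// log2(n) iterations are performed: O(log(n)).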

n.log(n) Time, O(n.log(n))


An algorithm is said to run in n*log(n) time if the execution time of an algorithm is proportional to the
product of input size and logarithm of the input size. In these algorithms, each time the input is divided
into half (or some proportion) and each portion is processed independently.

Examples are Merge-Sort, Quick-Sort (average case), Heap-Sort etc.


Quadratic Time, O(n^2)


An algorithm is said to run in quadratic time if the execution time of an algorithm is proportional to the
square of the input size. In these algorithms, each element is compared with all the other elements.
Examples are Bubble-Sort, Selection-Sort, Insertion-Sort

Exponential Time O(2^n)


In these algorithms, all possible subsets of the elements of the input data are generated. A common
example is generating the power set of a set.
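A minimal power-set sketch (the function name and the use of Kotlin lists are illustrative choices, not code from a later chapter):

// Generates all 2^n subsets of the input list.
fun <T> powerSet(items: List<T>): List<List<T>> {
    var subsets = listOf<List<T>>(emptyList())
    for (item in items) {
        // Every existing subset appears twice: once without and once with the new item.
        subsets = subsets + subsets.map { it + item }
    }
    return subsets
}

// For n items the result contains 2^n subsets, so the running time is O(2^n).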

Factorial Time O(n!)


In these algorithms, all possible permutations of the elements of input data are generated. Finding
permutations is a common example of factorial time.
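Similarly, a minimal permutation-generation sketch (recursive; the names are illustrative):

// Generates all n! permutations of the input list.
fun <T> permutations(items: List<T>): List<List<T>> {
    if (items.size <= 1) return listOf(items)
    val result = mutableListOf<List<T>>()
    for (i in items.indices) {
        val rest = items.subList(0, i) + items.subList(i + 1, items.size)
        // Fix items[i] as the first element and permute the remaining elements.
        for (perm in permutations(rest)) {
            result.add(listOf(items[i]) + perm)
        }
    }
    return result
}

// For n items there are n! permutations, so the running time grows as O(n!).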

A list of commonly occurring algorithm Time Complexity in increasing order:


Name          Notation
Constant      O(1)
Logarithmic   O(log(n))
Linear        O(n)
N.Log(N)      O(n.log(n))
Quadratic     O(n^2)
Polynomial    O(n^c), where c is a constant and c > 1
Exponential   O(c^n), where c is a constant and c > 1
Factorial     O(n!)
N-Power-N     O(n^n)

The time taken by an algorithm to run can vary dramatically with the size of the input. Some
algorithms take only seconds or minutes to run on huge inputs, whereas others may take days to
complete their execution. To understand how the rate of growth changes with the size of the input for
different functions, the following table presents the approximate number of steps required to run an
algorithm:

Function Growth Rate (Approximate)


N       O(1)   O(log(n))   O(n)     O(n.log(n))   O(n^2)   O(n^3)   O(2^n)
10      1      3           10       30            10^2     10^3     10^3
10^2    1      6           10^2     6x10^2        10^4     10^6     10^30
10^3    1      9           10^3     9x10^3        10^6     10^9     10^300
10^4    1      13          10^4     13x10^4       10^8     10^12    10^3000
10^5    1      16          10^5     16x10^5       10^10    10^15    10^30000
10^6    1      19          10^6     19x10^6       10^12    10^18    10^300000


Deriving an Algorithm's Runtime Function

Constants
If a line of code is a statement with basic operations, e.g., comparisons, assignments, or reading a
variable, it takes constant time. Thus, the time complexity of each such statement is O(1).

Loops
In loop, a repetition of a particular code for n times, where n is the size of the loop. Every statement
inside the loop has a runtime of O(1). The running time of a loop is a product of the running time of the
statement inside a loop and the number of iterations in the loop. Time Complexity is O(n)

Nested Loops
The running time of a nested loops is a product of the running time of the statements inside the loop
multiplied by a product of the size of all the loops. Time Complexity is O(n^c). Where c is the number of
loops. For two loops, it will be O(n^2)

Consecutive Statements
In this case, we add the running time of all the consecutive lines of code.

If-Else Statement
In this case, either “if” will run or “else” will run. So, the block with larger runtime will be considered.

Logarithmic Statement
In this case, each iteration will cut the input size into b pieces and consider one of the pieces for the
next iteration. Time complexity in this situation will be O(logb(n)).

Time Complexity Examples


Example 1.1: Single loop
fun fun1(n: Int): Int {
    var m = 0
    for (i in 0 until n) {
        m += 1
    }
    return m
}

// Testing code.
fun main() {
    println("N = 100, Number of instructions O(n):: " + fun1(100))
}

Output:
N = 100, Number of instructions in O(n)::100

Time Complexity: O(n), single loop takes linear time.


Example 1.2: Nested loops


fun fun2(n: Int): Int {
    var m = 0
    for (i in 0 until n) {
        for (j in 0 until n) {
            m += 1
        }
    }
    return m
}

// Testing code.
fun main() {
    println("N = 100, Number of instructions O(n^2):: " + fun2(100))
}

Output:
N = 100, Number of instructions in O(n^2)::10000

Time Complexity: O(n^2), two nested "for" loops take quadratic time. Both "for" loops execute n times
each, so the innermost statement executes n^2 times.

Example 1.3: Triple nested loops


fun fun3(n: Int): Int {
    var m = 0
    for (i in 0 until n) {
        for (j in 0 until n) {
            for (k in 0 until n) {
                m += 1
            }
        }
    }
    return m
}

// Testing code.
fun main() {
    println("N = 100, Number of instructions O(n^3):: " + fun3(100))
}

Output:
N = 100, Number of instructions in O(n^3)::1000000

Time Complexity: O(n^3), all three nested loops run for n iterations each, so the statement inside the
innermost loop runs n^3 times.

Example 1.4: Triple nested loops


fun fun4(n: Int): Int {
    var m = 0
    for (i in 0 until n) {
        for (j in i until n) {
            for (k in j + 1 until n) {
                m += 1
            }
        }
    }
    return m
}

// Testing code.
fun main() {
    println("N = 100, Number of instructions O(n^3):: " + fun4(100))
}

Output:
N = 100, Number of instructions in O(n^3)::166650

Time Complexity: The three nested loops each run at most n times, so the innermost statement
executes O(n^3) times. Because the inner loops start from i and j + 1, the exact count for n = 100 is
166650, which is roughly n^3/6. Time complexity is O(n^3).

Example 1.5: Arithmetic Progression


fun fun5(n: Int): Int {
    var m = 0
    for (i in 0 until n) {
        for (j in 0 until i) {
            m += 1
        }
    }
    return m
}

// Testing code.
fun main() {
    println("N = 100, Number of instructions O(n^2):: " + fun5(100))
}

Output:
N = 100, Number of instructions in O(n^2)::4950

Time Complexity: The statement inside the inner loop executes 0 times in the first iteration of the outer
loop, then 1 time, then 2 times, and so on, up to n-1 times. The total number of executions is
0 + 1 + 2 + ... + (n-1). This series is an arithmetic progression, which sums to n(n-1)/2 (4950 for
n = 100). Ignoring the constant factors, the time complexity is O(n^2).

Example 1.6: Arithmetic Progression


fun fun6(n: Int): Int {
    var m = 0
    for (i in 0 until n) {
        for (j in i downTo 0) {
            m += 1
        }
    }
    return m
}

// Testing code.
fun main() {
    println("N = 100, Number of instructions O(n^2):: " + fun6(100))
}


Output:
N = 100, Number of instructions in O(n^2)::5050

Time Complexity: The inner loop runs 1 time in the first iteration of the outer loop, then 2 times in the
second iteration, and so on, up to n times in the last iteration. The total 1 + 2 + ... + n = n(n+1)/2
(5050 for n = 100) is an arithmetic progression, so the time complexity is O(n^2).

Example 1.7: Nested loops / Geometric Progression


fun fun7(n: Int): Int {
    var m = 0
    var i = n
    while (i > 0) {
        var j = 0
        while (j < i) {
            m += 1
            j++
        }
        i /= 2
    }
    return m
}

// Testing code.
fun main() {
    println("N = 100, Number of instructions O(n):: " + fun7(100))
}

Output:
N = 100, Number of instructions in O(n)::197

For nested loops, look at the number of inner loop iterations. The inner loop first runs n times, then n/2
times, then n/4 times, and so on. The total n + n/2 + n/4 + n/8 + n/16 + ... is a geometric series bounded by 2n.
Time Complexity: O(n)

Example 1.8: Geometric Progression


fun fun8(n: Int): Int {
var m = 0
var i = 1
while (i <= n) {
var j = 0
while (j <= i) {
m += 1
j++
}
i *= 2
}
return m
}

// Testing code.
fun main() {
println("N = 100, Number of instructions O(n):: " + fun8(100))
}


Output:
N = 100, Number of instructions in O(n)::134

Time Complexity: The inner loop will run for 1, 2, 4, 8,… n times in successive iteration of the outer
loop. T(n) = O(1+ 2+ 4+ ….+n/2+n) = O(n)

Example 1.9: Double the iteration variable


fun fun9(n: Int): Int {
var m = 0
var i = 1
while (i < n) {
m += 1
i = i * 2
}
return m
}

// Testing code.
fun main() {
println("N = 100, Number of instructions O(log(n)):: " + fun9(100))
}

Output:
N = 100, Number of instructions O(log(n)):: 7

In each iteration, i value is doubled. So the value of i after k iterations will be 2^k.
2^k = n ...Will be the condition at the time of exit.
log(2^k) = log(n) ….Taking log both sides.
k = log(n)
Time Complexity: O(log(n))

Example 1.10: Half the iteration variable


fun fun10(n: Int): Int {
var m = 0
var i = n
while (i > 0) {
m += 1
i = i / 2
}
return m
}

// Testing code.
fun main() {
println("N = 100, Number of instructions O(log(n)):: " + fun10(100))
}

Output:
N = 100, Number of instructions in O(log(n))::7

The initial value of i is n. In each iteration, the value of “i” is halved.


So the value of i after k iterations will be n/2^k.

At the time of exit, n/ 2^k = 1


n = 2^k // At the time of exit.
log(2^k) = log(n) // Taking log both sides.
k = log(n)
Time Complexity: O(log(n))

Example 1.11: Consecutive Statements


fun fun11(n: Int): Int {
var m = 0
for (i in 0 until n) {
for (j in 0 until n) {
m += 1
}
}

for (i in 0 until n) {
for (k in 0 until n) {
m += 1
}
}
return m
}

// Testing code.
fun main() {
println("N = 100, Number of instructions O(n^2):: " + fun11(100))
}

Output:
N = 100, Number of instructions in O(n^2)::20000

These two groups of loops are consecutive, so their complexity will add up to form the final complexity
of the program.
Time Complexity: O(n^2) + O(n^2) = O(n^2)

Example 1.12:
fun fun12(n: Int): Int {
var m = 0
for (i in 0 until n) {
for (j in 0 until Math.sqrt(n.toDouble()).toInt()) {
m += 1
}
}
return m
}

// Testing code.
fun main() {
println("N = 100, Number of instructions O(n^(3/2)):: " + fun12(100))
}

Output:
N = 100, Number of instructions in O(n^(3/2))::1000


Time Complexity: The inner loop always runs √n times, so the total work is O(n * √n) = O(n^(3/2)).

Example 1.13: Multiple loops in O(n)


fun fun13(n: Int): Int {
var m = 0
var i = 0
var j = 0
while (i < n) {
while (j < n) {
m += 1
j++
}
i++
}
return m
}

// Testing code.
fun main() {
println("N = 100, Number of instructions O(n):: " + fun13(100))
}

Output:
N = 100, Number of instructions in O(n)::100

Time Complexity: In this example, j is not initialised for every iteration. For i=0, the loop of j executes
completely, making the value of j as n. But for the remaining values of i, the loop of j does not execute.
So the time complexity in this case is O(n).

Recursive Function
Recursion: A recursive function is a function that calls itself, directly or indirectly. A recursive method
consists of two parts: Termination Condition and Body (which includes recursive expansion).
1 Termination Condition: A recursive method always contains one or more terminating conditions.
A condition in which a recursive method processes a simple case and does not call itself.
2 Body (including recursive expansion): The main logic of the recursive method is contained in the
body of the method. It also contains the recursion expansion statement that, in turn, calls the
method itself.

Three important properties of the recursive algorithm are:


1. A recursive algorithm must have a termination condition.
2. A recursive algorithm must change its state, and shift state towards the termination condition.
3. A recursive algorithm must be capable of calling itself.

Note: The speed of a recursive program is slower because of stack overheads. If the same problem
can be solved using an iterative solution (using loops), then we should prefer an iterative solution in
place of recursion to avoid stack overhead.

Note: Without termination conditions, the recursive method may run forever and consume full-stack
memory.


Factorial
Problem: Given a value N, find N!, where N! = N * (N-1) * ... * 2 * 1. Use recursion to solve the problem.

Example 1. 14: Factorial Calculation.


fun factorial(i:Int):Int {
// Termination Condition
if (i <= 1) {
return 1
}
// Body, Recursive Expansion
return i * factorial(i - 1)
}

// Testing code.
fun main() {
println("Factorial:" + factorial(5))
}

Output:
Factorial:120

Analysis: We calculate factorial(i) as i*factorial(i-1) recursively.

Function F(n) calls F(n-1)


T(n) = T(n-1) + 1
T(n-1) = T(n-2) + 1
T(n-2) = T(n-3) + 1
T(n) = T(n-1) + 1 = (T(n-2) + 1) + 1 = T(n-2) + 2 = (T(n-3) + 1) + 2 = T(n-3) + 3
Similarly, for kth term T(n) = T(n-k) + k
for base case (n-k) = 1 or n – 1 = k
T(n) = T(1) + n -1 = n
Time Complexity is O(n)
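
As the earlier note mentions, an iterative version is usually preferred when it is easy to write, since it avoids the recursion stack overhead. The following is a minimal illustrative sketch (not part of the original example; the name factorialIterative is assumed) that computes the same factorial iteratively in O(n) time:

fun factorialIterative(n: Int): Int {
    var result = 1
    // Multiply result by 2, 3, ..., n.
    for (i in 2..n) {
        result *= i
    }
    return result
}

// Testing code.
fun main() {
    println("Factorial:" + factorialIterative(5)) // Prints Factorial:120
}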

Print Base 16 Integers


Problem: Given an integer in decimal form, print its hexadecimal form. Use recursion to solve the
problem.

Example 1.15: Generic print to some specific base method.


fun printInt(numberInput:Int, base:Int) {
var conversion = "0123456789ABCDEF"
var number = numberInput
var digit = conversion[number % base]

number = number / base


if (number != 0) {
printInt(number, base)
}
print(digit)
}


// Testing code.
fun main() {
var i = 1000
printInt(i, 16)
}

Output:
3E8

Analysis:
1 The base value is provided along with the number in the function parameters.
2 The remainder of the number divided by the base is calculated and stored in digit.
3 If the quotient (number / base) is non-zero, printInt() is called recursively with the quotient.
4 As a result, the higher-order digits are printed first, followed by the lower-order digits.

Time Complexity is O(n), where n is the number of digits.

Tower of Hanoi
Problem: In the Tower of Hanoi, we are given three rods and N number of disks, initially all the disks
are added to the first rod (the leftmost one) such that no smaller disk is under the larger one. The
objective is to transfer the entire stack of disks from the first tower to the third tower (the rightmost
one), moving only one disk at a time. Moving a larger disk onto a smaller one is not allowed.

Solution: If we want to transfer N disks from source to destination tower. Let's consider the bottom-
most disk, it is the largest disk so can not be placed to any other tower except the destination tower.
Also, all the disks above the largest disk need to be placed in the temporary tower, then only the
largest disk can be moved to the destination tower. So we move N-1 disks from source to temporary
tower and then move the lowest Nth disk from source to destination. Then we will move N-1 disks from
the temporary tower to the destination tower.

Example 1.16:
fun towerOfHanoi(num:Int, src:Char, dst:Char, temp:Char) {
if (num < 1) {
return
}

towerOfHanoi(num - 1, src, temp, dst)


println("Move " + num + " disk from peg " + src + " to peg " + dst)
towerOfHanoi(num - 1, temp, dst, src)
}

// Testing code.
fun main() {
var num = 3
println("Moves involved in the Tower of Hanoi are:")
towerOfHanoi(num, 'A', 'C', 'B')
}

Output:
Moves involved in the Tower of Hanoi are:
Move 1 disk from peg A to peg C
Move 2 disk from peg A to peg B
Move 1 disk from peg C to peg B
Move 3 disk from peg A to peg C
Move 1 disk from peg B to peg A
Move 2 disk from peg B to peg C
Move 1 disk from peg A to peg C

Analysis:
Recurrence Relation: T(n) = 2T(n-1) + 1
T(n-1) = 2T(n-2) + 1
T(n-2) = 2T(n-3) + 1
T(n) = 1 + 2(1 + 2T(n-2)) = 1 + 2 + 2^2 T(n-2) = 1 + 2 + 2^2 (1 + 2T(n-3)) = 1 + 2 + 2^2 + 2^3 T(n-3)
= 1 + 2 + 2^2 + ... + 2^n T(0) = (2^(n+1) - 1)/(2 - 1) = 2^(n+1) - 1 // Geometric progression sum
Time complexity will be O(2^n) ignoring the constants.
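
To see this growth concretely, the recurrence can also be evaluated directly. The following sketch (an illustration only; countMoves is an assumed helper name, not the book's code) counts the number of disk moves, which is 2^n - 1:

fun countMoves(num: Int): Long {
    if (num < 1) {
        return 0
    }
    // Moves(n) = Moves(n-1) + 1 + Moves(n-1).
    return 2 * countMoves(num - 1) + 1
}

// Testing code.
fun main() {
    println(countMoves(3))  // 7 = 2^3 - 1
    println(countMoves(10)) // 1023 = 2^10 - 1
}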

Greatest Common Divisor (GCD)


Problem: Find the greatest common divisor of two numbers using recursion.

Solution: There are many ways to find the greatest common divisor (GCD). We are using Euclid’s
algorithm to find the GCD. The following are steps of Euclid’s algorithm:
1. If n = 0 then GCD(n, m) = m, and this is a termination condition.
2. If m = 0 then GCD(n, m) = n, and this is a termination condition.
3. Write n in the form of a quotient and a remainder: n = mq + r, where q is the quotient and r is the remainder.
4. Since GCD(n, m) = GCD(m, r), use the Euclidean Algorithm to find GCD(m , r).

Example 1.17:
fun gcd(m:Int, n:Int):Int {
if (m == 0) {
return n
}

if (n == 0) {
return m
}

return gcd(n, m % n)
}

// Testing code.
fun main() {
println("Gcd is:: " + gcd(5, 2));
}

Output:
Gcd is:: 1

Time-Complexity: O(Max(log(m), log(n))), Each step input is reduced by nearly half or more.

Fibonacci number
Problem: Given N, find the Nth number in the Fibonacci series.

Solution: Fibonacci numbers are calculated by adding the sum of the previous two numbers.
Example 1.18:
fun fibonacci(n:Int):Int {
if (n <= 1) {
return n
}

return fibonacci(n-1) + fibonacci(n-2)


}

// Testing code.
fun main() {
println(fibonacci(10));
}

Output:
55

Analysis: Recurrence Relation: T(n) = 1 + T(n-1) + T(n-2)

T(n) = 1 + 2T(n-1) // Approximately
T(n-1) = 1 + 2T(n-2)
T(n-2) = 1 + 2T(n-3)
T(n) = 1 + 2(1 + 2T(n-2)) = 1 + 2 + 2^2 T(n-2) = 1 + 2 + 2^2 (1 + 2T(n-3)) = 1 + 2 + 2^2 + 2^3 T(n-3)
= 1 + 2 + 2^2 + ... + 2^n T(0) = (2^(n+1) - 1)/(2 - 1) = 2^(n+1) - 1 // Geometric progression sum
Time complexity is O(2^n), ignoring the constants.

Note: - There is an inefficiency in the solution. We will look for a better solution in the coming chapters.
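
As an illustration of why the recursion above is wasteful (it recomputes the same values repeatedly), the same series can be produced iteratively in O(n) time. The sketch below is only for comparison and is not necessarily the improved solution the book presents later; the name fibonacciIterative is assumed:

fun fibonacciIterative(n: Int): Int {
    if (n <= 1) {
        return n
    }
    var prev = 0
    var curr = 1
    for (i in 2..n) {
        val next = prev + curr // next Fibonacci number is the sum of the previous two.
        prev = curr
        curr = next
    }
    return curr
}

// Testing code.
fun main() {
    println(fibonacciIterative(10)) // 55
}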


Complexities from Recurrence Relation


Example 1.19: Find complexity of the function with the following recurrence relation.
T(n) = 1 if n = 0
T(n) = T(n-1) + 1 if n > 0
Solution:
T(n) = T(n-1) + 1
T(n-1) = T(n-2) + 1
T(n-2) = T(n-3) + 1
T(n) = ( T(n-2) + 1) + 1 = T(n-2) + 2 = (T(n-3) + 1) + 2 = T(n-3) + 3
T(n) = T(n-k) + k
base case when T(0) = 1, n-k = 0 => n = k
T(n) = T(0) + n = 1 + n
Time Complexity is O(n)

Example 1.20: Find complexity of the function with the following recurrence relation.

T(n) = 1 if n = 0
T(n) = T(n-1) + n if n > 0
Solution:
T(n) = T(n-1) + n
T(n-1) = T(n-2) + n
T(n-2) = T(n-3) + n
T(n) = ( T(n-2) + n) + n = T(n-2) + 2n = (T(n-3) + n) + 2n = T(n-3) + 3n
T(n) = T(n-k) + kn
base case when T(0) = 1, n-k = 0 => n = k
T(n) = T(0) + n*n = 1 + n^2
Time Complexity is O(n^2)

Example 1.21: Find complexity of the function with the following recurrence relation.

T(n) = 1 if n = 0
T(n) = T(n-1) + log(n) if n > 0

Solution:
T(n) = T(n-1) + log(n)
T(n-1) = T(n-2) + log(n) // for simplicity make log(n-1) as log(n)
T(n-2) = T(n-3) + log(n)
T(n) = ( T(n-2) + log(n)) + log(n) = T(n-2) + 2log(n) = (T(n-3) + log(n)) + 2log(n) = T(n-3) + 3log(n)

T(n) = T(n-k) + klog(n)


base case when T(0) = 1, n-k = 0 => n = k
T(n) = T(0) + n*log(n) = 1 + n.log(n)
Time Complexity is O(n.log(n))

Recurrence Relation                                          Time-Complexity
T(n) = T(n-1) + 1                                            O(n)
T(n) = T(n-1) + n                                            O(n^2)
T(n) = T(n-1) + log(n)                                       O(n.log(n))
T(n) = T(n-c) + 1, c is a constant                           O(n), complexity is not changed by c.
T(n) = T(n-c) + b, c is a constant and b is a polynomial     O(n*b), generalises the above cases.

Example 1.22: Find complexity of the function with the following recurrence relation.

T(n) = 1 if n = 1
T(n) = T(n/2) + n if n > 1
Solution:
T(n) = T(n/2) + n
T(n/2) = T(n/2^2) + n/2 // substituting n as n/2
T(n/2^2) = T(n/2^3) + n/2^2 // substituting n as n/2^2
T(n) = (T(n/2^2) + n/2) + n = T(n/2^2) + n/2 + n
T(n) = (T(n/2^3) + n/2^2) + n/2 + n = T(n/2^3) + n/2^2 + n/2 + n
T(n) = T(n/2^k) + n/2^(k-1) + ... + n/2^2 + n/2 + n
base case when n = 2^k,
T(n) = T(1) + n/2^(k-1) + ... + n/2^2 + n/2 + n
T(n) = T(1) + n * (1/2^(k-1) + ... + 1/2^2 + 1/2 + 1)
T(n) = 1 + 2n // the geometric series in the bracket sums to less than 2
Time Complexity is O(n)

Example 1.23: Find complexity of the function with the following recurrence relation.

T(n) = 1 if n = 1
T(n) = 2T(n/2) + n if n > 1

Solution:
T(n) = 2T(n/2) + n
T(n/2) = 2T(n/2^2) + n/2 // substituting n as n/2
T(n/2^2) = 2T(n/2^3) + n/2^2 // substituting n as n/2^2
T(n) = 2(2T(n/2^2) + n/2) + n = 2^2 T(n/2^2) + 2n
T(n) = 2^2 (2T(n/2^3) + n/2^2) + 2n = 2^3 T(n/2^3) + 3n
T(n) = 2^k T(n/2^k) + kn
base case when n = 2^k,
k = log(n)
T(n) = n*T(1) + k*n
T(n) = n + k*n = n + n*log(n)
Time Complexity is O(n.log(n))

Example 1.24: Find complexity of the function with the following recurrence relation.

T(n) = 1 if n = 0
T(n) = 2T(n-1) + 1 if n > 0

Solution:
T(n) = 2T(n-1) + 1
T(n-1) = 2T(n-2) + 1
T(n-2) = 2T(n-3) + 1
T(n) = 2(2T(n-2) + 1) + 1 = 2^2 T(n-2) + 2 + 1 = 2^2 (2T(n-3) + 1) + 2 + 1 = 2^3 T(n-3) + 2^2 + 2 + 1
T(n) = 2^k T(n-k) + 2^(k-1) + ... + 2^2 + 2 + 1
base case when T(0) = 1, n-k = 0 => n = k
T(n) = 2^n T(0) + 2^(n-1) + ... + 2^2 + 2 + 1 = 2^n + 2^(n-1) + ... + 2^2 + 2 + 1 = 2^(n+1) - 1 // GP
Time Complexity O(2^n)

Example 1.25: Find complexity of the function with the following recurrence relation.

T(n) = 1 if n ≤ 2
T(n) = T(√n) + 1 if n > 2

Solution:
T(n) = T(n^(1/2)) + 1
T(n^(1/2)) = T(n^(1/4)) + 1
T(n^(1/4)) = T(n^(1/8)) + 1
T(n) = T(n^(1/2)) + 1 = (T(n^(1/4)) + 1) + 1 = T(n^(1/4)) + 2 = (T(n^(1/8)) + 1) + 2 = T(n^(1/8)) + 3
T(n) = T(n^(1/2^k)) + k
for base case, n^(1/2^k) = 2
(1/2^k) * log(n) = log(2) = 1 // taking log on both sides.
log(n) = 2^k
log(log(n)) = k * log(2) = k // taking log again.
Time Complexity is O(log(log(n)))


Master Theorem
The master theorem solves recurrence relations of the form: T(n) = a T(n/b) + f(n), Where a≥1 and
b>1. In this relation, “n” is the size of the input. "a" is the number of sub-problems in the recursion.
“n/b” is the size of each sub-problem. "f(n)" is the cost of the division of the problem into sub-problems
and merging the individual solutions of the sub-problems into the solution.

It is possible to determine an asymptotic tight bound in these three cases:


Case 1: When f(n) = O(n^(log_b(a) - ε)) for some constant ε > 0, then the final time complexity is T(n) = Θ(n^(log_b(a))).

Case 2: When f(n) = Θ(n^(log_b(a)) * log^k(n)) for some constant k ≥ 0, then the final time complexity is T(n) = Θ(n^(log_b(a)) * log^(k+1)(n)).

Case 3: When f(n) = Ω(n^(log_b(a) + ε)) for some constant ε > 0, and a*f(n/b) ≤ c*f(n) for some constant c < 1 (the regularity condition), then the final time complexity is T(n) = Θ(f(n)).

Master theorem flow diagram
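
The three cases above can also be written as a small decision procedure. The sketch below is only an illustration (the function name masterTheorem is assumed); it covers the common situation where f(n) has the form c * n^d * log^k(n) and assumes the regularity condition of case 3 holds:

import kotlin.math.ln

// Asymptotic bound for T(n) = a*T(n/b) + c*n^d*(log n)^k using the master theorem.
fun masterTheorem(a: Double, b: Double, d: Double, k: Int): String {
    val logba = ln(a) / ln(b)
    val eps = 1e-9 // tolerance for comparing d with log_b(a).
    return when {
        d < logba - eps -> "Theta(n^$logba)"            // Case 1
        d > logba + eps -> "Theta(n^$d * log^$k(n))"    // Case 3: Theta(f(n))
        else -> "Theta(n^$d * log^${k + 1}(n))"         // Case 2
    }
}

// Testing code.
fun main() {
    println(masterTheorem(2.0, 2.0, 1.0, 0)) // Merge sort: Theta(n^1.0 * log^1(n))
    println(masterTheorem(1.0, 2.0, 0.0, 0)) // Binary search: Theta(n^0.0 * log^1(n)), i.e. Theta(log(n))
    println(masterTheorem(2.0, 2.0, 2.0, 0)) // Theta(n^2.0 * log^0(n)), i.e. Theta(n^2)
}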

Example 1.26: In the case of Merge Sort, the time complexity is T(n) = 2T(n/2) + n.
Solution: In this example, a and b are both equal to 2. So, log_b(a) = log_2(2) = 1.
Which means f(n) = n = Θ(n^(log_2(2)) * log^0(n)). That means case 2 is applied and T(n) = Θ(n^(log_2(2)) * log^(0+1)(n)).
So, its final time complexity will be T(n) = Θ(n.log(n)).

Example 1.27: In the case of Binary Search, the time complexity is T(n) = T(n/2) + 1.
Solution: In this example, a is equal to 1 and b is equal to 2. So, log_b(a) = log_2(1) = 0.
Which means f(n) = 1 = Θ(n^(log_2(1)) * log^0(n)). That means case 2 is applied and T(n) = Θ(n^(log_2(1)) * log^(0+1)(n)).
So, its final time complexity will be T(n) = Θ(log(n)).


Example 1.28: In the case of binary tree traversal, the time complexity is T(n) = 2T(n/2) + 1.
Solution: In this example, a is equal to 2 and b is also equal to 2. So, log_b(a) = log_2(2) = 1.
Which means f(n) = 1 = O(n^(log_2(2) - 1)). That means case 1 is applied and T(n) = Θ(n^(log_2(2))).
So, its final time complexity will be T(n) = Θ(n).

Example 1.29: T(n) = 2 T(n/2) + n^2


Solution: In this example, a is equal to 2 and b is also equal to 2. So, log_b(a) = log_2(2) = 1.
Which means f(n) = n^2 = Ω(n^(log_2(2) + 1)). That means case 3 is applied and T(n) = Θ(f(n)).
So, its final time complexity will be T(n) = Θ(n^2).

Example 1.30: T(n) = 4 T(n/2) + n^2


Solution: In this example, a is equal to 4 and b is equal to 2. So, log_b(a) = log_2(4) = 2.
Which means f(n) = n^2 = Θ(n^(log_2(4)) * log^0(n)). That means case 2 is applied and T(n) = Θ(n^(log_2(4)) * log^(0+1)(n)).
So, its final time complexity will be T(n) = Θ(n^2 * log(n)).

Example 1.31: T(n) = T(n/2) + 2n


Solution: In this example, a is equal to 1 and b is equal to 2. So, log_b(a) = log_2(1) = 0.
Which means f(n) = 2n = Ω(n^(log_2(1) + 1)). That means case 3 is applied and T(n) = Θ(f(n)).
So, its final time complexity will be T(n) = Θ(n).

Example 1.32: T(n) = 16T(n/4) + n

Solution: In this example, a is equal to 16 and b is equal to 4. So, log_b(a) = log_4(16) = 2.
Which means f(n) = n = O(n^(log_4(16) - 1)). That means case 1 is applied and T(n) = Θ(n^(log_4(16))).
So, its final time complexity will be T(n) = Θ(n^2).

Example 1.33: T(n) = 2T(n/2) + n.log(n)

Solution: In this example, a is equal to 2 and b is also equal to 2. So, log_b(a) = log_2(2) = 1.
Which means f(n) = n.log(n) = Θ(n^(log_2(2)) * log^1(n)). That means case 2 is applied and T(n) = Θ(n^(log_2(2)) * log^(1+1)(n)).
So, its final time complexity will be T(n) = Θ(n.log^2(n)).

Example 1.34: T(n) = 2T(n/4) + n^0.5

Solution: In this example, a is equal to 2 and b is equal to 4. So, log_b(a) = log_4(2) = 0.5.
Which means f(n) = n^0.5 = Θ(n^(log_4(2)) * log^0(n)). That means case 2 is applied and T(n) = Θ(n^(log_4(2)) * log^(0+1)(n)).
So, its final time complexity will be T(n) = Θ(n^0.5 * log(n)).


Example 1.35: T(n) = 2T(n/4) + n^0.49

Solution: In this example, a is equal to 2 and b is equal to 4. So, log_b(a) = log_4(2) = 0.5.
Which means f(n) = n^0.49 = O(n^(log_4(2) - 0.01)). That means case 1 is applied and T(n) = Θ(n^(log_4(2))).
So, its final time complexity will be T(n) = Θ(n^0.5).

Example 1.36: T(n) = 3T(n/3) + √n

Solution: In this example, a is equal to 3 and b is also equal to 3. So, log_b(a) = log_3(3) = 1.
Which means f(n) = √n = O(n^(log_3(3) - 1/2)). That means case 1 is applied and T(n) = Θ(n^(log_3(3))).
So, its final time complexity will be T(n) = Θ(n).

Example 1.37: T(n) = 3T(n/3) + n/2

Solution: In this example, a is equal to 3 and b is also equal to 3. So, log_b(a) = log_3(3) = 1.
Which means f(n) = n/2 = Θ(n^(log_3(3)) * log^0(n)). That means case 2 is applied and T(n) = Θ(n^(log_3(3)) * log^(0+1)(n)).
So, its final time complexity will be T(n) = Θ(n.log(n)).

Exercise
1 True or false
➢ 5n + 10n^2 = O(n^2)
➢ n.log(n) + 4n = O(n)
➢ log(n^2) + 4.log(log(n)) = O(log(n))
➢ 12.n^(1/2) + 3 = O(n^2)
➢ 3^n + 11.n^2 + n^20 = O(2^n)
2 What is the best-case runtime complexity of searching an Array?
3 What is the average-case runtime complexity of searching an Array?
4 Given an array of positive numbers, you need to find the maximum sum under the constraint
that no two elements should be adjacent.

CHAPTER 2: APPROACH TO SOLVE PROBLEMS

Introduction
Theoretical knowledge of algorithms is essential, but it is not sufficient on its own. When an interviewer asks you to solve a problem, you can use our five-step approach to solve it. If you master this technique, you will outperform the majority of applicants in interviews.

Five steps for solving algorithm design questions are:


1. Constraints
2. Ideas Generation
3. Complexities analysis
4. Coding
5. Testing

Constraints
Solving a technical question is not just about knowing the algorithms and designing a good software
system. The interviewer is interested in seeing your approach to any given problem. Often, people
make mistakes by failing to ask clarifying follow-up questions about a given problem. They make a lot
of assumptions at once and start working with them. Before you start solving a problem, you need to
collect a lot of missing information from your interviewer.

In this step, you will write down all the problem’s constraints. Never attempt to solve a problem that
isn’t completely defined. Interview questions are not like exam paper questions, where all the details
about a problem are well-defined. The interviewer wants you to ask questions and clarify the problem
during the interview.

Suppose, when the interviewer says to write an algorithm to sort numbers. You need to ask the
following clarifying question:
1. The first thing you need to know is what sort of data is being given. Assume the interviewer
gives you the answer Integer.
2. The size of the data is the second piece of information you need to know. If the input data is
100 integers or 1 billion integers, the algorithm is different.

The basic guideline for the Constraints for an array of numbers:


1. How many elements are there in the array?
2. What is the range of value in each element? What is the min and max value?
3. What is the kind of data in each element? Is it an integer or a floating point?
4. Does the array contain unique data or not?


The basic guideline for the Constraints for an array of strings:


1. How many total elements are there in the array?
2. What is the length of each string? What is the min and max length?
3. Does the array contain unique data or not?

The basic guideline for the Constraints for a Graph


1. How many nodes are there in the graph?
2. How many edges are there in the graph?
3. Is it a weighted graph? What is the range of weights?
4. Does the graph have directed edges, or undirected edges?
5. Does the graph have a loop?
6. Does the graph have a negative sum loop?
7. Are there any self-loops in the graph?

We will see in the graph chapter that depending upon the constraints the algorithm applied changes
and so is the complexity of the solution.

Idea Generation
We will cover a lot of theoretical knowledge in this book. It is impossible to cover all the questions, as
new questions are created every day. Therefore, you should know how to handle new problems. Even
if you already know the solution to the problem asked by the interviewer, you still need to discuss it with the interviewer and work towards the solution together. You also need to analyse the problem, because the interviewer may modify the question a bit, and then the approach to solving it will vary.

How to solve an unseen problem? The solution to this problem is to learn a lot, and the more you
practice, the more you will be able to answer any unseen problem. When you’ve solved enough
problems, you’ll see a pattern in the questions and be able to answer unseen problems with ease.

Following is the strategy that you need to follow to solve an unknown problem:
1. Try to simplify the task at hand.
2. Try a few examples
3. Think of a suitable data structure.
4. Think about similar problems that you have already solved.

Try to simplify the task at hand


Let’s look into the following problem: Husbands and wives are standing at random in a line. Husbands
have been numbered, H1, H2, H3 and so on. Wives have been numbered, W1, W2, W3 and so on.
You need to arrange them so that H1 will stand first, followed by W1, then H2 followed by W2 and so
on.


At first glance it looks complicated, but it is a simple problem. Try to find a relation between each person and their final position.
P(Hi) = i * 2 – 1 and P(Wi) = i * 2

The rest of the algorithm, which works much like Insertion Sort, is left as an exercise for you, and you are done.

Try a few examples


If you try the above problem with an example of three husband-wife pairs, you can derive the same formula as shown in the previous section. Using more examples will also assist in solving the problem.

Think of a suitable data-structure


It’s simple to figure out which data structure would be more appropriate for some specific problems.
Throughout this book, we will see a variety of data structures. We must determine which data structure
would best meet our requirements.

Problem 1: If we want to find the minimum and maximum of a given sequence.


Analysis: The heap is most likely the data structure we’re searching for.

Problem 2: We are given a stream of data, at any time, we can be asked to tell the median value of
the data, and maybe we can be asked to pop median data.
Analysis: We may visualise a tree, maybe a balanced tree with the median at the root. Wait a minute!
It’s not straightforward to ensure that the tree root is a median. We can’t get the median from a heap,
although it can give us the minimum or maximum. What if we use two heaps, a max-heap and a min-
heap? The max heap will hold the smaller values, while the min-heap will have the larger values.
Furthermore, we will keep track of how many elements are in the heaps. It would help if you came up
with the rest of the algorithm on your own.

For every unseen problem, think about the data structures you know, and maybe one of them or some
combination of them will solve your problem. Think about similar problems you have already solved.

Problem 3: Given head pointers of two linked lists that intersect at some point. Find the point of
intersection. However, in place of the end of the linked list to be a null pointer, there is a loop.
Analysis: You know how to find the intersection point of two intersecting linked lists, and you know
how to find if a linked list has a loop (three-pointer solution). Therefore, you can combine both solutions
to solve the problem at hand.

Complexities
Solving a problem is not just finding a correct solution. The solution should be fast and should have
reasonable memory requirements. In the previous chapters, you learned about big-O notation. You
should be able to perform Big-O analysis. If you believe the solution you have provided is not optimal
and there is a better solution, then try to figure it out.


Most interviewers expect that you should be able to find the Time and Space Complexity of the
algorithms. You should be able to calculate the Time and Space Complexity quickly. Whenever you
are solving any problem, you should find the complexity associated with it. From this, you would be
able to choose the best solutions. In some problems there are some trade-offs between Space and
Time Complexity, so you should know these trade-offs. Taking a little extra space will save you a lot of
time and make your algorithm much faster.

Coding
At this stage, you have already captured all the constraints of the problem, suggested a few solutions,
evaluated the complexities of those solutions and selected the one for final coding. Never begin coding
without first discussing with the interviewer about constraints, Idea generation and complexity.

We are used to writing code in an IDE like Visual Studio, so many people struggle when asked to write code on a whiteboard or a blank sheet of paper. Therefore, you should practise coding on a sheet of paper. Think before you code, because there is no back button on a sheet of paper.
Always try to write modular code. Small functions need to be created so that the code is clean and
managed. If there is a requirement for a swap function, just use this function and tell the interviewer
that you will write it later. Everybody knows that you can write a swap function.

Testing
You’re not done even if the code is written. It is essential to validate the code using a variety of small
test cases. It shows that you understand the importance of testing. It also gives the interviewer
confidence that you would not write a bug-ridden program. Once you have finished coding, you should
go over the code line-by-line for some small test cases. This is to ensure that the code is functioning as
intended.

Following are some test cases to check:


• Normal test cases: These are the positive test cases, which contain the most common
scenario, and the emphasis is on the functioning of the code’s base logic.
For example, if we are solving some problems for a linked list, then this test may contain, what
happens when a linked list with three or four nodes is given as input. Before declaring the code
complete, you should always think about these test cases.
• Edge cases: These are the test cases, which are used to test the boundaries of the code.
Edge cases can help to make your code more robust. We must add checks in the code to
handle edge cases.
For example, with the same linked list algorithm we can generate edge cases to see how the code reacts when an empty list or a list with only one node is passed.

Note: Always follow these five steps, never jump to coding before doing constraint analysis, idea
generation, and complexity analysis: At last, never miss the testing step.


Example
Let us suppose the interviewer asks you to give the best sorting algorithm.
Some interviewees will directly jump to Quick-Sort O(n.log(n)). Oops, mistake! You need to ask many
questions before beginning to solve this problem.

Let’s look at these questions one by one.


Question 1: What is the kind of data? Are they integers?
Answer: Yes, they are integers.

Question 2: How much data are we going to sort?


Answer: Maybe thousands.

Question 3: What exactly is this data about?


Answer: They store a person’s age

Question 4: What kind of data structure is used to hold this data?


Answer: Data are given in the form of a list

Question 5: Can we modify the given data structure? And many, many more questions…
Answer: No, you cannot modify the data structure provided

So, we are all set to use the given information to make a perfect solution. From the first answer, we will
know the type of data we are going to get is an integer. From the second answer, we can conclude that
data size is limited. It’s only in some thousands. From the next answer, we can conclude that it’s age-
related data. So we can assume that a person’s age will be between 1 to 150. And lastly, we know that
data is in the form of a list and cannot change it.

To summarise, we can use bucket sort to sort the data. Since the range is only 1-150, we only need an
integer list of 150 elements. We don’t have to think about data overflow because the data is in
thousands, and we get the solution in linear time.
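
A minimal counting-sort style sketch for this age example could look like the code below. It is only an illustration of the idea; the function name sortAges and the exact range 1..150 follow the assumptions discussed above and are not the book's code:

// Sorts ages assuming every value lies in the range 1..150.
fun sortAges(ages: List<Int>): List<Int> {
    val count = IntArray(151) // count[age] = number of people with this age; index 0 unused.
    for (age in ages) {
        count[age]++
    }
    val sorted = ArrayList<Int>(ages.size)
    for (age in 1..150) {
        repeat(count[age]) { sorted.add(age) }
    }
    return sorted
}

// Testing code.
fun main() {
    println(sortAges(listOf(30, 5, 99, 30, 1))) // [1, 5, 30, 30, 99]
}

Because the input list is not modified, this also respects the constraint that the given data structure cannot be changed.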

Summary
At this point, you know the process of handling unseen problems very well. In the coming chapters, we will look into various data structures and the problems they solve. Some portions of this chapter may be hard to follow because they assume knowledge of the rest of the book, so you can reread this chapter after covering the remaining data structures. A huge number of problems are solved in this book; however, it is recommended that you first try to solve them yourself and then look at the solution. Always think about the complexity of the problem. In an interview, interaction is the key: get the problem described completely and discuss your approach with the interviewer.

CHAPTER 3: ABSTRACT DATA TYPE

Abstract Data Type (ADT)


An abstract data type (ADT) is a logical description of the data and its operations. An ADT is known as
a user’s point of view of data. An ADT is concerned about the possible values of data and interfaces
exposed by it. An ADT is not concerned about the actual implementation of the data structure.

For example, a user wants to store some integers and find their mean value. ADT for this data
structure would have two functions, one for adding integers and another to get the mean value. ADT
for this data structure does not talk about how exactly it will be implemented.
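
As a small illustration of this separation (a sketch only; the names MeanCalculator and ListMeanCalculator are assumptions, not part of the book), the ADT can be written as a Kotlin interface and implemented by any concrete data structure:

// The ADT: what operations are offered, not how they are implemented.
interface MeanCalculator {
    fun add(value: Int)
    fun mean(): Double
}

// One possible data structure behind the ADT.
class ListMeanCalculator : MeanCalculator {
    private val values = ArrayList<Int>()

    override fun add(value: Int) {
        values.add(value)
    }

    override fun mean(): Double {
        if (values.isEmpty()) return 0.0
        return values.sum().toDouble() / values.size
    }
}

// Testing code.
fun main() {
    val calc: MeanCalculator = ListMeanCalculator()
    calc.add(2)
    calc.add(4)
    println("Mean: " + calc.mean()) // Mean: 3.0
}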

Data-Structure
Data structures are concrete representations of data that are defined from the perspective of a
programmer. The data structure represents how data can be stored in memory. Each data structure
has its own set of advantages and disadvantages. Depending on the type of problem, we choose the
most appropriate data structure.

For example, according to the question, we can store data in arrays, stack, queue, linked list, and
many more.

Note: - In this chapter, we will be studying various data structures and their API. So that the user can
use them without knowing their internal implementation.


Kotlin Collection Framework


Kotlin programming language provides a Kotlin Collection Framework, which is a set of high quality,
high-performance & reusable data structures and algorithms.

The following advantages of using a Kotlin collection framework:


1. Programmers do not have to implement basic data structures and algorithms repeatedly. Thereby,
it prevents the reinvention of the wheel. Thus, the programmer can devote more effort to business
logic
2. The Kotlin Collection Framework code is a well-tested, high quality, high-performance code. Using
them increases the quality of the programs.
3. Development cost is reduced as basic data structures and algorithms are implemented in the
Collections framework.
4. Easy to review and understand programs written by other developers, as most Kotlin developers
use the Collection framework. In addition, the collection framework is well documented.

Array
Arrays are the simplest data structures that store items of the same data type.

Array ADT Operations


Below is the API of an array:
1. Adding an element at the kth position: a value can be stored in the array at the kth position in O(1) constant time. We just need to store the value at arr[k].
2. Reading the value stored at the kth position: accessing the value stored at some index of the array is also O(1) constant time. We just need to read the value stored at arr[k].
3. Substituting the value stored at the kth position with a new value: this also takes O(1) constant time.

Example 3.1:
fun main() {
val arr = IntArray(10)
for (i in 0..9) {
arr[i] = i
}
for (i in arr){
print(" " + i)
}
}

Output:
0 1 2 3 4 5 6 7 8 9


Application of Arrays
Applications of Arrays are:
1. Storing data in tabular format.
2. Used in the creation of Matrices. Online ticket booking system in which seats are represented
in the form of Matrices.
3. Used in the creation of various higher level data structures like Stacks, Queues, Heaps,
HashTables etc.

ArrayList implementation in Kotlin Collections


ArrayList<E> in Kotlin Collections is a data structure that implements the List<E> interface, which means that it can contain duplicate elements. ArrayList is an implementation of a dynamic array that can grow or shrink as needed. (Internally an array is used; when it becomes full, a bigger array is allocated and the old array values are copied into it.)

Example 3.2:
fun main() {
val al = ArrayList<Int>()
al.add(1) // add 1 to the end of the list
al.add(2) // add 2 to the end of the list

println("Array : $al")
println("Array Size : " + al.size)
println("Array IsEmpty : " + al.isEmpty())

al.removeAt(al.size - 1) // last element of array is removed.


println("Array : $al")

al.removeLast() // last element of array is removed.


println("Array IsEmpty : " + al.isEmpty())
}

Output:
Array : [1, 2]
Array Size : 2
Array IsEmpty : false
Array : [1]
Array IsEmpty : true

Linked List

A linked list is a dynamic data structure whose memory is allocated at run time. A linked list does not store its data contiguously; instead, each node contains a link that points to the next element in the list.


Performance-wise, linked lists are slower than arrays because there is no direct access to linked list
elements. A linked list is a useful data structure when we do not know the number of elements to be
stored ahead of time. There are many types of linked lists: linear, circular, doubly, doubly circular etc.

Linked list ADT Operations


Below is the API of the Linked list:
1. Insert(k) will insert an element at the start of the list. Just create a new element and move
pointers. So that this new element becomes the first element of the list. This operation will take
O(1) constant time.
2. Delete() will delete an element at the start of the list. We just need to move one pointer. This
operation will also take O(1) constant time.
3. Print() will display all the elements of the list. Start with the first element and then follow the
pointers. This operation will take O(N) time.
4. Find(k) will find the position of the element with the value k. Start with the first element and
follow the pointer until we get the value we are looking for or reach the end of the list. This
operation will take O(N) time.
5. IsEmpty() will check if the number of elements in the list is zero. Just check the head pointer of
the list, if it is Null then the list is empty, otherwise not empty. This operation will take O(1) time.

LinkedList implementation in Java Collections


LinkedList<E> in Java Collections is a data structure that also implements the List<E> interface.

Example 3.3:
import java.util.LinkedList

fun main() {
val ll = LinkedList<Int>()
ll.addFirst(2) // 2 is added to the list.
ll.addLast(10) // 10 is added to last of the list.
ll.addFirst(1) // 1 is added to first of the list.
ll.addLast(11) // 11 is added to last of the list.

println("Contents of Linked List: $ll")

ll.removeFirst()
ll.removeLast()
println("Contents of Linked List: $ll")
}

Output:
Contents of Linked List: [1, 2, 10, 11]
Contents of Linked List: [2, 10]


Analysis: Values are added to the back and front of the linked list. Then the values stored in the linked
list are printed. Then values are removed from the linked list from the front and back and again linked
list content is printed to the screen.

Stack
Stack is a data structure that follows the Last-In-First-Out (LIFO) principle. This means that the
element that is added last will be removed first.

Stack ADT Operations


Below is the API of the Stack:
1. Push(k) will add value k on the top of the stack.
2. Pop() will remove an element from the top of the stack and return its value.
3. Top() will return the value of the element on top of the stack.
4. Size() will return the number of elements in the stack.
5. IsEmpty() will tell us whether the stack is empty or not. It returns 1 if the stack is empty, else
return 0.

Note: All the above stack operations are implemented in O(1) time Complexity.

Stack implementation in Java Collection


Stack is implemented by calling the push and pop methods of the Stack<T> class.

Example 3.4:
import java.util.Stack

fun main() {
val stack = Stack<Int>()
stack.push(1)
stack.push(2)
stack.push(3)
println("Stack : $stack")
println("Stack size : " + stack.size)
println("Stack pop : " + stack.pop())
println("Stack top : " + stack.peek())
println("Stack isEmpty : " + stack.isEmpty())
}


Output:
Stack : [1, 2, 3]
Stack size : 3
Stack pop : 3
Stack top : 2
Stack isEmpty : false

Stack is also implemented by calling push and pop methods of the ArrayDeque<T> class.
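
The push() and pop() methods are available on java.util.ArrayDeque (Kotlin's own kotlin.collections.ArrayDeque exposes addLast()/removeLast()/last() instead, which give the same LIFO behaviour). A short illustrative sketch, assuming java.util.ArrayDeque is the class meant here:

import java.util.ArrayDeque

fun main() {
    val stack = ArrayDeque<Int>()
    stack.push(1) // 1 is placed on top of the stack.
    stack.push(2)
    stack.push(3)
    println("Stack pop : " + stack.pop())  // 3, last in first out.
    println("Stack top : " + stack.peek()) // 2
    println("Stack isEmpty : " + stack.isEmpty())
}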

Queue
A queue is a data structure that follows the First-In-First-Out (FIFO) principle. The first element added
to the queue first would be the first to be removed, and vice versa.

Queue ADT Operations:


Below is the API of the Queue:
1. Add(K) will add element k at the end of the queue.
2. Remove() will remove the first element at the front of the queue and return its value.
3. Front() will return the value of the element at the front of the queue.
4. Size() will return the number of elements in the queue.
5. IsEmpty() will check whether the queue is empty or not. If it is empty then return 1, else return
0.

Note: All the above queue operations are implemented in O(1) Time Complexity.

Queue implementation in Kotlin Collection


ArrayDeque<T> is the class implementation of a doubly ended queue. If we use add(), removeFirst()
and first() it will behave as a queue. (Moreover, if we use add(), removeLast() and last() it behaves as
a stack.)

Example 3.5:
fun main() {
val que = ArrayDeque<Int>()
que.add(1)
que.add(2)
que.add(3)

println("Queue : $que")
println("Queue size : " + que.size)
println("Queue peek : " + que.first())
println("Queue remove : " + que.removeFirst())
println("Queue isEmpty : " + que.isEmpty())
}

Output:
Queue : [1, 2, 3]
Queue size : 3
Queue peek : 1
Queue remove : 1
Queue isEmpty : false

Analysis: Values are added to the queue and printed to the screen. Since the queue is first-in-first-out,
the values that are added first are the first to come out of the queue.

Tree
A tree is a data structure that is organised in a hierarchy. Each element of the tree data structure is
called a node. The top node of the tree is called the root node. Each node in a tree, except the root,
has a parent node and zero or more child nodes. In the case of the last level of nodes, they have no
child. They are called leaf nodes. Where you need to store hierarchical records, the tree is the most
appropriate data structure to use.

A binary tree is a type of tree in which each node has at most two children ( 0, 1, or 2 ) which are
referred to as left child and right child.

Binary Search Tree (BST)


A binary search tree (BST) is a binary tree in which nodes are ordered in the following way:
1. The key in the left subtree is less than or equal to the key in its parent node.
2. The key in the right subtree is greater than the key in its parent node.


Binary Search Tree ADT Operations


Below is the API of the BST:
1. Insert(k) will insert an element k into the tree.
2. Delete(k) will delete an element k from the tree.
3. Search(k) will search a value k into the tree if it is present or not.
4. FindMax() will find the maximum value stored in the tree.
5. FindMin() will find the minimum value stored in the tree.

The average time complexity of all the above operations on a binary search tree is O(log(n)), the case
when the tree is balanced. The worst-case time complexity is O(n) when the tree is not balanced.
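
As a rough illustration of how these operations work internally (a minimal sketch with assumed names, not the book's later implementation, and without any balancing), a binary search tree with insert and search can be written as follows:

class BstNode(val key: Int) {
    var left: BstNode? = null
    var right: BstNode? = null
}

// Inserts key into the subtree rooted at node and returns the new root of that subtree.
fun insert(node: BstNode?, key: Int): BstNode {
    if (node == null) return BstNode(key)
    if (key <= node.key) node.left = insert(node.left, key)
    else node.right = insert(node.right, key)
    return node
}

// Returns true if key is present in the subtree rooted at node.
fun search(node: BstNode?, key: Int): Boolean {
    if (node == null) return false
    if (key == node.key) return true
    return if (key < node.key) search(node.left, key) else search(node.right, key)
}

// Testing code.
fun main() {
    var root: BstNode? = null
    for (k in intArrayOf(5, 2, 8, 1)) {
        root = insert(root, k)
    }
    println(search(root, 8)) // true
    println(search(root, 7)) // false
}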

TreeSet implementation in Java Collections


TreeSet<> is a class that implements the Set<> interface, which means that it stores unique elements.
TreeSet<> is implemented using a red-black balanced binary search tree in Java Collections. Since
TreeSet<> is implemented using a binary search tree, its elements are stored in sequential order.

Example 3.6:
import java.util.TreeSet

fun main() {
// Create a tree set.
val ts = TreeSet<String>()
// Add elements to the tree set.
ts.add("Banana")
ts.add("Apple")
ts.add("Mango")
println(ts)
println("Apple present : " + ts.contains("Apple"))
println("Grapes present : " + ts.contains("Grapes"))
ts.remove("Apple")
println("Apple present : " + ts.contains("Apple"))
}

Output:
[Apple, Banana, Mango]
Apple present : true
Grapes present : false
Apple present : false

Note: TreeSet is implemented using a binary search tree so add, remove, and contain methods have
logarithmic time complexity O(log(n)), where n is the number of elements in the set.

TreeMap implementation in Kotlin Collection


A Map<> is an interface that maps keys to values. A TreeMap<> is an implementation of Map<> and is
implemented using a red-black balanced binary tree so the key-value pairs are stored in sorted order.
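
A short usage sketch of TreeMap (illustrative only; this is not necessarily the example the book gives next):

import java.util.TreeMap

fun main() {
    val tm = TreeMap<String, Int>()
    tm["Banana"] = 30
    tm["Apple"] = 40
    tm["Mango"] = 20
    println(tm) // Keys are printed in sorted order: {Apple=40, Banana=30, Mango=20}
    println("Apple price : " + tm["Apple"])
    tm.remove("Apple")
    println("Apple present : " + tm.containsKey("Apple"))
}

Like TreeSet, the put, get, remove, and containsKey operations of TreeMap run in O(log(n)) time, where n is the number of key-value pairs stored in the map.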

36
Another random document with
no related content on Scribd:
THE ARCTIC WORLD.
CHAPTER I.
THE NORTH POLE—THRESHOLD OF THE UNKNOWN WORLD—THE
CIRCUMPOLAR REGIONS—THE FLORA OF THE NORTH—LIFE IN THE
POLAR WORLD—THE NORTH-WEST AND NORTH-EAST PASSAGES.

s the reader knows, the Poles are the two extremities of the
axis round which the Earth revolves. It is to the North Pole,
and the regions surrounding it, that the following pages will be
devoted.
The inhabitants of Western Europe, and more particularly those
of the British Isles, have a peculiar interest in the North Polar
Regions. Deriving their wealth and importance from their commercial
enterprise, and that commercial enterprise leading their ships and
seamen into the furthest seas, they have necessarily a vital concern
in the discovery of the shortest possible route from that side of the
Earth which they inhabit to the other, or eastern side; and this, more
particularly, because the East is rich in natural productions which are
of high value to the peoples of the West.
Now a glance at the map will show the reader that the traders of
Western Europe—the British, the French, the Dutch, the
Scandinavians—are situated on the northern shores of the Atlantic
Ocean, and that, to reach the Pacific Ocean or the Indian, only two
routes are at present open. For instance, they may cross the Atlantic
to the American coast, and, keeping southward, strike through
Magellan’s stormy Strait or round the bleak promontory of Cape
Horn into the Pacific, and then, over some thousands of miles of
water, proceed to Australia or Hindustan or China; or they may keep
along the African coast to the Cape of Good Hope, its southernmost
point, and so stretch across the warm Tropical seas to India and the
Eastern Archipelago. A third, an artificial route, has indeed of late
years been opened up; and ships, entering the Mediterranean, may
pass through the Suez Canal into the Red Sea. But this last-named
route is unsuitable for sailing-ships, and all three routes are laborious
and slow. How greatly the distance would be shortened were it
possible to navigate the Northern Seas, and, keeping along the north
coast of the American continent, to descend Behring’s Strait into the
Pacific! In other words, were that North-West Passage practicable,
which, for three centuries, our geographers and explorers so
assiduously and courageously toiled to discover! But a still shorter
route would be opened up, if we could follow a line drawn from the
British Islands straight across the North Pole to Behring’s Sea and
the Aleutian Archipelago. This line would not exceed 5000 miles in
length, and would bring Japan, China, and India within a very short
voyage from Great Britain. We should be able to reach Japan in
three or four weeks, to the obvious advantage of our extensive
commerce.
Hitherto, however, all efforts to follow out this route, and to throw
open this great ocean-highway between Europe and Asia, have
failed. Man has been baffled by Nature; by ice, and frost, and winds,
and climatic influences. With heroic perseverance he has sought to
gain the open sea which, it is believed, surrounds the Pole, but a
barrier of ice has invariably arrested his progress. His researches
have carried him within about 500 miles of the coveted point; but he
is as yet unable to move a step beyond this furthest limit of
geographical discovery. Immediately around the North Pole, within a
radius of eight to ten degrees or more, according to locality, still lies
an Unknown Region, on the threshold of which Science stands
expectant, eagerly looking forward to the day when human skill and
human courage shall penetrate its solitudes and reveal its secrets.
This Unknown Region comprises an area of 2,500,000 square
miles; an immense portion of the terrestrial surface to be shut out
from the knowledge of Civilized Man. Its further exploration, if
practicable, cannot but be rich in valuable results. Not only would it
furnish the shortest route from the West to the East, from
progressive Europe to conservative Asia, from the Atlantic to the
Pacific, but it could not fail to add in a very important degree to our
stores of scientific information. Sir Edward Sabine is surely right
when he says, that it is the greatest geographical achievement which
can be attempted, and that it will be the crowning enterprise of those
Arctic researches in which England has hitherto had the pre-
eminence.
We may briefly indicate to the reader some of the advantages
which might be expected from exploration in the Unknown Region. It
would unquestionably advance the science of hydrography, and lead
to a solution of some of the more difficult problems connected with
the Equatorial and Polar ocean-currents, those great movements of
the waters of which, as yet, we know so little.
A series of pendulum observations, it is said, at and near the
North Pole, would be of essential service to the science of geology.
We are unable, at present, for want of sufficient data, to form a
mathematical theory of the physical condition of the Earth, and to
ascertain its exact configuration. No pendulum observations have
been taken nearer than 600 or 620 miles to the North Pole.
Again: what precious information respecting the strange and
wonderful phenomena of magnetism and atmospheric electricity
would certainly be acquired! How much we have yet to learn in
reference to the Aurora, which can be learned only in high latitudes,
and at or near the point which apparently represents a magnetic
focus or centre!
It has also been pointed out by Mr. Markham that the climate of
Europe is largely affected by the atmospheric conditions of the Polar
area, in which the development of extremely low temperatures
necessarily leads to corresponding extreme changes of pressure,
and other atmospheric disturbances, whose influence extends far
into the Temperate Zone. For the satisfactory appreciation of these
phenomena, says Mr. Markham, a precise knowledge is required of
the distribution of land and water within the Polar Region; and any
addition to our knowledge of its unknown area, accompanied by
suitable observations of its meteorology, cannot fail to afford
improved means of understanding the meteorology of our own
country, and of the Earth generally.
There can be no doubt, too, that geology would profit, if we could
push our researches nearer to the Pole, and force our way through
the great barrier of the Polar ice. It is highly desirable, too, that we
should know more of that interesting class of animals, the Mollusca,
both terrestrial and aquatic, fresh-water and salt-water. Again: what a
wide field of inquiry is opened up by the Polar glaciers; their extent,
their elevation, their range, and the effects produced by the slow but
continuous motion of those huge ice-rivers over the surface of the
country. And the botanist has a right to calculate upon the discovery
of many precious forms of vegetable life in the Unknown Region.
The Arctic flora is by no means abundant, but it is peculiarly
interesting. In Greenland, besides numerous mosses, lichens, algæ,
and the like, flourish three hundred kinds of flowering plants, all of
which are natives of the Scandinavian peninsula; and Dr. Joseph
Hooker remarks that they exhibit scarcely any admixture of American
types, though these are found on the opposite coast of Labrador. It
would seem probable that in the warm period which preceded the
Glacial Age, the Scandinavian flora spread over the entire area of
the Polar Regions; but that during the Age of Ice it was gradually
driven within its present limits, only the hardier types surviving the
blight of the long lingering winter.
And what would be the gain to the zoologist? Why, it is a well-
known fact that life abounds in the Arctic waters, and especially
those minute organisms which play so important a part in the
formation of sedimentary deposits, and help to build up the terrestrial
crust. We have much to learn, moreover, of the habits and habitats
of the fish, the echinoderms, the molluscs, the corals, the sponges of
the extreme Northern Seas.
There are questions connected with the migrations of birds which
can be elucidated only by an exploration of the Unknown Region.
Multitudes which annually visit our shores in the winter and spring,
return in summer to the far North. This is their regular custom, and
obviously would not have become a custom unless it had been found
beneficial. Therefore we may assume that in the zone they frequent
they find some water which is not always frozen; some land on which
they can rest their weary feet; and an adequate supply of nourishing
food.
From Professor Newton we adopt, in connection with this
consideration, a brief account of the movements of one class of
migratory birds,—the Knots.[1]
The knot, or sandpiper, is something half-way between a snipe
and a plover. It is a very active and graceful bird, with rather long
legs, moderately long wings, and a very short tail. It swims
admirably, but is not often seen in the water; preferring to assemble
with its fellows on the sandy sea-shores, where it gropes in the sand
for food, or fishes in the rock-pools and shallow waters for the small
crustaceans. It is known both as the red and the ash-coloured
sandpiper, because it changes the colour of its plumage according to
the season of the year; a bright red in summer, a sober ashen-gray
in winter. Now, in the spring the knot seeks our island in immense
flocks, and after remaining on the coasts for about a fortnight, can be
traced proceeding gradually northwards, until it finally takes leave of
us. It has been noticed in Iceland and Greenland, but not to stay; the
summer there would be too rigorous for its liking, and it goes further
and further north. Whither? Where does it build its nest, and hatch its
young? We lose all trace of it for some weeks: what becomes of it?
Towards the end of summer back it comes to us in larger flocks
than before, and both old birds and young birds remain upon our
coasts until November, or, in mild seasons, even later. Then it wings
its flight to the south, and luxuriates in blue skies and balmy airs until
the following spring, when it resumes the order of its migrations.
Commenting upon these facts, Professor Newton infers that the
lands visited by the knot in the middle of summer are less sterile
than Iceland or Greenland; for certainly it would not pass over these
countries, which are known to be the breeding-places for swarms of
water-birds, to resort to regions not so well provided with supplies of
food. The food, however, chiefly depends on the climate. Wherefore
we conclude that beyond the northern tracts already explored lies a
region enjoying in summer a climate more genial than they possess.
Do any races of men with which we are now unacquainted inhabit
the Unknown Region? Mr. Markham observes that although scarcely
one-half of the Arctic world has been explored, yet numerous traces
of former inhabitants have been found in wastes which are at
present abandoned to the silence and solitude. Man would seem to
migrate as well as the inferior animals, and it is possible that tribes
may be dwelling in the mysterious inner zone between the Pole and
the known Polar countries.
The extreme points reached by our explorers on the ice-bound
Greenland coast are in about 82° on the west, and 76° on the east
side; these two points lying about six hundred miles apart. As man
has dwelt at both these points, and as they are separated from the
settlements further south by a dreary, desolate, uninhabitable
interval, it is not an extravagant conjecture that the unknown land to
the north has been or is inhabited. In 1818 a small tribe was
discovered on the bleak Greenland coast between 76° and 79° N.;
their southward range being bounded by the glaciers of Melville Bay,
and their northward by the colossal mass of the Humboldt Glacier,
while inland their way is barred by the Sernik-sook, a great glacier of
the interior. These so-called Arctic Highlanders number about one
hundred and forty souls, and their existence “depends on open pools
and lanes of water throughout the winter, which attract animal life.”
Wherever such conditions as these are found, man may be found.
We know that there are or have been inhabitants north of the
Humboldt Glacier, on the very threshold of the Unknown Region; for
Dr. Kane’s expedition discovered the runner of a sledge made of
bone lying on the beach immediately to the north of it. The Arctic
Highlanders, moreover, cherish a tradition that herds of musk-oxen
frequent an island situated far away to the north in an iceless sea.
Traces of these animals were found by Captain Hall’s expedition, in
1871–72, as far north as 81° 30′; and similar indications have been
noted on the eastern side of Greenland. In 1823, Captain Clavering
found twelve natives at Cape Borlase Warren, in lat. 79° N.; but
when Captain Koldewey, of the German expedition, wintered in the
same neighbourhood, in 1869, they had disappeared, though there
were traces of their occupancy, and ample means of subsistence.
Yet they cannot have gone southward, owing to insuperable natural
obstacles; they must have moved towards the North Pole.
We have thus indicated some of the results which may be
anticipated from further researches in the Unknown Region. It is not
to be forgotten, however, that “the unexpected always happens,” and
it is impossible to calculate definitely the consequences which may
ensue from a more extensive investigation. “Columbus,” it has been
justly said, “found very few to sympathize with him, or perceive the
utility of the effort on his part to go out into the unknown waste of
waters beyond the Strait of Gibraltar, in search of a new country.
Who can, at this time, estimate the advantages which have followed
upon that adventure? If now it should be possible to reach the Pole,
and to make accurate observations at that point, from the relation
which the Earth bears to the sun and to the whole stellar universe,
the most useful results are very likely to follow, in a more thorough
knowledge of our globe.”
The reader has now before him the particulars which will enable
him to form an idea of the extent and character of the undiscovered
region of the Pole. Roughly speaking, it is bounded by the 80th
parallel of latitude on the European side, except at a few points
where our gallant explorers have succeeded in crossing the
threshold; on the Asiatic side it descends as low as 75°; and to the
west of Behring Strait as low as 72°. Thus, it varies from 500 or 600
to 1400 or 1500 miles across. Below these parallels, and bounded
by the Arctic Circle, or, in some places, by the 60th parallel, extends
a vast belt of land and water which is generally known as the Arctic
or Circumpolar Regions. These have been more or less thoroughly
explored; and it is to a description of their principal features, their
forms of animal and vegetable life, and their natural phenomena, that
we propose to devote the present volume.
A DESERT OF ICE IN THE ARCTIC REGION.
It is important to remember that the northern shores of Europe,
Asia, and America are skirted by the parallel of 70°, and that the belt
between the 70th and 80th parallels, having been partially explored
by the seamen and travellers of various nations, intervenes as a kind
of neutral ground between the known and the unknown. We may,
indeed, formulate our statement thus: from the Pole to the 80th
degree stretches the unknown; from the 80th to the 70th, the partially
known; while, south of the 70th, we traverse the lands and seas
which human enterprise has completely conquered.
The Circumpolar Zone includes the northernmost portions of the
three great continents, Europe, Asia, and America; and by sea it has
three approaches or gateways: one, through the Northern Ocean,
between Norway and Greenland; another, through Davis Strait,—
both these being from the Atlantic; and a third, through Behring
Strait,—the entrance from the Pacific.
It will be seen that the Circumpolar Regions, as they are now
understood, and as we shall describe them in the following pages,
extend to the south of that imaginary line drawn by geographers
round the North Pole, at a distance from it equal to the obliquity of
the ecliptic, or 23° 30’. Within this circle, however, there is a period of
the year when the sun does not set; while there is another when he
is never seen, when a settled gloom spreads over the face of nature,
—this period being longer or shorter at any given point according as
that point is nearer to or further from the Pole.
But as animal and vegetable life are largely affected by climate, it
may be justly said that wherever an Arctic climate prevails there we
shall find an Arctic or Polar region; and, hence, many countries
below even the 60th parallel, such as Kamtschatka, Labrador, and
South Greenland, fall within the Circumpolar boundary.
The waters surrounding the North Pole bear the general
designation of the Arctic Ocean. But here again it is almost
impossible to particularize any uniform limit southward. It joins the
Pacific at Behring Strait in about lat. 66° N., and consequently in this
quarter extends fully half a degree beyond the Arctic Circle. At
Scoresby Sound, as at North Cape, where it meets the Atlantic, it is
intersected by the parallel of 71°, and consequently falls short of the
Arctic Circle by about 4° 30’.
In the Old World, the Polar Ocean, if we include its gulfs,
extends, in the White Sea, fully two degrees beyond the Arctic
Circle; while at Cape Severo, the northernmost point of Asia, in lat.
78° 25’ N., it is 11° 55’ distant from it. Finally, in the New World it is
everywhere confined within the Circle; as much as 5° at Point
Barrow, about 7° 30’ at Barrow Strait, and about 3° at the Hecla and
Fury Strait.
We may add that, so far as temperature is concerned, the great
gulfs known, in memory of their discoverers, as Davis Strait, Baffin
Bay, and Hudson Bay, are portions of the Arctic Ocean.
Of the more southerly area of this great ocean, the only section
which has been adequately explored to a distance from the
continent, and in the direction of the Pole, is that which washes the
north-east of America. Here we meet, under the collective name of
the Polar Archipelago, with the following islands:—Banks Land,
Wollaston Land, Prince Albert Land, Victoria Land, Prince Patrick
Island, Princess Royal Islands, Melville Island, Cornwallis Island,
North Devon, Beechey Island, Grinnell Land, and North Lincoln.
Further to the east lie Spitzbergen, Jan Mayen Island, Novaia
Zemlaia, New Siberia, and the Liakhov Islands. The chief straits and
inlets are Lancaster Sound, Barrow Strait, Smith Sound, Regent
Inlet, Hecla and Fury Strait, Wellington Channel, and Cumberland
Sound; while further westward are Belcher Channel, Melville Sound,
M’Clintock Channel, Banks Strait, and Prince of Wales Strait.
The Arctic Lands comprehend two well-defined sections, or
zones; that of the forests, and the treeless wastes.
To the latter belong the islands within the Arctic Circle, and also a
considerable tract of the northern continents, forming the “barrens” of
North America, and the “tundras” and “steppes” of European Russia
and Siberia.
The treeless character of this vast area of wilderness is owing to
the bleak sea-winds which drive, without let or hindrance, across the
islands and level shores of the Polar Ocean, compelling even the
most vigorous plant to bend before them and creep along the
ground.
Drearier scenes are nowhere presented than these stony
tundras, or their boundless swamps. Almost the only vegetation
consists of a few gray lichens and a few dull blackish-looking mosses; the stunted
flowers or crawling grasses that here and there occur do not relieve
the uniform desolation,—they serve simply to enhance its gloomy
character. In summer, indeed, the tundras are full of life; for the
spawning instinct of the salmon and the sturgeon impels them to
enter their rivers and seek the quiet recesses of their mysterious
lakes. The reindeer assemble in numerous herds to feed on the
herbage warmed into temporary vitality by the upward-slanting sun;
the whirr of countless wings announces the coming of the migratory
birds to breed, and feed their young, on the river-banks and the level
shores; and in their trail arrive the eagle and the hawk, intent on
prey.
But with the first days of September a change passes over the
scene. Animal life hastens to the more genial south; the birds
abandon the frozen wastes; the reindeer retires to the shelter of the
forests; the fish desert the ice-bound streams; and a terrible silence
reigns in the desolate wilderness, broken only by the harsh yelp of a
fox or the melancholy hooting of a snow-owl. For some eight or nine
months a deep shroud or pall of snow lies on the whitened plains. No
cheerful sunbeams irradiate it with a rosy glow; the sky is dull and
dark; and it seems as if Nature had been abandoned to eternal
Night.
But blank and dreary as the limitless expanse of snow appears, it
is the security of man in these far northern regions. It affords the
necessary protection to the scanty vegetable life against the rigour of
the long winter season. In Rensselaer Bay, Dr. Kane found, when the
surface temperature had sunk to -30°, a temperature at two feet
deep of -8°, at four feet deep of +2°, and at eight feet deep of +26°,
or no more than 6° below freezing-point. Hence, underneath their
thick frozen pall, the Arctic grasses and lichens maintain a struggling
existence, and are able to maintain it until thoroughly resuscitated by
the summer sun. It is owing to this wise and beneficent provision
that, in the highest latitudes, the explorer discovers some feeble
forms of vegetation. Thus, as Hartwig reminds us, Morton gathered a
crucifer at Cape Constitution, in lat. 80° 45’ N.; and Dr. Kane, on the
banks of the Minturn River, in lat. 78° 52’, met with a flower-growth
which, though fully Arctic in its type, was gaily and richly coloured—
including the purple lychnis, the starry chickweed, and the hesperis,
among the festuca and other tufted grasses.
In the tundras, the most abundant vegetable forms, next to the
lichens and mosses, are the grasses, the crucifers, the saxifrages,
the caryophyles, and the compositæ. These grow fewer and fewer
as we move towards the north, but the number of individual plants
does not decrease. Where the soil is fairly dry, we shall find an
extensive growth of lichens; in moister grounds, these are
intermingled with the well-known Iceland moss. Lichens are
everywhere, except in the sparse tracts of meadow-land lying at the
foot of sheltering hills, or in those alluvial inundated hollows which
are thickly planted with “whispering reeds” and dwarf willows.
It is not easy to trace exactly the boundary between the tundras
and the forest zone. The former descend to the south, and the latter
advances to the north, according to the climatic influences which
prevail; following the isothermic lines of uniform temperature, and not
the mathematical limits of the geographical parallels of latitude.
Where the ground undulates, and hilly ridges break the fury of the icy
blasts, the forests encroach on the stony treeless region; but the
desolate plains strike into the wooded zone in places where the
ocean-winds range with unchecked sway.

THE SWAMPS OF THE OBI.


The southernmost limit of the “barrens” is found in Labrador,
where they descend to lat. 57°; nor is this to be wondered at, when
we remember the peculiar position of that gloomy peninsula, with icy
seas washing it on three sides, and cold winds sweeping over it from
the north. On the opposite coasts of Hudson Bay they do not strike
lower than 60°; and they continue to rise as we proceed westward,
until in the Mackenzie Valley we find the tall forest growth reaching
as far north as 68° or even 70°. Thence they recede gradually, until,
on the bleak shore of Behring Sea, they do not rise higher than 65°.
Crossing into the eastern continent, we find them beginning, in the
land of the Tuski (or Tchuktche), in 63°, and from thence encroaching
gradually upon the tundras until, at the Lena, they reach as high as
71°. From the Lena to the Obi the tundras gain upon the forests, and
in the Obi Valley descend below the Arctic Circle; but from the Obi to
the Scandinavian coast the forests gain upon the tundras,
terminating, after many variations, in lat. 70°.

IN THE FOREST ZONE OF THE NORTH.

The result to which this rapid survey brings us is, that the
“tundras” or “barrens” of Europe, Asia, and America occupy an area
larger than the whole of Europe. The Siberian wilderness is more
extensive than the African Sahara or the South American Pampas.
But of still vaster area are the Arctic forest regions, which stretch in
an “almost continuous belt” through three quarters of the world, with
a breadth of from 15° to 20°—that is, of 1000 to 1400 miles. And it is
a peculiarity of these Circumpolar woods, that they are almost wholly
composed of conifers, and that frequently a wide space of ground is
covered for leagues upon leagues with a single kind of fir or pine.
“This is the forest primeval. The murmuring pines and the hemlocks,
Blended with moss, and in garments green, indistinct in the twilight,
Stand like Druids of eld, with voices sad and prophetic,
Stand like harpers hoar, with beards that rest on their bosoms.”

The American species, however, differ from the Asiatic or European.
While in the Hudson Bay territories grow the white and
black spruce,[2] the Canadian larch,[3] and the gray pine[4]; in
Scandinavia and Siberia, the Siberian fir and larch,[5] the Picea
obovata, and the Pinus cembra flourish. But both in the Old World and
the New the birch advances beyond the fir and pine, and on the
banks of the rivers and the shores of the lakes dwarf willows form
immense and almost impenetrable thickets. The Arctic forests also
include various kinds of ash, elder, and the service tree; and though
orchard trees are wholly wanting, both man and beast find a great
boon in the bilberries, cranberries, bog-berries, and the like, which
grow plentifully in many localities.
The area of the Arctic flora comprises Greenland, in the western
hemisphere, and extends considerably to the south of the Arctic
Circle, especially on the coasts, where it reaches the parallel of 60°
N. lat., and even overpasses it.
In Greenland the vegetation is more truly of an Arctic character
than even in Iceland. The valleys are covered with marsh-plants and
dingy mosses; the gloomy rocks are encrusted with lichens; while
the grasses on the meadow-lands that border the fiords and inlets
are scarcely one-fourth as varied as those of Iceland.
The flora of Iceland is approximative to that of Great Britain; yet
only one in every four of British plants is included in it. The total
number of species may be computed at eight hundred and seventy,
of which more than half blossom; this proportion is greater than
prevails in Scotland, but then only thirty-two are of woody texture.
They are scattered about in groups according as they prefer a
marshy, volcanic, dry, or marine soil. Many bloom in the immediate
vicinity of the hot springs; some not far from the brink of the basin of
the Great Geyser, where every other plant is petrified; and several
species of confervæ flourish in a spring the waters of which are hot
enough, it is said, to boil an egg.
From the nature of the Arctic forests, the reader will be prepared
to learn that they are not inhabited, like those of the Tropics, by
swarms of animals; or made musical by the songs of birds, like our
European woods. Even the echoes are silent, except when the
hoarse wind bears to them the peculiar cry of the reindeer, the howl
of the wolf, or the sharp scream of some bird of prey. Insect life,
however, is active and abundant; and our Arctic travellers have
suffered greatly from the legions of gnats which haunt their swampy
recesses.

Passing from the forest region into the treeless wastes, we may
glance once again at their strikingly impressive features. North of the
62nd parallel no corn can ripen, on account of the fatal power of the
winds which pour down from the Arctic Ocean. As we advance to the
northward, a wide-spread area of desolation stretches before us: salt
steppes, stony plains, boundless swamps, and lakes of salt and
fresh water. So terrible is the cold that the spongy soil is perpetually
frozen to the depth of some hundred feet below the surface; and the
surface itself, though not thawed until the end of June, is again ice-
bound by the middle of September. One of the most graphic
sketches with which we are acquainted of the extreme Siberian
desert is furnished by Admiral von Wrangel, who travelled during the
winter from the mouth of the Kolyma to Behring Strait.
Here, he says, endless snows and ice-crusted rocks bound the
horizon; Nature lies shrouded in all but perpetual winter; life is a
constant conflict with privation and with the terrors of cold and
hunger; the grave of Nature, containing only the bones of another
world. The people, and even the snow, throw off a continual vapour;
and this evaporation is instantly changed into millions of needles of
ice, which make a noise in the air like the sound of torn satin or the
rustle of thick silk. The reindeer take to the forest, or crowd together
for heat; and the raven alone, the dark bird of winter, still smites the
frosty air with heavy laborious wing, leaving behind him a long trail of
thin vapour to mark the course of his solitary flight. The trunks of the
thickest trees are rent with a loud clang, masses of rock are torn
from their sites, the ground in the valleys is split into a myriad
fissures, from which the waters that are underneath bubble up,
throwing off a cloud of smoke, and immediately congealing into ice.
The atmosphere grows dense; the glistening stars are dimmed. The
dogs outside the huts of the Siberians burrow in the snow, and their
howling, at intervals of six or eight hours, interrupts the general
silence of winter.
The abundance of fur-bearing animals in the less rigorous parts
of the tundras has induced the hardy Russians to colonize and build
towns on these confines of the Frozen World. Yakutsk, on the river
Lena, in 62° 1′ 30″ N., may be regarded, perhaps, as the coldest
town on the Earth. The ground is perpetually frozen to the depth of
more than 400 feet, of which three feet only are thawed in summer,
when Fahrenheit’s thermometer frequently marks 77° in the shade.
Yet in winter the rigour of the climate is so extreme that mercury is
constantly frozen for two and occasionally even for three months.

From the data set forth in the preceding pages, the reader will
conclude that, as indeed results from physical laws, the line of
perpetual snow will be found to descend lower and lower on
advancing to the Pole. By the line of perpetual snow we mean, of
course, the limit above which a continual frost endures. Now, this
limit varies according to climate. The lower the temperature, the
lower the snow-line; the higher the temperature, the higher the snow-
line. In the Tropics it does not sink below the summits of the loftiest
mountains. Thus, at 1° from the Equator, where the mean
temperature at the sea-level is 84°.2, the snow-line must be sought
at the elevation of 15,203 feet; in 51° 30’ lat., the latitude of London,
it is usually found at about 5900 feet; in lat. 80°, where the mean
temperature is 33°.6, it sinks to 457 feet. These figures, however,
represent its normal elevations; but temperature, as we all know, is
greatly affected by local circumstances, and therefore the perpetual
snow-line varies greatly in height. Owing to causes already
explained, the snow-line in the Circumpolar Regions sinks to a very
low level; and, therefore, many mountainous regions or elevated
table-lands, such as Spitzbergen, Greenland, and Novaia Zemlaia,
which, in a more temperate climate, would bloom with emerald
slopes and waving woods, are covered with huge glaciers and fields
of ice, with apparently interminable reaches of untrodden snow.
It should be noted, however, that nowhere does the perpetual
snow-line descend to the water’s edge; nowhere has the spell of
winter absolutely crushed the life out of all vegetation. Lichens and
grasses, on which the reindeer gains its hardy subsistence, are
found near lat. 80°; even on the awful plains of Melville Island the
snow melts at midsummer; and the deserts of New Siberia afford
food for considerable numbers of lemmings. As far as man has
reached to the north, says a popular and accurate writer, vegetation,
when fostered by a sheltered situation and the refraction of solar
heat from the rocks, has everywhere been found to rise to a
considerable altitude above the level of the sea; and should there be
land at the North Pole, we may reasonably suppose that it is
destitute neither of animal nor vegetable life. It would be quite wrong
to conclude that the cold of winter invariably increases as we
approach the Pole, the temperature of a land being controlled by
many other causes besides its latitude. Even in the most northern
regions visited by man, the influence of the sea, particularly where
favoured by warm currents, considerably mitigates the severity of the
winter, while at the same time it diminishes the heat of summer. On
the other hand, the large continental tracts of Asia or America that
slope towards the Pole, possess a more rigorous winter and a fiercer
summer than many coast lands or islands situated far nearer to the
Pole. For example: the western shores of Novaia Zemlaia, fronting a
wide expanse of sea, have an average winter temperature of only
-4°, and a mean summer temperature which rises very little above
the freezing-point of water (+ 36° 30’); while Yakutsk, situated in the
centre of Siberia, and 20° nearer to the Equator, has a winter
temperature of -36° 6’, and a summer of + 66° 6’.
But though such are the physical conditions of the Polar Regions,
it must not be supposed that Nature wears only a severe and
repellent aspect. There is something beautiful in the vast expanse of
snowy plain when seen by the light of a cloudless moon; something
majestic in the colossal glaciers which fill up the remote Arctic
valleys; something picturesque in the numerous icebergs which
grandly sail down the dark Polar waters; something mysterious and
wonderful in the coruscations of the Aurora, which illuminates the
darkness of the winter nights with the glory of the celestial fires. The
law of compensation prevails in the far North, as in the glowing and
exuberant regions of the Tropics.
