Python Pandas Tutorial

About the Tutorial
In this tutorial, we will learn the various features of Python Pandas and how to use them
in practice.
Audience
This tutorial has been prepared for those who seek to learn the basics and the various functions of Pandas. It will be particularly useful for people working on data cleansing and analysis. After completing this tutorial, you will have a moderate level of expertise from which you can progress to more advanced topics.
Prerequisites
You should have a basic understanding of computer programming terminology. A basic understanding of any programming language is a plus. The Pandas library builds on much of the functionality of NumPy, so it is suggested that you go through our tutorial on NumPy before proceeding with this tutorial. You can access it from:
NumPy Tutorial
Disclaimer & Copyright

All the content and graphics published in this e-book are the property of Tutorials Point (I) Pvt. Ltd. The user of this e-book is prohibited from reusing, retaining, copying, distributing or republishing any contents or a part of the contents of this e-book in any manner without the written consent of the publisher.
We strive to update the contents of our website and tutorials as timely and as precisely as possible; however, the contents may contain inaccuracies or errors. Tutorials Point (I) Pvt.
Ltd. provides no guarantee regarding the accuracy, timeliness or completeness of our
website or its contents including this tutorial. If you discover any errors on our website or
in this tutorial, please notify us at contact@tutorialspoint.com.
Table of Contents
About the Tutorial
Audience
Prerequisites
Disclaimer & Copyright
Table of Contents
4. Pandas — Series
    pandas.Series
    Create an Empty Series
    Create a Series from ndarray
    Create a Series from dict
    Create a Series from Scalar
    Accessing Data from Series with Position
    Retrieve Data Using Label (Index)
1. Pandas – Introduction
In 2008, developer Wes McKinney started developing pandas when in need of a high-performance, flexible tool for data analysis.
Prior to Pandas, Python was mainly used for data munging and preparation; it contributed very little to data analysis. Pandas solved this problem. Using Pandas, we can accomplish five typical steps in the processing and analysis of data, regardless of the origin of the data: load, prepare, manipulate, model, and analyze.
Python with Pandas is used in a wide range of fields, spanning academic and commercial domains such as finance, economics, statistics, and analytics.
2. Pandas – Environment Setup
The standard Python distribution doesn't come bundled with the Pandas module. A lightweight way to install it is to use the popular Python package installer, pip:
pip install pandas
If you install the Anaconda Python distribution, Pandas will be installed by default. Platform-specific options are given below.
Windows
Anaconda (from https://www.continuum.io) is a free Python distribution for the SciPy stack. It is also available for Linux and Mac.
Python (x,y) is a free Python distribution with the SciPy stack and the Spyder IDE for Windows OS. (Downloadable from http://python-xy.github.io/)
Linux
The package managers of the respective Linux distributions are used to install one or more packages from the SciPy stack.
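Whichever installation route you follow, a quick way to verify the setup is to import pandas and print its version string:

```python
# Confirm that pandas is importable and report which version is installed.
import pandas as pd

print(pd.__version__)
```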
3. Pandas – Introduction to Data Structures

Pandas deals with the following three data structures:
Series
DataFrame
Panel
These data structures are built on top of NumPy arrays, which means they are fast. Building and handling arrays of two or more dimensions is a tedious task, and the burden is placed on the user to consider the orientation of the data set when writing functions. With the Pandas data structures, this mental effort is reduced.
For example, with tabular data (DataFrame) it is more semantically helpful to think of
the index (the rows) and the columns rather than axis 0 and axis 1.
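As a small illustration of this idea, the same reduction can be requested either by axis number or by axis name; this sketch assumes a simple two-column DataFrame:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [10, 20]})

# axis 0 and axis 'index' are the same thing,
# as are axis 1 and axis 'columns'.
col_sums = df.sum(axis=0)            # sum down each column
col_sums_named = df.sum(axis='index')
row_sums = df.sum(axis='columns')    # sum across each row

print(col_sums)   # a: 3, b: 30
print(row_sums)   # 0: 11, 1: 22
```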
Mutability
All Pandas data structures are value mutable (they can be changed), and all except Series are size mutable. Series is size immutable.
Note: DataFrame is the most widely used and one of the most important data structures. Panel is used far less, and was deprecated and later removed in recent versions of pandas.
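A brief sketch of the mutability distinction: an element of a Series can be changed in place, while growing it produces a new object (here via pd.concat):

```python
import pandas as pd

s = pd.Series([10, 23, 56])

# Value mutable: an existing element can be changed in place.
s[1] = 99
print(s[1])  # 99

# Size "immutable" in the sense that growing a Series yields a
# new object: pd.concat returns a fresh Series, leaving s as-is.
longer = pd.concat([s, pd.Series([77])], ignore_index=True)
print(len(s), len(longer))  # 3 4
```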
Series
Series is a one-dimensional array-like structure with homogeneous data. For example, the following series is a collection of the integers 10, 23, 56, …
10 23 56 17 52 61 73 90 26 72
Key Points
Homogeneous data
Size Immutable
Values of Data Mutable
DataFrame
DataFrame is a two-dimensional array with heterogeneous data. For example, the table below represents the data of a sales team of an organization along with their overall performance ratings. The data is represented in rows and columns: each column represents an attribute and each row represents a person.
Column Type
Name String
Age Integer
Gender String
Rating Float
Key Points
Heterogeneous data
Size Mutable
Data Mutable
Panel
Panel is a three-dimensional data structure with heterogeneous data. It is hard to represent a panel graphically, but a panel can be illustrated as a container of DataFrames.
Key Points
Heterogeneous data
Size Mutable
Data Mutable
4. Pandas — Series
Series is a one-dimensional labeled array capable of holding data of any type (integer,
string, float, python objects, etc.). The axis labels are collectively called index.
pandas.Series
A pandas Series can be created using the following constructor:

pandas.Series(data, index, dtype, copy)

The parameters of the constructor are as follows:
1. data: data takes various forms like ndarray, list, constants.
2. index: index values must be unique and hashable, and of the same length as data. Defaults to np.arange(n) if no index is passed.
3. dtype: dtype is for the data type. If None, the data type will be inferred.
4. copy: copy data. Default False.

A series can be created using various inputs like:
Array
Dict
Scalar value or constant
Create an Empty Series
A basic series, which can be created, is an empty series.
Example
#import the pandas library and aliasing as pd
import pandas as pd
s = pd.Series()
print(s)
Create a Series from ndarray
Example 1
#import the pandas library and aliasing as pd
import pandas as pd
import numpy as np
data = np.array(['a','b','c','d'])
s = pd.Series(data)
print(s)
0 a
1 b
2 c
3 d
dtype: object
We did not pass any index, so by default, it assigned the indexes ranging from 0 to
len(data)-1, i.e., 0 to 3.
Example 2
#import the pandas library and aliasing as pd
import pandas as pd
import numpy as np
data = np.array(['a','b','c','d'])
s = pd.Series(data, index=[100,101,102,103])
print(s)
100 a
101 b
102 c
103 d
dtype: object
We passed the index values here. Now we can see the customized indexed values in the
output.
Create a Series from dict
A dict can be passed as input. If no index is specified, the dictionary keys are taken in order to construct the index.
Example 1
#import the pandas library and aliasing as pd
import pandas as pd
import numpy as np
data = {'a' : 0., 'b' : 1., 'c' : 2.}
s = pd.Series(data)
print(s)
a 0.0
b 1.0
c 2.0
dtype: float64
Example 2
#import the pandas library and aliasing as pd
import pandas as pd
import numpy as np
data = {'a' : 0., 'b' : 1., 'c' : 2.}
s = pd.Series(data, index=['b', 'c', 'd', 'a'])
print(s)
b 1.0
c 2.0
d NaN
a 0.0
dtype: float64
Observe: the index order is preserved and the missing element is filled with NaN (Not a Number).
Create a Series from Scalar
If data is a scalar value, an index must be provided. The value will be repeated to match the length of the index.
Example
import pandas as pd
s = pd.Series(5, index=[0, 1, 2, 3])
print(s)
Its output is as follows:
0 5
1 5
2 5
3 5
dtype: int64
Accessing Data from Series with Position
Data in the series can be accessed similar to that in an ndarray.
Example 1
Retrieve the first element. As we already know, counting starts from zero for the array, which means the first element is stored at the zeroth position, and so on.
import pandas as pd
s = pd.Series([1,2,3,4,5], index=['a','b','c','d','e'])
#retrieve the first element
print(s[0])
The output is as follows:
1
Example 2
Retrieve the first three elements in the Series. If a : is inserted in front of it, all items from that index onwards will be extracted. If two parameters (with : between them) are used, the items between the two indexes (not including the stop index) are extracted.
import pandas as pd
s = pd.Series([1,2,3,4,5], index=['a','b','c','d','e'])
#retrieve the first three elements
print(s[:3])
a 1
b 2
c 3
dtype: int64
Example 3
Retrieve the last three elements.
import pandas as pd
s = pd.Series([1,2,3,4,5], index=['a','b','c','d','e'])
#retrieve the last three elements
print(s[-3:])
c 3
d 4
e 5
dtype: int64
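One detail worth noting alongside these positional slices: when you slice with index labels instead of positions, the stop label is included. A short sketch:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e'])

# Positional slice: the stop index is excluded -> a, b, c
first_three = s[:3]
print(first_three)

# Label-based slice: the stop label is INCLUDED -> a, b, c, d
through_d = s['a':'d']
print(through_d)
```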
Retrieve Data Using Label (Index)
A Series is like a fixed-size dict in that you can get and set values by index label.
Example 1
Retrieve a single element using the index label value.
import pandas as pd
s = pd.Series([1,2,3,4,5], index=['a','b','c','d','e'])
#retrieve a single element
print(s['a'])
The output is as follows:
1
Example 2
Retrieve multiple elements using a list of index label values.
import pandas as pd
s = pd.Series([1,2,3,4,5], index=['a','b','c','d','e'])
#retrieve multiple elements
print(s[['a','c','d']])
a 1
c 3
d 4
dtype: int64
Example 3
If a label is not contained in the index, an exception is raised.
import pandas as pd
s = pd.Series([1,2,3,4,5], index=['a','b','c','d','e'])
#retrieve an element not in the index
print(s['f'])
…
KeyError: 'f'
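If raising is not desired, the get() method returns None, or a default you supply, for a missing label. A short sketch:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e'])

# get() returns None (or a supplied default) instead of raising KeyError.
none_val = s.get('f')
fallback = s.get('f', default=0)
print(none_val)  # None
print(fallback)  # 0
```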
5. Pandas – DataFrame
A DataFrame is a two-dimensional data structure, i.e., the data is aligned in a tabular fashion in rows and columns.
Features of DataFrame
Potentially columns are of different types
Size – Mutable
Labeled axes (rows and columns)
Can perform arithmetic operations on rows and columns
Structure
Let us assume that we are creating a DataFrame with students' data.
pandas.DataFrame
A pandas DataFrame can be created using the following constructor:

pandas.DataFrame(data, index, columns, dtype, copy)

The parameters of the constructor are as follows:
1. data: data takes various forms like ndarray, Series, map, lists, dict, constants and also another DataFrame.
2. index: for the row labels, the index to be used for the resulting frame. Optional.
3. columns: for column labels, the optional default is np.arange(n). This is only true if no column index is passed.
4. dtype: data type of each column.
5. copy: this argument is used for copying the data. Default is False.
Create DataFrame
A pandas DataFrame can be created using various inputs like:
Lists
dict
Series
Numpy ndarrays
Another DataFrame
In the subsequent sections of this chapter, we will see how to create a DataFrame using
these inputs.
Create an Empty DataFrame
A basic DataFrame, which can be created, is an empty DataFrame.
Example
#import the pandas library and aliasing as pd
import pandas as pd
df = pd.DataFrame()
print(df)
Empty DataFrame
Columns: []
Index: []
Create a DataFrame from Lists
The DataFrame can be created using a single list or a list of lists.
Example 1
import pandas as pd
data = [1,2,3,4,5]
df = pd.DataFrame(data)
print(df)
0
0 1
1 2
2 3
3 4
4 5
Example 2
import pandas as pd
data = [['Alex',10],['Bob',12],['Clarke',13]]
df = pd.DataFrame(data, columns=['Name','Age'])
print(df)
Name Age
0 Alex 10
1 Bob 12
2 Clarke 13
Example 3
import pandas as pd
data = [['Alex',10],['Bob',12],['Clarke',13]]
df = pd.DataFrame(data, columns=['Name','Age'], dtype=float)
print(df)
Name Age
0 Alex 10.0
1 Bob 12.0
2 Clarke 13.0
Note: Observe, the dtype parameter changes the type of the Age column to floating point.
Create a DataFrame from a Dict of ndarrays / Lists
All the ndarrays must be of the same length. If an index is passed, then the length of the index should be equal to the length of the arrays. If no index is passed, then by default the index will be range(n), where n is the array length.
Example 1
import pandas as pd
data = {'Name':['Tom', 'Jack', 'Steve', 'Ricky'],
'Age':[28,34,29,42]}
df = pd.DataFrame(data)
print(df)
Age Name
0 28 Tom
1 34 Jack
2 29 Steve
3 42 Ricky
Note: Observe the values 0, 1, 2, 3. They are the default index assigned to each row using the function range(n).
Example 2
Let us now create an indexed DataFrame using arrays.
import pandas as pd
data = {'Name':['Tom', 'Jack', 'Steve', 'Ricky'],
'Age':[28,34,29,42]}
df = pd.DataFrame(data, index=['rank1','rank2','rank3','rank4'])
print(df)
Age Name
rank1 28 Tom
rank2 34 Jack
rank3 29 Steve
rank4 42 Ricky
Create a DataFrame from a List of Dicts
A list of dictionaries can be passed as input data to create a DataFrame. The dictionary keys are by default taken as column names.
Example 1
The following example shows how to create a DataFrame by passing a list of dictionaries.
import pandas as pd
data = [{'a': 1, 'b': 2},
{'a': 5, 'b': 10, 'c': 20}]
df = pd.DataFrame(data)
print(df)
Its output is as follows:
a b c
0 1 2 NaN
1 5 10 20.0
Example 2
The following example shows how to create a DataFrame by passing a list of dictionaries and the row indices.
import pandas as pd
data = [{'a': 1, 'b': 2},
{'a': 5, 'b': 10, 'c': 20}]
df = pd.DataFrame(data, index=['first', 'second'])
print(df)
a b c
first 1 2 NaN
second 5 10 20.0
Example 3
The following example shows how to create a DataFrame with a list of dictionaries, row indices, and column indices.
import pandas as pd
data = [{'a': 1, 'b': 2},
{'a': 5, 'b': 10, 'c': 20}]
#With two column indices, values same as dictionary keys
df1 = pd.DataFrame(data, index=['first', 'second'], columns=['a', 'b'])
#With two column indices with one index with other name
df2 = pd.DataFrame(data, index=['first', 'second'], columns=['a', 'b1'])
print(df1)
print(df2)
#df1 output
a b
first 1 2
second 5 10
#df2 output
a b1
first 1 NaN
second 5 NaN
Note: Observe, df2 is created with a column index ('b1') that is not among the dictionary keys; thus, NaN values are filled in for that column. Whereas df1 is created with column indices that match the dictionary keys, so no NaN values are appended.
Create a DataFrame from a Dict of Series
A dictionary of Series can be passed to form a DataFrame. The resultant index is the union of all the series indexes passed.
Example
import pandas as pd
d = {'one' : pd.Series([1, 2, 3], index=['a', 'b', 'c']),
'two' : pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])}
df = pd.DataFrame(d)
print(df)
one two
a 1.0 1
b 2.0 2
c 3.0 3
d NaN 4
Note: Observe that for the series one, there is no label 'd' passed; therefore, in the result, NaN is filled in for the d label.
Let us now understand column selection, addition, and deletion through examples.
Column Selection
We will understand this by selecting a column from the DataFrame.
Example
import pandas as pd
d = {'one' : pd.Series([1, 2, 3], index=['a', 'b', 'c']),
'two' : pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])}
df = pd.DataFrame(d)
print(df['one'])
a 1.0
b 2.0
c 3.0
d NaN
Name: one, dtype: float64
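Passing a list of column labels (rather than a single label) selects several columns at once and returns a DataFrame instead of a Series. A short sketch reusing the same dictionary d:

```python
import pandas as pd

d = {'one': pd.Series([1, 2, 3], index=['a', 'b', 'c']),
     'two': pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])}
df = pd.DataFrame(d)

# A single label returns a Series; a list of labels returns a DataFrame.
single = df['one']
several = df[['one', 'two']]
print(type(single).__name__)   # Series
print(type(several).__name__)  # DataFrame
```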
Column Addition
We will understand this by adding a new column to an existing data frame.
Example
import pandas as pd
d = {'one' : pd.Series([1, 2, 3], index=['a', 'b', 'c']),
'two' : pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])}
df = pd.DataFrame(d)
# Adding a new column by passing a Series
df['three'] = pd.Series([10, 20, 30], index=['a', 'b', 'c'])
print(df)
Column Deletion
Columns can be deleted or popped; let us take an example to understand how.
Example
# Using the previous DataFrame, we will delete a column
# using the del function
import pandas as pd
d = {'one' : pd.Series([1, 2, 3], index=['a', 'b', 'c']),
'two' : pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd']),
'three' : pd.Series([10, 20, 30], index=['a', 'b', 'c'])}
df = pd.DataFrame(d)
print("Our dataframe is:")
print(df)
# using the del function
del df['one']
print(df)
# using the pop function
df.pop('two')
print(df)
Row Selection, Addition, and Deletion
We will now understand row selection, addition and deletion through examples. Let us begin with the concept of selection.
Selection by Label
Rows can be selected by passing a row label to the loc function.
import pandas as pd
d = {'one' : pd.Series([1, 2, 3], index=['a', 'b', 'c']),
'two' : pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])}
df = pd.DataFrame(d)
print(df.loc['b'])
one 2.0
two 2.0
Name: b, dtype: float64
The result is a series with labels as column names of the DataFrame. And, the Name of
the series is the label with which it is retrieved.
Selection by Integer Location
Rows can be selected by passing an integer location to the iloc function.
import pandas as pd
d = {'one' : pd.Series([1, 2, 3], index=['a', 'b', 'c']),
'two' : pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])}
df = pd.DataFrame(d)
print(df.iloc[2])
one 3.0
two 3.0
Name: c, dtype: float64
Slice Rows
Multiple rows can be selected using ‘ : ’ operator.
import pandas as pd
d = {'one' : pd.Series([1, 2, 3], index=['a', 'b', 'c']),
'two' : pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])}
df = pd.DataFrame(d)
print(df[2:4])
one two
c 3.0 3
d NaN 4
Addition of Rows
Add new rows to a DataFrame using the append function. This function will append the rows at the end.
import pandas as pd
df = pd.DataFrame([[1, 2], [3, 4]], columns=['a', 'b'])
df2 = pd.DataFrame([[5, 6], [7, 8]], columns=['a', 'b'])
df = df.append(df2)
print(df)
a b
0 1 2
1 3 4
0 5 6
1 7 8
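Note that the append() method shown above was deprecated and then removed in pandas 2.0; pd.concat achieves the same row stacking in current versions. A minimal sketch:

```python
import pandas as pd

df = pd.DataFrame([[1, 2], [3, 4]], columns=['a', 'b'])
df2 = pd.DataFrame([[5, 6], [7, 8]], columns=['a', 'b'])

# pd.concat stacks the rows of df2 under df, keeping each frame's
# original row labels (hence the repeated 0 and 1 in the index).
combined = pd.concat([df, df2])
print(combined)
```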
Deletion of Rows
Use an index label to delete or drop rows from a DataFrame. If the label is duplicated, then multiple rows will be dropped.
If you observe, in the above example, the labels are duplicated. Let us drop a label and see how many rows get dropped.
import pandas as pd
df = pd.DataFrame([[1, 2], [3, 4]], columns=['a', 'b'])
df2 = pd.DataFrame([[5, 6], [7, 8]], columns=['a', 'b'])
df = df.append(df2)
# Drop rows with label 0
df = df.drop(0)
print(df)
a b
1 3 4
1 7 8
In the above example, two rows were dropped because those two contained the same label 0.
6. Pandas – Panel
A panel is a 3D container of data. The term Panel data is derived from econometrics and is partially responsible for the name pandas: pan(el)-da(ta)-s. (Note: Panel was deprecated and removed in recent versions of pandas; it is covered here for completeness.)
The names for the 3 axes are intended to give some semantic meaning to describing operations involving panel data. They are:
items: axis 0; each item corresponds to a DataFrame contained inside.
major_axis: axis 1; it is the index (rows) of each of the DataFrames.
minor_axis: axis 2; it is the columns of each of the DataFrames.
pandas.Panel()
A Panel can be created using the following constructor:

pandas.Panel(data, items, major_axis, minor_axis, dtype, copy)

The parameters of the constructor are as follows:
data: takes various forms like ndarray, Series, map, lists, dict, constants and also another DataFrame
items: axis=0
major_axis: axis=1
minor_axis: axis=2
Create Panel
A Panel can be created using multiple ways like:
From ndarrays
From dict of DataFrames
From 3D ndarray
# creating a panel from a 3D ndarray
import pandas as pd
import numpy as np
data = np.random.rand(2,4,5)
p = pd.Panel(data)
print(p)
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 4 (major_axis) x 5 (minor_axis)
Items axis: 0 to 1
Major_axis axis: 0 to 3
Minor_axis axis: 0 to 4
From dict of DataFrame objects
# creating a panel from a dict of DataFrame objects
import pandas as pd
import numpy as np
data = {'Item1' : pd.DataFrame(np.random.randn(4, 3)),
'Item2' : pd.DataFrame(np.random.randn(4, 2))}
p = pd.Panel(data)
print(p)
Its output is as follows:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 4 (major_axis) x 3 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 0 to 3
Minor_axis axis: 0 to 2
Create an Empty Panel
An empty panel can be created using the Panel constructor with no arguments:
import pandas as pd
p = pd.Panel()
print(p)
Its output is as follows:
<class 'pandas.core.panel.Panel'>
Dimensions: 0 (items) x 0 (major_axis) x 0 (minor_axis)
Items axis: None
Major_axis axis: None
Minor_axis axis: None
Note: Observe the dimensions of the empty panel and the panels above; all the objects are different.
Selecting the Data from Panel
Select the data from the panel using:
Items
Major_axis
Minor_axis
Using Items
# creating a panel from a dict of DataFrame objects
import pandas as pd
import numpy as np
data = {'Item1' : pd.DataFrame(np.random.randn(4, 3)),
'Item2' : pd.DataFrame(np.random.randn(4, 2))}
p = pd.Panel(data)
print(p['Item1'])
0 1 2
0 0.488224 -0.128637 0.930817
1 0.417497 0.896681 0.576657
2 -2.775266 0.571668 0.290082
3 -0.400538 -0.144234 1.110535
We have two items, and we retrieved Item1. The result is a DataFrame with 4 rows and 3 columns, which correspond to the Major_axis and Minor_axis dimensions.
Using major_axis
Data can be accessed using the method panel.major_xs(index).
import pandas as pd
import numpy as np
data = {'Item1' : pd.DataFrame(np.random.randn(4, 3)),
'Item2' : pd.DataFrame(np.random.randn(4, 2))}
p = pd.Panel(data)
print(p.major_xs(1))
Its output is as follows:
Item1 Item2
0 0.417497 0.748412
1 0.896681 -0.557322
2 0.576657 NaN
Using minor_axis
Data can be accessed using the method panel.minor_xs(index).
import pandas as pd
import numpy as np
data = {'Item1' : pd.DataFrame(np.random.randn(4, 3)),
'Item2' : pd.DataFrame(np.random.randn(4, 2))}
p = pd.Panel(data)
print(p.minor_xs(1))
Its output is as follows:
Item1 Item2
0 -0.128637 -1.047032
1 0.896681 -0.557322
2 0.571668 0.431953
3 -0.144234 1.302466
7. Pandas – Basic Functionality
By now, we have learnt about the three Pandas data structures and how to create them. We will mainly focus on DataFrame objects because of their importance in real-time data processing, and also discuss a few other data structures.
Series Basic Functionality
Let us now create a Series and see how the attributes and methods discussed below operate: axes, dtype, empty, ndim, size, values, head(), and tail().
Example
import pandas as pd
import numpy as np
#Create a series with 4 random numbers
s = pd.Series(np.random.randn(4))
print(s)
Its output is as follows:
0 0.967853
1 -0.148368
2 -1.395906
3 -1.758394
dtype: float64
axes
Returns the list of the labels of the series.
import pandas as pd
import numpy as np
s = pd.Series(np.random.randn(4))
print("The axes are:")
print(s.axes)
The above result is a compact format of the row labels, i.e., the values 0 through 3.
empty
Returns a Boolean value saying whether the object is empty or not. True indicates that the object is empty.
import pandas as pd
import numpy as np
s = pd.Series(np.random.randn(4))
print("Is the object empty?")
print(s.empty)
ndim
Returns the number of dimensions of the object. By definition, a Series is a 1D data structure, so it returns 1.
import pandas as pd
import numpy as np
s = pd.Series(np.random.randn(4))
print(s)
print("The dimensions of the object:")
print(s.ndim)
0 0.175898
1 0.166197
2 -0.609712
3 -1.377000
dtype: float64
size
Returns the size (length) of the series.
import pandas as pd
import numpy as np
s = pd.Series(np.random.randn(2))
print(s)
print("The size of the object:")
print(s.size)
0 3.078058
1 -1.207803
dtype: float64
values
Returns the actual data in the series as an array.
import pandas as pd
import numpy as np
s = pd.Series(np.random.randn(4))
print(s)
print("The actual data series is:")
print(s.values)
0 1.787373
1 -0.605159
2 0.180477
3 -0.140922
dtype: float64
head() and tail()
To view a small sample of a Series object, use the head() or tail() method.
head() returns the first n rows (observe the index values). The default number of elements to display is five, but you may pass a custom number.
import pandas as pd
import numpy as np
s = pd.Series(np.random.randn(4))
print("The original series is:")
print(s)
print("The first two rows of the data series:")
print(s.head(2))
tail() returns the last n rows (observe the index values). The default number of elements to display is five, but you may pass a custom number.
import pandas as pd
import numpy as np
s = pd.Series(np.random.randn(4))
print("The original series is:")
print(s)
print("The last two rows of the data series:")
print(s.tail(2))
DataFrame Basic Functionality
The table below lists the important attributes and methods that help with DataFrame basic functionality.
S.No | Attribute or Method | Description
1 | T | Transposes rows and columns.
2 | axes | Returns a list with the row axis labels and column axis labels as the only members.
3 | dtypes | Returns the data type of each column.
4 | empty | True if the DataFrame is entirely empty (any of the axes are of length 0).
5 | ndim | Number of axes / array dimensions.
6 | shape | Returns a tuple representing the dimensionality of the DataFrame.
7 | size | Number of elements in the DataFrame.
8 | values | NumPy representation of the DataFrame.
9 | head() | Returns the first n rows.
10 | tail() | Returns the last n rows.
Let us now create a DataFrame and see how the above-mentioned attributes operate.
Example
import pandas as pd
import numpy as np
#Create a dictionary of series
d = {'Name':pd.Series(['Tom','James','Ricky','Vin','Steve','Smith','Jack']),
'Age':pd.Series([25,26,25,7,30,29,23]),
'Rating':pd.Series([4.23,3.24,3.98,2.56,3.20,4.6,3.8])}
#Create a DataFrame
df = pd.DataFrame(d)
print("Our data series is:")
print(df)
In the remaining examples of this chapter, d refers to this same dictionary of series.
T (Transpose)
Returns the transpose of the DataFrame. The rows and columns will interchange.
import pandas as pd
import numpy as np
# Create a DataFrame (d is the dictionary of series defined above)
df = pd.DataFrame(d)
print("The transpose of the data series is:")
print(df.T)
axes
Returns the list of row axis labels and column axis labels.
import pandas as pd
import numpy as np
#Create a DataFrame
df = pd.DataFrame(d)
print("Row axis labels and column axis labels are:")
print(df.axes)
dtypes
Returns the data type of each column.
import pandas as pd
import numpy as np
#Create a DataFrame
df = pd.DataFrame(d)
print("The data types of each column are:")
print(df.dtypes)
empty
Returns the Boolean value saying whether the Object is empty or not; True indicates that
the object is empty.
import pandas as pd
import numpy as np
#Create a DataFrame
df = pd.DataFrame(d)
print("Is the object empty?")
print(df.empty)
ndim
Returns the number of dimensions of the object. By definition, DataFrame is a 2D object.
import pandas as pd
import numpy as np
#Create a DataFrame
df = pd.DataFrame(d)
print("Our object is:")
print(df)
print("The dimension of the object is:")
print(df.ndim)
shape
Returns a tuple representing the dimensionality of the DataFrame: a tuple (a, b), where a represents the number of rows and b represents the number of columns.
import pandas as pd
import numpy as np
#Create a DataFrame
df = pd.DataFrame(d)
print("Our object is:")
print(df)
print("The shape of the object is:")
print(df.shape)
size
Returns the number of elements in the DataFrame.
import pandas as pd
import numpy as np
#Create a DataFrame
df = pd.DataFrame(d)
print("Our object is:")
print(df)
print("The total number of elements in our object is:")
print(df.size)
values
Returns the actual data in the DataFrame as an ndarray.
import pandas as pd
import numpy as np
#Create a DataFrame
df = pd.DataFrame(d)
print("Our object is:")
print(df)
print("The actual data in our data frame is:")
print(df.values)
head() and tail()
head() returns the first n rows (observe the index values). The default number of elements to display is five, but you may pass a custom number.
import pandas as pd
import numpy as np
#Create a DataFrame
df = pd.DataFrame(d)
print("Our data frame is:")
print(df)
print("The first two rows of the data frame are:")
print(df.head(2))
tail() returns the last n rows (observe the index values). The default number of elements to display is five, but you may pass a custom number.
import pandas as pd
import numpy as np
#Create a DataFrame
df = pd.DataFrame(d)
print("Our data frame is:")
print(df)
print("The last two rows of the data frame are:")
print(df.tail(2))
8. Pandas – Descriptive Statistics
A large number of methods collectively compute descriptive statistics and other related operations on DataFrame. Most of these are aggregations like sum() and mean(), but some of them, like cumsum(), produce an object of the same size. Generally speaking, these methods take an axis argument, just like ndarray.{sum, std, ...}, and the axis can be specified by name or integer.
Let us create a DataFrame and use this object throughout this chapter for all the
operations.
Example
import pandas as pd
import numpy as np
#Create a dictionary of series
d = {'Name':pd.Series(['Tom','James','Ricky','Vin','Steve','Smith','Jack',
'Lee','David','Gasper','Betina','Andres']),
'Age':pd.Series([25,26,25,23,30,29,23,34,40,30,51,46]),
'Rating':pd.Series([4.23,3.24,3.98,2.56,3.20,4.6,3.8,3.78,2.98,4.80,4.10,3.65])}
#Create a DataFrame
df = pd.DataFrame(d)
print(df)
Its output is as follows:
    Age    Name  Rating
0    25     Tom    4.23
1    26   James    3.24
2    25   Ricky    3.98
3    23     Vin    2.56
4    30   Steve    3.20
5    29   Smith    4.60
6    23    Jack    3.80
7    34     Lee    3.78
8    40   David    2.98
9    30  Gasper    4.80
10   51  Betina    4.10
11   46  Andres    3.65
In the remaining examples of this chapter, d refers to this same dictionary of series.
sum()
Returns the sum of the values for the requested axis. By default, the axis is index (axis=0).
import pandas as pd
import numpy as np
#Create a DataFrame
df = pd.DataFrame(d)
print(df.sum())
Age 382
Name TomJamesRickyVinSteveSmithJackLeeDavidGasperBe...
Rating 44.92
dtype: object
axis=1
This syntax will give the output as shown below.
import pandas as pd
import numpy as np
#Create a DataFrame
df = pd.DataFrame(d)
# numeric_only is required in recent pandas when the frame has string columns
print(df.sum(axis=1, numeric_only=True))
0 29.23
1 29.24
2 28.98
3 25.56
4 33.20
5 33.60
6 26.80
7 37.78
8 42.98
9 34.80
10 55.10
11 49.65
dtype: float64
mean()
Returns the average value.
import pandas as pd
import numpy as np
#Create a DataFrame
df = pd.DataFrame(d)
# numeric_only restricts the calculation to the numeric columns
print(df.mean(numeric_only=True))
Age 31.833333
Rating 3.743333
dtype: float64
std()
Returns the sample standard deviation (with Bessel's correction) of the numerical columns.
import pandas as pd
import numpy as np
#Create a DataFrame
df = pd.DataFrame(d)
print(df.std(numeric_only=True))
Age 9.232682
Rating 0.661628
dtype: float64
Note: Since DataFrame is a heterogeneous data structure, generic operations don't work with all functions.
Functions like sum() and cumsum() work with both numeric and character (or) string data elements without any error. Though in practice, character aggregations are rarely used, these functions do not throw any exception.
Functions like abs() and cumprod() throw an exception when the DataFrame contains character or string data, because such operations cannot be performed on them.
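Related to this note: in recent pandas versions, aggregations such as mean() and std() raise an error on mixed-type DataFrames unless you restrict them to numeric columns. A small sketch, assuming a recent pandas version:

```python
import pandas as pd

df = pd.DataFrame({'Name': ['Tom', 'James', 'Ricky'],
                   'Age': [25, 26, 25]})

# numeric_only=True explicitly restricts the aggregation to
# numeric columns, skipping the string column 'Name'.
means = df.mean(numeric_only=True)
sums = df.sum(numeric_only=True)
print(means)  # Age 25.333333
print(sums)   # Age 76
```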
Summarizing Data
The describe() function computes a summary of statistics pertaining to the DataFrame
columns.
import pandas as pd
import numpy as np
#Create a DataFrame
df = pd.DataFrame(d)
print(df.describe())
Age Rating
count 12.000000 12.000000
mean 31.833333 3.743333
std 9.232682 0.661628
min 23.000000 2.560000
25% 25.000000 3.230000
50% 29.500000 3.790000
75% 35.500000 4.132500
max 51.000000 4.800000
This function gives the mean, std and IQR values. It excludes the character columns and gives a summary about the numeric columns. 'include' is the argument used to pass the necessary information regarding which columns need to be considered for summarizing. It takes a list of values; by default, 'number'.
object - summarizes String columns
number - summarizes Numeric columns
all - summarizes all columns together (should not be passed as a list value)
Now, use the following statement in the program and check the output:
import pandas as pd
import numpy as np
#Create a DataFrame
df = pd.DataFrame(d)
print(df.describe(include=['object']))
Name
count 12
unique 12
top Ricky
freq 1
import pandas as pd
import numpy as np
#Create a DataFrame
df = pd.DataFrame(d)
print(df.describe(include='all'))
9. Pandas – Function Application
To apply your own or another library's functions to Pandas objects, you should be aware of three important methods, discussed below: pipe() for table-wise application, apply() for row- or column-wise application, and applymap() for element-wise application. The appropriate method to use depends on whether your function expects to operate on an entire DataFrame, row- or column-wise, or element-wise.
Table-wise Function Application: pipe()
For example, add a value 2 to all the elements in the DataFrame. Then,
adder function
The adder function adds two numeric values as parameters and returns the sum.
def adder(ele1,ele2):
return ele1+ele2
We will now use the custom function to conduct an operation on the DataFrame.
import pandas as pd
import numpy as np
def adder(ele1,ele2):
    return ele1+ele2
df = pd.DataFrame(np.random.randn(5,3),columns=['col1','col2','col3'])
# pipe returns a new DataFrame; capture the result before printing
df = df.pipe(adder,2)
print(df)
Row or Column-wise Function Application: apply()
Arbitrary functions can be applied along the axes of a DataFrame using the apply() method, which, like the descriptive statistics methods, takes an optional axis argument. By default, the operation is performed column-wise, taking each column as array-like.
Example 1
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(5,3),columns=['col1','col2','col3'])
print(df.apply(np.mean))
col1 0.260937
col2 -0.226256
col3 0.294514
dtype: float64
Example 2
By passing the axis parameter, the operation can be performed row-wise.
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(5,3),columns=['col1','col2','col3'])
print(df.apply(np.mean, axis=1))
0 -0.031415
1 0.866156
2 0.024317
3 -0.694998
4 0.384599
dtype: float64
Example 3
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(5,3),columns=['col1','col2','col3'])
print(df.apply(lambda x: x.max() - x.min()))
col1 0.179059
col2 2.338095
col3 2.771351
dtype: float64
Element-wise Function Application
The methods applymap() on DataFrame and map() on Series accept any Python function that takes a single value and returns a single value.
Example 1
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(5,3),columns=['col1','col2','col3'])
# My custom function, applied element-wise to one column
print(df['col1'].map(lambda x: x*100))
0 17.670426
1 22.237846
2 24.109576
3 35.576312
4 30.874333
Name: col1, dtype: float64
Example 2
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(5,3),columns=['col1','col2','col3'])
# My custom function, applied to every element of the DataFrame
print(df.applymap(lambda x: x*100))
10. Pandas – Reindexing
Reindexing changes the row labels and column labels of a DataFrame. To reindex means to conform the data to match a given set of labels along a particular axis. Multiple operations can be accomplished through reindexing, like:
Reorder the existing data to match a new set of labels.
Insert missing value (NA) markers in label locations where no data for the label existed.
Example
import pandas as pd
import numpy as np
N=20
df = pd.DataFrame({
'A': pd.date_range(start='2016-01-01',periods=N,freq='D'),
'x': np.linspace(0,stop=N-1,num=N),
'y': np.random.rand(N),
'C': np.random.choice(['Low','Medium','High'],N).tolist(),
'D': np.random.normal(100, 10, size=(N)).tolist()
})
#reindex the DataFrame
df_reindexed = df.reindex(index=[0,2,5], columns=['A', 'C', 'B'])
print(df_reindexed)
A C B
0 2016-01-01 Low NaN
2 2016-01-03 High NaN
5 2016-01-06 Low NaN
Reindex to Align with Other Objects
You may wish to take an object and reindex its axes to be labeled the same as another object.
Example
import pandas as pd
import numpy as np
df1 = pd.DataFrame(np.random.randn(10,3),columns=['col1','col2','col3'])
df2 = pd.DataFrame(np.random.randn(7,3),columns=['col1','col2','col3'])
df1 = df1.reindex_like(df2)
print(df1)
Note: Here, the df1 DataFrame is altered and reindexed like df2. The column names should match, or else NaN will be added for the entire column label.
Filling while Reindexing
reindex() takes an optional parameter, method, which is a filling method with the following values: pad/ffill - fill values forward; bfill/backfill - fill values backward; nearest - fill from the nearest index values.
Example
import pandas as pd
import numpy as np
df1 = pd.DataFrame(np.random.randn(6,3),columns=['col1','col2','col3'])
df2 = pd.DataFrame(np.random.randn(2,3),columns=['col1','col2','col3'])
# Padding NaN's
print(df2.reindex_like(df1))
# Now fill the NaN's with the preceding values
print("Data Frame with Forward Fill:")
print(df2.reindex_like(df1, method='ffill'))
Limits on Filling while Reindexing
The limit argument provides additional control over filling while reindexing; limit specifies the maximum count of consecutive fills.
Example
import pandas as pd
import numpy as np
df1 = pd.DataFrame(np.random.randn(6,3),columns=['col1','col2','col3'])
df2 = pd.DataFrame(np.random.randn(2,3),columns=['col1','col2','col3'])
# Padding NaN's
print(df2.reindex_like(df1))
# Now fill the NaN's with the preceding values, at most one row deep
print("Data Frame with Forward Fill limiting to 1:")
print(df2.reindex_like(df1, method='ffill', limit=1))
Note: Observe, with limit=1 only row 2 is filled from the preceding row 1. The remaining rows are left with NaN as they are.
Renaming
The rename() method allows you to relabel an axis based on some mapping (a dict or Series) or an arbitrary function.
import pandas as pd
import numpy as np
df1 = pd.DataFrame(np.random.randn(6,3),columns=['col1','col2','col3'])
print(df1)
print("After renaming the rows and columns:")
print(df1.rename(columns={'col1' : 'c1', 'col2' : 'c2'},
index={0 : 'apple', 1 : 'banana', 2 : 'durian'}))
11. Pandas – Iteration
The behavior of basic iteration over Pandas objects depends on the type. When iterating over a Series, it is regarded as array-like, and basic iteration produces the values. Other data structures, like DataFrame and Panel, follow the dict-like convention of iterating over the keys of the objects. In short, basic iteration (for i in object) produces:
Series - values
DataFrame - column labels
Panel - item labels
Iterating a DataFrame
Iterating a DataFrame gives column names. Let us consider the following example to
understand the same.
import pandas as pd
import numpy as np
N=20
df = pd.DataFrame({
'A': pd.date_range(start='2016-01-01',periods=N,freq='D'),
'x': np.linspace(0,stop=N-1,num=N),
'y': np.random.rand(N),
'C': np.random.choice(['Low','Medium','High'],N).tolist(),
'D': np.random.normal(100, 10, size=(N)).tolist()
})
for col in df:
    print(col)
A
C
D
x
y
To iterate over the rows of the DataFrame, we can use the following functions:
iteritems()
Iterates over each column as a key, value pair, with the label as key and the column value as a Series object. (In recent versions of pandas, iteritems() has been removed; the equivalent method is items().)
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(4,3),columns=['col1','col2','col3'])
for key,value in df.items():
    print(key,value)
col1 0 0.802390
1 0.324060
2 0.256811
3 0.839186
Name: col1, dtype: float64
col2 0 1.624313
1 -1.033582
2 1.796663
3 1.856277
Name: col2, dtype: float64
col3 0 -0.022142
1 -0.230820
2 1.160691
3 -0.830279
Name: col3, dtype: float64
iterrows()
iterrows() returns the iterator yielding each index value along with a series containing the
data in each row.
import pandas as pd
import numpy as np
df=pd.DataFrame(np.random.randn(4,3),columns=['col1','col2','col3'])
for row_index,row in df.iterrows():
   print row_index,row
0 col1 1.529759
col2 0.762811
col3 -0.634691
Name: 0, dtype: float64
1 col1 -0.944087
col2 1.420919
col3 -0.507895
Name: 1, dtype: float64
2 col1 -0.077287
col2 -0.858556
col3 -0.663385
Name: 2, dtype: float64
Note: Because iterrows() returns each row as a Series, it does not preserve the data types
across the row. 0,1,2 are the row indices and col1,col2,col3 are column indices.
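The dtype caveat can be demonstrated with a small mixed-type frame (a sketch in Python 3 syntax; the column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [0.5, 1.5]})
# Each row comes back as a Series with a single dtype, so the int
# column 'a' is upcast to float inside every row
for idx, row in df.iterrows():
    print(idx, row.dtype)   # float64 for every row
```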
itertuples()
itertuples() method will return an iterator yielding a named tuple for each row in the
DataFrame. The first element of the tuple will be the row’s corresponding index value,
while the remaining values are the row values.
import pandas as pd
import numpy as np
df=pd.DataFrame(np.random.randn(4,3),columns=['col1','col2','col3'])
for row in df.itertuples():
   print row
Note: Do not try to modify any object while iterating. Iterating is meant for reading, and
the iterator returns a copy of the original object, thus the changes will not reflect
on the original object.
import pandas as pd
import numpy as np
df=pd.DataFrame(np.random.randn(4,3),columns=['col1','col2','col3'])
for index,row in df.iterrows():
   row['col1']=10
print df
12. Pandas – Sorting
There are two kinds of sorting available in Pandas:
By label
By Actual Value
import pandas as pd
import numpy as np
unsorted_df=pd.DataFrame(np.random.randn(10,2),index=[1,4,6,2,3,5,9,8,0,7],columns=['col2','col1'])
print unsorted_df
col2 col1
1 -2.063177 0.537527
4 0.142932 -0.684884
6 0.012667 -0.389340
2 -0.548797 1.848743
3 -1.044160 0.837381
5 0.385605 1.300185
9 1.031425 -1.002967
8 -0.407374 -0.435142
0 2.237453 -1.067139
7 -1.445831 -1.701035
In unsorted_df, the labels and the values are unsorted. Let us see how these can be
sorted.
By Label
Using the sort_index() method, by passing the axis arguments and the order of sorting,
DataFrame can be sorted. By default, sorting is done on row labels in ascending order.
import pandas as pd
import numpy as np
unsorted_df=pd.DataFrame(np.random.randn(10,2),index=[1,4,6,2,3,5,9,8,0,7],columns=['col2','col1'])
sorted_df=unsorted_df.sort_index()
print sorted_df
col2 col1
0 0.208464 0.627037
1 0.641004 0.331352
2 -0.038067 -0.464730
3 -0.638456 -0.021466
4 0.014646 -0.737438
5 -0.290761 -1.669827
6 -0.797303 -0.018737
7 0.525753 1.628921
8 -0.567031 0.775951
9 0.060724 -0.322425
Order of Sorting
By passing the Boolean value to ascending parameter, the order of the sorting can be
controlled. Let us consider the following example to understand the same.
import pandas as pd
import numpy as np
unsorted_df=pd.DataFrame(np.random.randn(10,2),index=[1,4,6,2,3,5,9,8,0,7],columns=['col2','col1'])
sorted_df=unsorted_df.sort_index(ascending=False)
print sorted_df
col2 col1
9 0.825697 0.374463
8 -1.699509 0.510373
7 -0.581378 0.622958
6 -0.202951 0.954300
5 -1.289321 -1.551250
4 1.302561 0.851385
3 -0.157915 -0.388659
2 -1.222295 0.166609
1 0.584890 -0.291048
0 0.668444 -0.061294
Sort the Columns
By passing the axis argument with a value of 0 or 1, the sorting can be done on the column
labels. By default, axis=0, which sorts by the row labels; with axis=1, the column labels are
sorted.
import pandas as pd
import numpy as np
unsorted_df=pd.DataFrame(np.random.randn(10,2),index=[1,4,6,2,3,5,9,8,0,7],columns=['col2','col1'])
sorted_df=unsorted_df.sort_index(axis=1)
print sorted_df
col1 col2
1 -0.291048 0.584890
4 0.851385 1.302561
6 0.954300 -0.202951
2 0.166609 -1.222295
3 -0.388659 -0.157915
5 -1.551250 -1.289321
9 0.374463 0.825697
8 0.510373 -1.699509
0 -0.061294 0.668444
7 0.622958 -0.581378
By Value
Like index sorting, sort_values() is the method for sorting by values. It accepts a 'by'
argument which will use the column name of the DataFrame with which the values are to
be sorted.
import pandas as pd
import numpy as np
unsorted_df= pd.DataFrame({'col1':[2,1,1,1],'col2':[1,3,2,4]})
sorted_df=unsorted_df.sort_values(by='col1')
print sorted_df
col1 col2
1 1 3
2 1 2
3 1 4
0 2 1
Observe that the col1 values are sorted, while the respective col2 values and the row index
move along with col1; thus, col2 looks unsorted. The 'by' argument also accepts a list of
column names:
import pandas as pd
import numpy as np
unsorted_df= pd.DataFrame({'col1':[2,1,1,1],'col2':[1,3,2,4]})
sorted_df=unsorted_df.sort_values(by=['col1','col2'])
print sorted_df
col1 col2
2 1 2
1 1 3
3 1 4
0 2 1
Sorting Algorithm
sort_values() provides the option to choose the sorting algorithm via the kind parameter,
from mergesort, heapsort and quicksort. Mergesort is the only stable algorithm.
import pandas as pd
import numpy as np
unsorted_df= pd.DataFrame({'col1':[2,1,1,1],'col2':[1,3,2,4]})
sorted_df=unsorted_df.sort_values(by='col1' ,kind='mergesort')
print sorted_df
col1 col2
1 1 3
2 1 2
3 1 4
0 2 1
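Stability matters when rows tie on the sort key: with kind='mergesort', tied rows keep their original relative order. A small deterministic sketch (Python 3 syntax):

```python
import pandas as pd

df = pd.DataFrame({'col1': [2, 1, 1, 1], 'col2': [1, 3, 2, 4]})
stable = df.sort_values(by='col1', kind='mergesort')
# The three rows with col1 == 1 keep their original order: index 1, 2, 3
print(stable.index.tolist())   # [1, 2, 3, 0]
```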
13. Pandas – Working with Text Data
In this chapter, we will discuss the string operations with our basic Series/Index. In the
subsequent chapters, we will learn how to apply these string functions on the DataFrame.
Pandas provides a set of string functions which make it easy to operate on string data.
Most importantly, these functions ignore (or exclude) missing/NaN values.
Almost all of these methods work like Python's built-in string functions (refer:
https://docs.python.org/3/library/stdtypes.html#string-methods). They are applied through
the str attribute of the Series object.
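For instance, the str accessor propagates missing values instead of raising an error (a minimal sketch, Python 3 syntax):

```python
import pandas as pd
import numpy as np

s = pd.Series(['Tom', np.nan, 'John'])
# NaN entries stay NaN; the string method is applied only to real strings
upper = s.str.upper()
print(upper[0])           # 'TOM'
print(pd.isna(upper[1]))  # True
```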
For example, find(pattern) returns the first position of the first occurrence of the pattern.
Let us now create a Series and see how all the above functions work.
import pandas as pd
import numpy as np
s = pd.Series(['Tom', 'William Rick', 'John', 'Alber@t', np.nan, '1234','Steve Smith'])
print s
0 Tom
1 William Rick
2 John
3 Alber@t
4 NaN
5 1234
6 Steve Smith
dtype: object
lower()
import pandas as pd
import numpy as np
s = pd.Series(['Tom', 'William Rick', 'John', 'Alber@t', np.nan, '1234','Steve Smith'])
print s.str.lower()
0 tom
1 william rick
2 john
3 alber@t
4 NaN
5 1234
6 steve smith
dtype: object
upper()
import pandas as pd
import numpy as np
s = pd.Series(['Tom', 'William Rick', 'John', 'Alber@t', np.nan, '1234','Steve Smith'])
print s.str.upper()
0 TOM
1 WILLIAM RICK
2 JOHN
3 ALBER@T
4 NaN
5 1234
6 STEVE SMITH
dtype: object
len()
import pandas as pd
import numpy as np
s = pd.Series(['Tom', 'William Rick', 'John', 'Alber@t', np.nan, '1234','Steve Smith'])
print s.str.len()
0 3.0
1 14.0
2 5.0
3 7.0
4 NaN
5 4.0
6 11.0
dtype: float64
strip()
import pandas as pd
import numpy as np
s = pd.Series(['Tom ', ' William Rick', 'John', 'Alber@t'])
print s
print ("After Stripping:")
print s.str.strip()
0 Tom
1 William Rick
2 John
3 Alber@t
dtype: object
After Stripping:
0 Tom
1 William Rick
2 John
3 Alber@t
dtype: object
split(pattern)
import pandas as pd
import numpy as np
s = pd.Series(['Tom ', ' William Rick', 'John', 'Alber@t'])
print s
print ("Split Pattern:")
print s.str.split(' ')
0 Tom
1 William Rick
2 John
3 Alber@t
dtype: object
Split Pattern:
0 [Tom, , , , , , , , , , ]
1 [, , , , , William, Rick]
2 [John]
3 [Alber@t]
dtype: object
cat(sep=pattern)
import pandas as pd
import numpy as np
s = pd.Series(['Tom ', ' William Rick', 'John', 'Alber@t'])
print s.str.cat(sep='_')
get_dummies()
import pandas as pd
import numpy as np
s = pd.Series(['Tom ', ' William Rick', 'John', 'Alber@t'])
print s.str.get_dummies()
contains(pattern)
import pandas as pd
s = pd.Series(['Tom ', ' William Rick', 'John', 'Alber@t'])
print s.str.contains(' ')
0 True
1 True
2 False
3 False
dtype: bool
replace(a,b)
import pandas as pd
s = pd.Series(['Tom ', ' William Rick', 'John', 'Alber@t'])
print s
print ("After replacing @ with $:")
print s.str.replace('@','$')
0 Tom
1 William Rick
2 John
3 Alber@t
dtype: object
repeat(value)
import pandas as pd
s = pd.Series(['Tom ', ' William Rick', 'John', 'Alber@t'])
print s.str.repeat(2)
0 Tom Tom
1 William Rick William Rick
2 JohnJohn
3 Alber@tAlber@t
dtype: object
count(pattern)
import pandas as pd
s = pd.Series(['Tom ', ' William Rick', 'John', 'Alber@t'])
# count occurrences of a sample pattern (the pattern here is illustrative)
print s.str.count('m')
startswith(pattern)
import pandas as pd
s = pd.Series(['Tom ', ' William Rick', 'John', 'Alber@t'])
print s.str.startswith('T')
0 True
1 False
2 False
3 False
dtype: bool
endswith(pattern)
import pandas as pd
s = pd.Series(['Tom ', ' William Rick', 'John', 'Alber@t'])
print ("Strings that end with 't':")
print s.str.endswith('t')
find(pattern)
import pandas as pd
s = pd.Series(['Tom ', ' William Rick', 'John', 'Alber@t'])
print s.str.find('e')
0 -1
1 -1
2 -1
3 3
dtype: int64
findall(pattern)
import pandas as pd
s = pd.Series(['Tom ', ' William Rick', 'John', 'Alber@t'])
print s.str.findall('e')
0 []
1 []
2 []
3 [e]
dtype: object
An empty list ([]) indicates that the pattern is not present in the element.
swapcase()
import pandas as pd
s = pd.Series(['Tom ', ' William Rick', 'John', 'Alber@t'])
print s.str.swapcase()
0 tOM
1 wILLIAM rICK
2 jOHN
3 aLBER@T
dtype: object
islower()
import pandas as pd
s = pd.Series(['Tom ', ' William Rick', 'John', 'Alber@t'])
print s.str.islower()
0 False
1 False
2 False
3 False
dtype: bool
isupper()
import pandas as pd
s = pd.Series(['Tom ', ' William Rick', 'John', 'Alber@t'])
print s.str.isupper()
0 False
1 False
2 False
3 False
dtype: bool
isnumeric()
import pandas as pd
s = pd.Series(['Tom ', ' William Rick', 'John', 'Alber@t'])
print s.str.isnumeric()
0 False
1 False
2 False
3 False
dtype: bool
14. Pandas – Options and Customization
Pandas provides an API to customize some aspects of its behavior; display-related options
are the ones mostly used. The API is composed of five relevant functions:
get_option()
set_option()
reset_option()
describe_option()
option_context()
get_option(param)
get_option takes a single parameter and returns the value as given in the output below -
display.max_rows
Displays the default maximum number of rows. The interpreter reads this value and displays
rows with this value as the upper limit.
import pandas as pd
print pd.get_option("display.max_rows")
60
display.max_columns
Displays the default maximum number of columns. The interpreter reads this value and displays
columns with this value as the upper limit.
import pandas as pd
print pd.get_option("display.max_columns")
20
set_option(param,value)
set_option takes two arguments and sets the value to the parameter as shown below -
display.max_rows
Using set_option(), we can change the default number of rows to be displayed.
import pandas as pd
pd.set_option("display.max_rows",80)
print pd.get_option("display.max_rows")
80
display.max_columns
import pandas as pd
pd.set_option("display.max_columns",30)
print pd.get_option("display.max_columns")
30
reset_option(param)
reset_option takes an argument and sets the value back to the default value.
display.max_rows
Using reset_option(), we can change the value back to the default number of rows to be
displayed.
import pandas as pd
pd.reset_option("display.max_rows")
print pd.get_option("display.max_rows")
60
describe_option(param)
describe_option prints the description of the argument.
display.max_rows
Using describe_option(), we can view the description of the display.max_rows
parameter.
import pandas as pd
pd.describe_option("display.max_rows")
display.max_rows : int
If max_rows is exceeded, switch to truncate view. Depending on
'large_repr', objects are either centrally truncated or printed as
a summary view. 'None' value means unlimited.
option_context()
The option_context context manager is used to set an option temporarily within a with
statement. Option values are restored automatically when you exit the with block:
display.max_rows
Using option_context(), we can set the value temporarily.
import pandas as pd
with pd.option_context("display.max_rows",10):
   print(pd.get_option("display.max_rows"))
print(pd.get_option("display.max_rows"))
10
60
Observe the difference between the first and the second print statements. The first statement
prints the value set by option_context(), which is temporary within the with block
itself. After the with block, the second print statement prints the default configured value.
15. Pandas – Indexing and Selecting Data
In this chapter, we will discuss how to slice and dice the data and generally get subsets
of pandas objects.
The Python and NumPy indexing operators "[ ]" and attribute operator "." provide quick
and easy access to Pandas’ data structures across a wide range of use cases. However,
since the type of the data to be accessed isn’t known in advance, directly using standard
operators has some optimization limits. For production code, we recommend that you take
advantage of the optimized pandas data access methods explained in this chapter.
Pandas now supports three types of Multi-axes indexing; the three types are mentioned
in the following table:
.loc(): label based
.iloc(): integer based
.ix(): both label and integer based
.loc()
Pandas provide various methods to have purely label based indexing. When slicing, the
start bound is also included. Integers are valid labels, but they refer to the label and not
the position.
A list of labels
A slice object
A Boolean array
loc takes two single/list/range operators separated by ','. The first one indicates the rows,
and the second one indicates the columns.
Example 1
#import the pandas library and aliasing as pd
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(8, 4), index=['a','b','c','d','e','f','g','h'], columns=['A', 'B', 'C', 'D'])
#select all rows for a specific column
print df.loc[:,'A']
a 0.391548
b -0.070649
c -0.317212
d -2.162406
e 2.202797
f 0.613709
g 1.050559
h 1.122680
Name: A, dtype: float64
Example 2
# import the pandas library and aliasing as pd
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(8, 4), index=['a','b','c','d','e','f','g','h'], columns=['A', 'B', 'C', 'D'])
# Select all rows for multiple columns
print df.loc[:,['A','C']]
A C
a 0.391548 0.745623
b -0.070649 1.620406
c -0.317212 1.448365
d -2.162406 -0.873557
e 2.202797 0.528067
f 0.613709 0.286414
g 1.050559 0.216526
h 1.122680 -1.621420
Example 3
# import the pandas library and aliasing as pd
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(8, 4), index=['a','b','c','d','e','f','g','h'], columns=['A', 'B', 'C', 'D'])
# Select few rows for multiple columns
print df.loc[['a','b','f','h'],['A','C']]
A C
a 0.391548 0.745623
b -0.070649 1.620406
f 0.613709 0.286414
h 1.122680 -1.621420
Example 4
# import the pandas library and aliasing as pd
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(8, 4), index=['a','b','c','d','e','f','g','h'], columns=['A', 'B', 'C', 'D'])
# Select range of rows for all columns
print df.loc['a':'h']
A B C D
a 0.391548 -0.224297 0.745623 0.054301
b -0.070649 -0.880130 1.620406 1.419743
c -0.317212 -1.929698 1.448365 0.616899
d -2.162406 0.614256 -0.873557 1.093958
e 2.202797 -2.315915 0.528067 0.612482
f 0.613709 -0.157674 0.286414 -0.500517
g 1.050559 -2.272099 0.216526 0.928449
h 1.122680 0.324368 -1.621420 -0.741470
Example 5
# import the pandas library and aliasing as pd
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(8, 4), index=['a','b','c','d','e','f','g','h'], columns=['A', 'B', 'C', 'D'])
# for getting values with a boolean array
print df.loc['a']>0
A False
B True
C False
D False
Name: a, dtype: bool
.iloc()
Pandas provides various methods in order to get purely integer-based indexing. Like Python
and NumPy, these use 0-based indexing.
An Integer
A list of integers
A range of values
Example 1
# import the pandas library and aliasing as pd
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(8, 4), columns=['A', 'B', 'C', 'D'])
# select first four rows
print df.iloc[:4]
A B C D
0 0.699435 0.256239 -1.270702 -0.645195
1 -0.685354 0.890791 -0.813012 0.631615
2 -0.783192 -0.531378 0.025070 0.230806
3 0.539042 -1.284314 0.826977 -0.026251
Example 2
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(8, 4), columns=['A', 'B', 'C', 'D'])
# Integer slicing
print df.iloc[:4]
print df.iloc[1:5, 2:4]
A B C D
0 0.699435 0.256239 -1.270702 -0.645195
1 -0.685354 0.890791 -0.813012 0.631615
2 -0.783192 -0.531378 0.025070 0.230806
3 0.539042 -1.284314 0.826977 -0.026251
C D
1 -0.813012 0.631615
2 0.025070 0.230806
3 0.826977 -0.026251
4 1.423332 1.130568
Example 3
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(8, 4), columns=['A', 'B', 'C', 'D'])
# Slicing through list of values
print df.iloc[[1, 3, 5], [1, 3]]
print df.iloc[1:3, :]
print df.iloc[:,1:3]
B D
1 0.890791 0.631615
3 -1.284314 -0.026251
5 -0.512888 -0.518930
A B C D
1 -0.685354 0.890791 -0.813012 0.631615
2 -0.783192 -0.531378 0.025070 0.230806
B C
0 0.256239 -1.270702
1 0.890791 -0.813012
2 -0.531378 0.025070
3 -1.284314 0.826977
4 -0.460729 1.423332
5 -0.512888 0.581409
6 -1.204853 0.098060
7 -0.947857 0.641358
.ix()
Besides pure label based and integer based, Pandas provides a hybrid method for
selections and subsetting the object using the .ix() operator.
Example 1
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(8, 4), columns=['A', 'B', 'C', 'D'])
# Integer slicing
print df.ix[:4]
A B C D
0 0.699435 0.256239 -1.270702 -0.645195
1 -0.685354 0.890791 -0.813012 0.631615
2 -0.783192 -0.531378 0.025070 0.230806
3 0.539042 -1.284314 0.826977 -0.026251
Example 2
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(8, 4), columns=['A', 'B', 'C', 'D'])
# Index slicing
print df.ix[:,'A']
0 0.699435
1 -0.685354
2 -0.783192
3 0.539042
4 -1.044209
5 -1.415411
6 1.062095
7 0.994204
Name: A, dtype: float64
Use of Notations
Getting values from the Pandas object with Multi-axes indexing uses the following
notation:
Note: .iloc() and .ix() apply the same indexing options and return values.
Let us now see how each operation can be performed on the DataFrame object. We will
use the basic indexing operator '[ ]':
Example 1
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(8, 4), columns=['A', 'B', 'C', 'D'])
print df['A']
0 -0.478893
1 0.391931
2 0.336825
3 -1.055102
4 -0.165218
5 -0.328641
6 0.567721
7 -0.759399
Name: A, dtype: float64
Example 2
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(8, 4), columns=['A', 'B', 'C', 'D'])
print df[['A','B']]
A B
0 -0.478893 -0.606311
1 0.391931 -0.949025
2 0.336825 0.093717
3 -1.055102 -0.012944
4 -0.165218 1.550310
5 -0.328641 -0.226363
6 0.567721 -0.312585
7 -0.759399 -0.372696
Example 3
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(8, 4), columns=['A', 'B', 'C', 'D'])
print df[2:2]
A B C D
0 -0.478893 -0.606311 -1.455019 -1.228044
1 0.391931 -0.949025 -0.155288 -0.406476
Attribute Access
Columns can be selected using the attribute operator '.'.
Example
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(8, 4), columns=['A', 'B', 'C', 'D'])
print df.A
0 -0.478893
1 0.391931
2 0.336825
3 -1.055102
4 -0.165218
5 -0.328641
6 0.567721
7 -0.759399
Name: A, dtype: float64
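Attribute access is a convenience that only works when the column name is a valid Python identifier and does not clash with an existing DataFrame attribute; the '[ ]' operator always works. A small sketch (Python 3 syntax; the column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2], 'my col': [3, 4]})
print(df.A.tolist())          # [1, 2] -- attribute access works for 'A'
print(df['my col'].tolist())  # [3, 4] -- names with spaces need [ ]
```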
16. Pandas – Statistical Functions
Statistical methods help in the understanding and analyzing the behavior of data. We will
now learn a few statistical functions, which we can apply on Pandas objects.
Percent_change
Series, DataFrames and Panels all have the function pct_change(). This function compares
every element with its prior element and computes the change percentage.
import pandas as pd
import numpy as np
s = pd.Series([1,2,3,4,5,4])
print s.pct_change()
df = pd.DataFrame(np.random.randn(5, 2))
print df.pct_change()
0 NaN
1 1.000000
2 0.500000
3 0.333333
4 0.250000
5 -0.200000
dtype: float64
0 1
0 NaN NaN
1 -15.151902 0.174730
2 -0.746374 -1.449088
3 -3.582229 -3.165836
4 15.601150 -1.860434
By default, pct_change() operates on columns; to apply the same operation row-wise,
pass the axis=1 argument.
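A deterministic sketch of the row-wise variant (Python 3 syntax):

```python
import pandas as pd

df = pd.DataFrame({'a': [1.0, 2.0], 'b': [2.0, 4.0]})
# axis=1 compares each column with the previous column within a row:
# b vs a -> (2-1)/1 = 1.0 and (4-2)/2 = 1.0; column a has no predecessor
row_wise = df.pct_change(axis=1)
print(row_wise['b'].tolist())   # [1.0, 1.0]
```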
Covariance
Covariance is applied on series data. The Series object has a method cov to compute
covariance between series objects. NA will be excluded automatically.
Cov Series
import pandas as pd
import numpy as np
s1 = pd.Series(np.random.randn(10))
s2 = pd.Series(np.random.randn(10))
print s1.cov(s2)
-0.12978405324
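With small deterministic series, the covariance can be checked by hand (a sketch, Python 3 syntax):

```python
import pandas as pd

s1 = pd.Series([1.0, 2.0, 3.0])
s2 = pd.Series([2.0, 4.0, 6.0])
# Sample covariance: sum((x - mean_x) * (y - mean_y)) / (n - 1)
# = ((-1)(-2) + 0*0 + (1)(2)) / 2 = 2.0
print(s1.cov(s2))   # 2.0
```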
The covariance method, when applied on a DataFrame, computes cov between all the columns.
import pandas as pd
import numpy as np
frame = pd.DataFrame(np.random.randn(10, 5), columns=['a', 'b', 'c', 'd', 'e'])
print frame['a'].cov(frame['b'])
print frame.cov()
-0.58312921152741437
a b c d e
a 1.780628 -0.583129 -0.185575 0.003679 -0.136558
b -0.583129 1.297011 0.136530 -0.523719 0.251064
c -0.185575 0.136530 0.915227 -0.053881 -0.058926
d 0.003679 -0.523719 -0.053881 1.521426 -0.487694
e -0.136558 0.251064 -0.058926 -0.487694 0.960761
Note: Observe the cov between columns a and b in the first statement; the same value
appears in the a/b cell of the covariance matrix returned by cov on the DataFrame.
Correlation
Correlation shows the linear relationship between any two arrays of values (series). There
are multiple methods to compute the correlation, like pearson (default), spearman and
kendall.
import pandas as pd
import numpy as np
frame = pd.DataFrame(np.random.randn(10, 5), columns=['a', 'b', 'c', 'd', 'e'])
print frame['a'].corr(frame['b'])
print frame.corr()
-0.383712785514
a b c d e
a 1.000000 -0.383713 -0.145368 0.002235 -0.104405
b -0.383713 1.000000 0.125311 -0.372821 0.224908
c -0.145368 0.125311 1.000000 -0.045661 -0.062840
d 0.002235 -0.372821 -0.045661 1.000000 -0.403380
e -0.104405 0.224908 -0.062840 -0.403380 1.000000
Data Ranking
Data Ranking produces a ranking for each element in the array of elements. In case of ties,
it assigns the mean rank.
import pandas as pd
import numpy as np
s = pd.Series(np.random.randn(5), index=list('abcde'))
print s.rank()
a 1.0
b 3.5
c 2.0
d 3.5
e 5.0
dtype: float64
Rank optionally takes a parameter ascending which by default is true; when false, data is
reverse-ranked, with larger values assigned a smaller rank.
Rank supports different tie-breaking methods, specified with the method parameter:
average: average rank of the tied group
min: lowest rank in the group
max: highest rank in the group
first: ranks assigned in the order they appear in the array
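A deterministic sketch of the tie-breaking behaviour of the method parameter (Python 3 syntax):

```python
import pandas as pd

s = pd.Series([7, 7, 5])
# The two 7s would occupy ranks 2 and 3
print(s.rank().tolist())               # [2.5, 2.5, 1.0] -- mean of 2 and 3
print(s.rank(method='min').tolist())   # [2.0, 2.0, 1.0] -- lowest rank
print(s.rank(method='first').tolist()) # [2.0, 3.0, 1.0] -- order of appearance
```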
17. Pandas – Window Functions
For working on numerical data, pandas provides a few variants, like rolling, expanding and
exponentially weighted moving windows, for window statistics. Among these are sum, mean,
median, variance, covariance, correlation, etc.
We will now learn how each of these can be applied on DataFrame objects.
.rolling() Function
This function can be applied on a series of data. Specify the window=n argument and
apply the appropriate statistical function on top of it.
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(10, 4),
index=pd.date_range('1/1/2000', periods=10),
columns=['A', 'B', 'C', 'D'])
print df.rolling(window=3).mean()
A B C D
2000-01-01 NaN NaN NaN NaN
2000-01-02 NaN NaN NaN NaN
2000-01-03 0.434553 -0.667940 -1.051718 -0.826452
2000-01-04 0.628267 -0.047040 -0.287467 -0.161110
2000-01-05 0.398233 0.003517 0.099126 -0.405565
2000-01-06 0.641798 0.656184 -0.322728 0.428015
2000-01-07 0.188403 0.010913 -0.708645 0.160932
2000-01-08 0.188043 -0.253039 -0.818125 -0.108485
2000-01-09 0.682819 -0.606846 -0.178411 -0.404127
2000-01-10 0.688583 0.127786 0.513832 -1.067156
Note: Since the window size is 3, the first two elements have nulls, and from the third
element onwards the value is the average of the current and the two preceding elements.
We can apply the other functions mentioned above in the same way.
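The window arithmetic can be verified with fixed values (a sketch, Python 3 syntax):

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])
# First two results are NaN; then (1+2+3)/3 = 2.0 and (2+3+4)/3 = 3.0
rolled = s.rolling(window=3).mean()
print(rolled.tolist()[2:])   # [2.0, 3.0]
```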
.expanding() Function
This function can be applied on a series of data. Specify the min_periods=n argument
and apply the appropriate statistical function on top of it.
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(10, 4),
index=pd.date_range('1/1/2000', periods=10),
columns=['A', 'B', 'C', 'D'])
print df.expanding(min_periods=3).mean()
A B C D
2000-01-01 NaN NaN NaN NaN
2000-01-02 NaN NaN NaN NaN
2000-01-03 0.434553 -0.667940 -1.051718 -0.826452
2000-01-04 0.743328 -0.198015 -0.852462 -0.262547
2000-01-05 0.614776 -0.205649 -0.583641 -0.303254
2000-01-06 0.538175 -0.005878 -0.687223 -0.199219
2000-01-07 0.505503 -0.108475 -0.790826 -0.081056
2000-01-08 0.454751 -0.223420 -0.671572 -0.230215
2000-01-09 0.586390 -0.206201 -0.517619 -0.267521
2000-01-10 0.560427 -0.037597 -0.399429 -0.376886
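Expanding windows are cumulative: each result uses all observations seen so far. A deterministic sketch (Python 3 syntax):

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])
# NaN until min_periods observations arrive, then the cumulative mean:
# (1+2+3)/3 = 2.0 and (1+2+3+4)/4 = 2.5
expanded = s.expanding(min_periods=3).mean()
print(expanded.tolist()[2:])   # [2.0, 2.5]
```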
.ewm() Function
ewm is applied on a series of data. Specify any of the com, span, halflife argument and
apply the appropriate statistical function on top of it. It assigns the weights exponentially.
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(10, 4),
index=pd.date_range('1/1/2000', periods=10),
columns=['A', 'B', 'C', 'D'])
print df.ewm(com=0.5).mean()
A B C D
2000-01-01 1.088512 -0.650942 -2.547450 -0.566858
2000-01-02 0.865131 -0.453626 -1.137961 0.058747
2000-01-03 -0.132245 -0.807671 -0.308308 -1.491002
2000-01-04 1.084036 0.555444 -0.272119 0.480111
2000-01-05 0.425682 0.025511 0.239162 -0.153290
2000-01-06 0.245094 0.671373 -0.725025 0.163310
2000-01-07 0.288030 -0.259337 -1.183515 0.473191
2000-01-08 0.162317 -0.771884 -0.285564 -0.692001
2000-01-09 1.147156 -0.302900 0.380851 -0.607976
2000-01-10 0.600216 0.885614 0.569808 -1.110113
Window functions are mainly used to find trends within the data graphically by smoothing
the curve. If there is a lot of variation in everyday data and many data points are
available, then one method is to take samples and plot the graph; another is to apply
window computations and plot the results. By these methods, we can smooth the curve
or the trend.
18. Pandas – Aggregations
Once the rolling, expanding and ewm objects are created, several methods are available
to perform aggregations on data.
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(10, 4),
index=pd.date_range('1/1/2000', periods=10),
columns=['A', 'B', 'C', 'D'])
print df
r = df.rolling(window=3,min_periods=1)
print r
A B C D
2000-01-01 1.088512 -0.650942 -2.547450 -0.566858
2000-01-02 0.790670 -0.387854 -0.668132 0.267283
2000-01-03 -0.575523 -0.965025 0.060427 -2.179780
2000-01-04 1.669653 1.211759 -0.254695 1.429166
2000-01-05 0.100568 -0.236184 0.491646 -0.466081
2000-01-06 0.155172 0.992975 -1.205134 0.320958
2000-01-07 0.309468 -0.724053 -1.412446 0.627919
2000-01-08 0.099489 -1.028040 0.163206 -1.274331
2000-01-09 1.639500 -0.068443 0.714008 -0.565969
2000-01-10 0.326761 1.479841 0.664282 -1.361169
Rolling [window=3,min_periods=1,center=False,axis=0]
We can aggregate by passing a function to the entire DataFrame, or select a column via
the standard get item method.
df = pd.DataFrame(np.random.randn(10, 4),
index=pd.date_range('1/1/2000', periods=10),
columns=['A', 'B', 'C', 'D'])
print df
r = df.rolling(window=3,min_periods=1)
print r.aggregate(np.sum)
A B C D
2000-01-01 1.088512 -0.650942 -2.547450 -0.566858
2000-01-02 1.879182 -1.038796 -3.215581 -0.299575
2000-01-03 1.303660 -2.003821 -3.155154 -2.479355
2000-01-04 1.884801 -0.141119 -0.862400 -0.483331
2000-01-05 1.194699 0.010551 0.297378 -1.216695
2000-01-06 1.925393 1.968551 -0.968183 1.284044
2000-01-07 0.565208 0.032738 -2.125934 0.482797
2000-01-08 0.564129 -0.759118 -2.454374 -0.325454
2000-01-09 2.048458 -1.820537 -0.535232 -1.212381
2000-01-10 2.065750 0.383357 1.541496 -3.201469
df = pd.DataFrame(np.random.randn(10, 4),
index=pd.date_range('1/1/2000', periods=10),
columns=['A', 'B', 'C', 'D'])
print df
r = df.rolling(window=3,min_periods=1)
print r['A'].aggregate(np.sum)
2000-01-01 1.088512
2000-01-02 1.879182
2000-01-03 1.303660
2000-01-04 1.884801
2000-01-05 1.194699
2000-01-06 1.925393
2000-01-07 0.565208
2000-01-08 0.564129
2000-01-09 2.048458
2000-01-10 2.065750
Freq: D, Name: A, dtype: float64
df = pd.DataFrame(np.random.randn(10, 4),
index=pd.date_range('1/1/2000', periods=10),
columns=['A', 'B', 'C', 'D'])
print df
r = df.rolling(window=3,min_periods=1)
print r[['A','B']].aggregate(np.sum)
A B
2000-01-01 1.088512 -0.650942
2000-01-02 1.879182 -1.038796
2000-01-03 1.303660 -2.003821
2000-01-04 1.884801 -0.141119
2000-01-05 1.194699 0.010551
2000-01-06 1.925393 1.968551
2000-01-07 0.565208 0.032738
2000-01-08 0.564129 -0.759118
2000-01-09 2.048458 -1.820537
2000-01-10 2.065750 0.383357
df = pd.DataFrame(np.random.randn(10, 4),
index=pd.date_range('1/1/2000', periods=10),
columns=['A', 'B', 'C', 'D'])
print df
r = df.rolling(window=3,min_periods=1)
print r['A'].aggregate([np.sum,np.mean])
sum mean
2000-01-01 1.088512 1.088512
2000-01-02 1.879182 0.939591
2000-01-03 1.303660 0.434553
2000-01-04 1.884801 0.628267
2000-01-05 1.194699 0.398233
2000-01-06 1.925393 0.641798
2000-01-07 0.565208 0.188403
2000-01-08 0.564129 0.188043
2000-01-09 2.048458 0.682819
2000-01-10 2.065750 0.688583
df = pd.DataFrame(np.random.randn(10, 4),
index=pd.date_range('1/1/2000', periods=10),
columns=['A', 'B', 'C', 'D'])
print df
r = df.rolling(window=3,min_periods=1)
print r[['A','B']].aggregate([np.sum,np.mean])
A B
sum mean sum mean
2000-01-01 1.088512 1.088512 -0.650942 -0.650942
2000-01-02 1.879182 0.939591 -1.038796 -0.519398
2000-01-03 1.303660 0.434553 -2.003821 -0.667940
2000-01-04 1.884801 0.628267 -0.141119 -0.047040
2000-01-05 1.194699 0.398233 0.010551 0.003517
2000-01-06 1.925393 0.641798 1.968551 0.656184
2000-01-07 0.565208 0.188403 0.032738 0.010913
2000-01-08 0.564129 0.188043 -0.759118 -0.253039
2000-01-09 2.048458 0.682819 -1.820537 -0.606846
2000-01-10 2.065750 0.688583 0.383357 0.127786
df = pd.DataFrame(np.random.randn(3, 4),
index=pd.date_range('1/1/2000', periods=3),
columns=['A', 'B', 'C', 'D'])
print df
r = df.rolling(window=3,min_periods=1)
print r.aggregate({'A' : np.sum,'B' : np.mean})
A B C D
2000-01-01 -1.575749 -1.018105 0.317797 0.545081
2000-01-02 -0.164917 -1.361068 0.258240 1.113091
2000-01-03 1.258111 1.037941 -0.047487 0.867371
A B
2000-01-01 -1.575749 -1.018105
2000-01-02 -1.740666 -1.189587
2000-01-03 -0.482555 -0.447078
19. Pandas – Missing Data
Missing data is always a problem in real life scenarios. Areas like machine learning and
data mining face severe issues in the accuracy of their model predictions because of poor
quality of data caused by missing values. In these areas, missing value treatment is a
major point of focus to make their models more accurate and valid.
Let us now see how we can handle missing values (say NA or NaN) using Pandas.
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(5, 3), index=['a', 'c', 'e', 'f', 'h'],columns=['one', 'two', 'three'])
df = df.reindex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])
print df
Using reindexing, we have created a DataFrame with missing values. In the output, NaN
means Not a Number.
Example 1
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(5, 3), index=['a', 'c', 'e', 'f', 'h'],columns=['one', 'two', 'three'])
df = df.reindex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])
print df['one'].isnull()
a False
b True
c False
d True
e False
f False
g True
h False
Name: one, dtype: bool
Example 2
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(5, 3), index=['a', 'c', 'e', 'f', 'h'],columns=['one', 'two', 'three'])
df = df.reindex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])
print df['one'].notnull()
a True
b False
c True
d False
e True
f True
g False
h True
Name: one, dtype: bool
Calculations with Missing Data
When summing data, NA will be treated as zero. If the data are all NA, then the result
will be NA.
Example 1
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(5, 3), index=['a', 'c', 'e', 'f', 'h'],columns=['one', 'two', 'three'])
df = df.reindex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])
print df['one'].sum()
2.02357685917
Example 2
import pandas as pd
import numpy as np
df = pd.DataFrame(index=[0,1,2,3,4,5],columns=['one','two'])
print df['one'].sum()
nan
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(3, 3), index=['a', 'c', 'e'],columns=['one', 'two', 'three'])
df = df.reindex(['a', 'b', 'c'])
print df
print ("NaN replaced with '0':")
print df.fillna(0)
Here, we are filling with value zero; instead we can also fill with any other value.
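fillna also accepts a dict of per-column fill values (a small sketch, Python 3 syntax; the values are illustrative):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({'one': [1.0, np.nan], 'two': [np.nan, 4.0]})
# Each column's NaN is filled with its own value
filled = df.fillna({'one': 0, 'two': 99})
print(filled['one'].tolist())   # [1.0, 0.0]
print(filled['two'].tolist())   # [99.0, 4.0]
```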
Fill NA Forward and Backward
The following methods can be passed to fillna:
pad/ffill: fill values forward
bfill/backfill: fill values backward
Example 1
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(5, 3), index=['a', 'c', 'e', 'f', 'h'],columns=['one', 'two', 'three'])
df = df.reindex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])
print df.fillna(method='pad')
Example 2
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(5, 3), index=['a', 'c', 'e', 'f', 'h'],columns=['one', 'two', 'three'])
df = df.reindex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])
print df.fillna(method='backfill')
Drop Missing Values
To simply exclude the missing values, use the dropna function along with the axis
argument. By default, axis=0, i.e., along rows, which means that if any value within a
row is NA, the whole row is excluded.
Example 1
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(5, 3), index=['a', 'c', 'e', 'f', 'h'],columns=['one', 'two', 'three'])
df = df.reindex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])
print df.dropna()
Example 2
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(5, 3), index=['a', 'c', 'e', 'f', 'h'],columns=['one', 'two', 'three'])
df = df.reindex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])
print df.dropna(axis=1)
Empty DataFrame
Columns: []
Index: [a, b, c, d, e, f, g, h]
Replace Missing (or) Generic Values
Many times, we have to replace a generic value with some specific value. We can achieve
this by applying the replace method.
Example 1
import pandas as pd
import numpy as np
df = pd.DataFrame({'one':[10,20,30,40,50,2000],
'two':[1000,0,30,40,50,60]})
print df.replace({1000:10,2000:60})
one two
0 10 10
1 20 0
2 30 30
3 40 40
4 50 50
5 60 60
Example 2
List notation of passing the values:
import pandas as pd
import numpy as np
df = pd.DataFrame({'one':[10,20,30,40,50,2000],
'two':[1000,0,30,40,50,60]})
print df.replace([1000,2000],[10,60])
one two
0 10 10
1 20 0
2 30 30
3 40 40
4 50 50
5 60 60
20. Pandas – GroupBy
Any groupby operation involves one of the following operations on the original object:
Splitting the object
Applying a function
Combining the results
In many situations, we split the data into sets and we apply some functionality on each
subset. In the apply functionality, we can perform the following operations:
Aggregation: computing a summary statistic
Transformation: performing some group-specific operation
Filtration: discarding the data with some condition
Let us now create a DataFrame object and perform all the operations on it:
#import the pandas library
import pandas as pd
ipl_data = {'Team': ['Riders', 'Riders', 'Devils', 'Devils', 'Kings',
   'kings', 'Kings', 'Kings', 'Riders', 'Royals', 'Royals', 'Riders'],
   'Rank': [1, 2, 2, 3, 3, 4, 1, 1, 2, 4, 1, 2],
   'Year': [2014,2015,2014,2015,2014,2015,2016,2017,2016,2014,2015,2017],
   'Points':[876,789,863,673,741,812,756,788,694,701,804,690]}
df = pd.DataFrame(ipl_data)
print df
Split Data into Groups
There are multiple ways to split an object:
obj.groupby('key')
obj.groupby(['key1','key2'])
obj.groupby(key,axis=1)
Let us now see how the grouping objects can be applied to the DataFrame object:
Example
# import the pandas library
import pandas as pd
print df.groupby('Team')
View Groups
# import the pandas library
import pandas as pd
print df.groupby('Team').groups
Example
Group by with multiple columns:
print df.groupby(['Team','Year']).groups
Iterating through Groups
With the groupby object in hand, we can iterate through the groups:
grouped = df.groupby('Year')
for name,group in grouped:
   print name
   print group
2014
Points Rank Team Year
0 876 1 Riders 2014
2 863 2 Devils 2014
4 741 3 Kings 2014
9 701 4 Royals 2014
2015
Points Rank Team Year
1 789 2 Riders 2015
3 673 3 Devils 2015
2016
Points Rank Team Year
6 756 1 Kings 2016
8 694 2 Riders 2016
2017
Points Rank Team Year
7 788 1 Kings 2017
11 690 2 Riders 2017
By default, the groupby object has the same label name as the group name.
Select a Group
Using the get_group() method, we can select a single group.
grouped = df.groupby('Year')
print grouped.get_group(2014)
Aggregations
An aggregated function returns a single aggregated value for each group. Once the group
by object is created, several aggregation operations can be performed on the grouped
data.
grouped = df.groupby('Year')
print grouped['Points'].agg(np.mean)
Year
2014 795.25
2015 769.50
2016 725.00
2017 739.00
Name: Points, dtype: float64
Another way to see the size of each group is by aggregating with np.size:
import pandas as pd
import numpy as np
grouped = df.groupby('Team')
print(grouped.agg(np.size))
Multiple aggregation functions can also be applied at once by passing a list of functions:
grouped = df.groupby('Team')
print(grouped['Points'].agg([np.sum, np.mean, np.std]))
Transformations
Transformation on a group or a column returns an object that is indexed the same as the
object being grouped. Thus, the transform function should return a result that is the same
size as that of the group chunk.
grouped = df.groupby('Team')
score = lambda x: (x - x.mean()) / x.std()*10
print(grouped.transform(score))
Filtration
Filtration filters the data on defined criteria and returns the subset of data. The filter()
function is used to filter the data.
import pandas as pd
import numpy as np
print(df.groupby('Team').filter(lambda x: len(x) >= 3))
In the above filter condition, we are asking to return the teams which have participated
three or more times in the IPL.
21. Pandas – Merging/Joining
Pandas has full-featured, high performance in-memory join operations idiomatically very
similar to relational databases like SQL.
Pandas provides a single function, merge, as the entry point for all standard database
join operations between DataFrame objects:
pd.merge(left, right, how='inner', on=None, left_on=None, right_on=None,
left_index=False, right_index=False, sort=True)
on: Columns (names) to join on. Must be found in both the left and right DataFrame
objects.
left_on: Columns from the left DataFrame to use as keys. Can either be column
names or arrays with length equal to the length of the DataFrame.
right_on: Columns from the right DataFrame to use as keys. Can either be column
names or arrays with length equal to the length of the DataFrame.
left_index: If True, use the index (row labels) from the left DataFrame as its join
key(s). In case of a DataFrame with a MultiIndex (hierarchical), the number of
levels must match the number of join keys from the right DataFrame.
right_index: Same usage as left_index for the right DataFrame.
how: One of 'left', 'right', 'outer', 'inner'. Defaults to inner. Each method has been
described below.
sort: Sort the result DataFrame by the join keys in lexicographical order. Defaults
to True, setting to False will improve the performance substantially in many cases.
Let us now create two different DataFrames and perform the merging operations on them.
import pandas as pd
left = pd.DataFrame({
'id':[1,2,3,4,5],
'Name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'],
'subject_id':['sub1','sub2','sub4','sub6','sub5']})
right = pd.DataFrame(
{'id':[1,2,3,4,5],
'Name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'],
'subject_id':['sub2','sub4','sub3','sub6','sub5']})
print(left)
print(right)
Name id subject_id
0 Alex 1 sub1
1 Amy 2 sub2
2 Allen 3 sub4
3 Alice 4 sub6
4 Ayoung 5 sub5
Name id subject_id
0 Billy 1 sub2
1 Brian 2 sub4
2 Bran 3 sub3
3 Bryce 4 sub6
4 Betty 5 sub5
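Before turning to the how argument, a plain merge on a shared key illustrates the default (inner) behaviour. The sketch below simply recreates the two frames shown above:

```python
import pandas as pd

# Recreate the two frames shown above
left = pd.DataFrame({
    'id': [1, 2, 3, 4, 5],
    'Name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'],
    'subject_id': ['sub1', 'sub2', 'sub4', 'sub6', 'sub5']})
right = pd.DataFrame({
    'id': [1, 2, 3, 4, 5],
    'Name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'],
    'subject_id': ['sub2', 'sub4', 'sub3', 'sub6', 'sub5']})

# Default merge: inner join on the 'id' column; overlapping column
# names get the suffixes _x and _y
result = pd.merge(left, right, on='id')
print(result)
```

Because every id value appears in both frames, all five rows survive the inner join.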
Here is a summary of the how options and their SQL equivalent names:
Merge Method   SQL Equivalent     Description
left           LEFT OUTER JOIN    Use keys from the left object
right          RIGHT OUTER JOIN   Use keys from the right object
outer          FULL OUTER JOIN    Use union of keys
inner          INNER JOIN         Use intersection of keys
Left Join
import pandas as pd
left = pd.DataFrame({
'id':[1,2,3,4,5],
'Name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'],
'subject_id':['sub1','sub2','sub4','sub6','sub5']})
right = pd.DataFrame(
{'id':[1,2,3,4,5],
'Name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'],
'subject_id':['sub2','sub4','sub3','sub6','sub5']})
print(pd.merge(left, right, on='subject_id', how='left'))
Right Join
import pandas as pd
left = pd.DataFrame({
'id':[1,2,3,4,5],
'Name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'],
'subject_id':['sub1','sub2','sub4','sub6','sub5']})
right = pd.DataFrame(
{'id':[1,2,3,4,5],
'Name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'],
'subject_id':['sub2','sub4','sub3','sub6','sub5']})
print(pd.merge(left, right, on='subject_id', how='right'))
Outer Join
import pandas as pd
left = pd.DataFrame({
'id':[1,2,3,4,5],
'Name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'],
'subject_id':['sub1','sub2','sub4','sub6','sub5']})
right = pd.DataFrame(
{'id':[1,2,3,4,5],
'Name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'],
'subject_id':['sub2','sub4','sub3','sub6','sub5']})
print(pd.merge(left, right, how='outer', on='subject_id'))
Inner Join
Note that the join() method, by contrast, is always performed on the index and honors
the object on which it is called; so, a.join(b) is not equal to b.join(a). The merge below
performs an inner join on the subject_id column:
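The asymmetry of join() can be seen with a small sketch (the frames a and b here are illustrative, not part of the tutorial's data):

```python
import pandas as pd

a = pd.DataFrame({'x': [1, 2]}, index=['p', 'q'])
b = pd.DataFrame({'y': [3]}, index=['p'])

# a.join(b) keeps all rows of a (a left join on the index) ...
ab = a.join(b)
# ... while b.join(a) keeps only the single row of b
ba = b.join(a)
print(ab)
print(ba)
```

a.join(b) has two rows with a NaN where b had no matching label; b.join(a) has only one.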
import pandas as pd
left = pd.DataFrame({
'id':[1,2,3,4,5],
'Name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'],
'subject_id':['sub1','sub2','sub4','sub6','sub5']})
right = pd.DataFrame(
{'id':[1,2,3,4,5],
'Name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'],
'subject_id':['sub2','sub4','sub3','sub6','sub5']})
print(pd.merge(left, right, on='subject_id', how='inner'))
22. Pandas – Concatenation
Pandas provides various facilities for easily combining together Series, DataFrame, and
Panel objects.
pd.concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False)
objs: This is a sequence or mapping of Series, DataFrame, or Panel objects.
axis: {0, 1, ...}, default 0. This is the axis to concatenate along.
join: {‘inner’, ‘outer’}, default ‘outer’. How to handle indexes on other axis(es).
Outer for union and inner for intersection.
ignore_index: boolean, default False. If True, do not use the index values on the
concatenation axis. The resulting axis will be labeled 0, ..., n - 1.
join_axes: This is the list of Index objects. Specific indexes to use for the other
(n-1) axes instead of performing inner/outer set logic.
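As an illustrative sketch (not one of the tutorial's own examples), the join parameter controls which index labels survive when concatenating along axis=1:

```python
import pandas as pd

a = pd.DataFrame({'x': [1, 2, 3]}, index=[0, 1, 2])
b = pd.DataFrame({'y': [4, 5]}, index=[1, 2])

# Outer (the default) keeps the union of the indexes, filling NaN;
# inner keeps only the intersection
outer = pd.concat([a, b], axis=1, join='outer')
inner = pd.concat([a, b], axis=1, join='inner')
print(outer)
print(inner)
```

The outer result has three rows (labels 0, 1, 2); the inner result keeps only labels 1 and 2, which appear in both frames.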
Concatenating Objects
The concat function does all of the heavy lifting of performing concatenation operations
along an axis. Let us create different objects and do concatenation.
import pandas as pd
one = pd.DataFrame({
'Name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'],
'subject_id':['sub1','sub2','sub4','sub6','sub5'],
'Marks_scored':[98,90,87,69,78]},
index=[1,2,3,4,5])
two = pd.DataFrame({
'Name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'],
'subject_id':['sub2','sub4','sub3','sub6','sub5'],
'Marks_scored':[89,80,79,97,88]},
index=[1,2,3,4,5])
print(pd.concat([one,two]))
Suppose we wanted to associate specific keys with each of the pieces of the chopped up
DataFrame. We can do this by using the keys argument:
import pandas as pd
one = pd.DataFrame({
'Name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'],
'subject_id':['sub1','sub2','sub4','sub6','sub5'],
'Marks_scored':[98,90,87,69,78]},
index=[1,2,3,4,5])
two = pd.DataFrame({
'Name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'],
'subject_id':['sub2','sub4','sub3','sub6','sub5'],
'Marks_scored':[89,80,79,97,88]},
index=[1,2,3,4,5])
print(pd.concat([one,two],keys=['x','y']))
     Marks_scored    Name subject_id
x 1            98    Alex       sub1
  2            90     Amy       sub2
  3            87   Allen       sub4
  4            69   Alice       sub6
  5            78  Ayoung       sub5
y 1            89   Billy       sub2
  2            80   Brian       sub4
  3            79    Bran       sub3
  4            97   Bryce       sub6
  5            88   Betty       sub5
If the resultant object has to follow its own indexing, set ignore_index to True.
import pandas as pd
one = pd.DataFrame({
'Name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'],
'subject_id':['sub1','sub2','sub4','sub6','sub5'],
'Marks_scored':[98,90,87,69,78]},
index=[1,2,3,4,5])
two = pd.DataFrame({
'Name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'],
'subject_id':['sub2','sub4','sub3','sub6','sub5'],
'Marks_scored':[89,80,79,97,88]},
index=[1,2,3,4,5])
print(pd.concat([one,two],keys=['x','y'],ignore_index=True))
Observe that the index changes completely and the keys are also overridden.
If two objects need to be added along axis=1, then the new columns will be appended.
import pandas as pd
one = pd.DataFrame({
'Name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'],
'subject_id':['sub1','sub2','sub4','sub6','sub5'],
'Marks_scored':[98,90,87,69,78]},
index=[1,2,3,4,5])
two = pd.DataFrame({
'Name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'],
'subject_id':['sub2','sub4','sub3','sub6','sub5'],
'Marks_scored':[89,80,79,97,88]},
index=[1,2,3,4,5])
print(pd.concat([one,two],axis=1))
Concatenating Using append
A useful shortcut to concat is the append instance method on Series and DataFrame. It
concatenates along axis=0, namely the index:
import pandas as pd
one = pd.DataFrame({
'Name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'],
'subject_id':['sub1','sub2','sub4','sub6','sub5'],
'Marks_scored':[98,90,87,69,78]},
index=[1,2,3,4,5])
two = pd.DataFrame({
'Name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'],
'subject_id':['sub2','sub4','sub3','sub6','sub5'],
'Marks_scored':[89,80,79,97,88]},
index=[1,2,3,4,5])
print(one.append(two))
append can also take multiple objects to concatenate:
import pandas as pd
one = pd.DataFrame({
'Name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'],
'subject_id':['sub1','sub2','sub4','sub6','sub5'],
'Marks_scored':[98,90,87,69,78]},
index=[1,2,3,4,5])
two = pd.DataFrame({
'Name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'],
'subject_id':['sub2','sub4','sub3','sub6','sub5'],
'Marks_scored':[89,80,79,97,88]},
index=[1,2,3,4,5])
print(one.append([two,one,two]))
Time Series
Pandas provides a robust tool for working with time series data, especially in the
financial sector. While working with time series data, we frequently come across the
following:
Generating a sequence of times
Converting the time series to different frequencies
Pandas provides a relatively compact and self-contained set of tools for performing the
above tasks.
Get the Current Time
datetime.now() gives you the current date and time:
import pandas as pd
print(pd.datetime.now())
2017-05-11 06:10:13.393147
Create a TimeStamp
Time-stamped data is the most basic type of time series data that associates values with
points in time. For pandas objects, it means using the points in time to create the index.
Let’s take an example:
import pandas as pd
print(pd.Timestamp('2017-03-01'))
2017-03-01 00:00:00
It is also possible to convert integer or float epoch times. The default unit for these is
nanoseconds (since these are how Timestamps are stored). However, often epochs are
stored in another unit which can be specified. Let’s take another example:
import pandas as pd
print(pd.Timestamp(1587687255,unit='s'))
2020-04-24 00:14:15
Converting to Timestamps
To convert a Series or list-like object of date-like objects, for example strings, epochs, or
a mixture, you can use the to_datetime function. When passed a Series, it returns a
Series (with the same index), while a list-like is converted to a DatetimeIndex. Take a
look at the following example:
import pandas as pd
print(pd.to_datetime(pd.Series(['Jul 31, 2009', '2010-01-10', None])))
0 2009-07-31
1 2010-01-10
2 NaT
dtype: datetime64[ns]
import pandas as pd
print(pd.to_datetime(['2009/07/31', '2010-01-10', None]))
23. Pandas – Date Functionality
Extending the time series, date functionalities play a major role in financial data analysis.
While working with date data, we will frequently come across the following:
Generating a sequence of dates
Converting the date series to different frequencies
Create a Range of Dates
Using the date_range() function by specifying the periods and the frequency, we can
create a date series. By default, the frequency of the range is Days.
import pandas as pd
print(pd.date_range('1/1/2011', periods=5))
bdate_range
bdate_range() stands for business date ranges. Unlike date_range(), it excludes Saturday
and Sunday.
import pandas as pd
print(pd.bdate_range('1/1/2011', periods=5))
Observe that the range contains only business days; Saturdays and Sundays (for
example, 1st and 2nd January 2011) are excluded. Just check your calendar for the days.
Defining the start and the end dates instead of periods:
import pandas as pd
start = pd.datetime(2011, 1, 1)
end = pd.datetime(2011, 1, 5)
print(pd.date_range(start, end))
Offset Aliases
A number of string aliases are given to useful common time series frequencies. We will
refer to these aliases as offset aliases. Some commonly used ones are:
B - business day frequency
D - calendar day frequency
W - weekly frequency
M - month end frequency
Q - quarter end frequency
A - year end frequency
H - hourly frequency
T, min - minutely frequency
S - secondly frequency
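As a quick sketch, an alias is passed through the freq argument of date_range (the dates here are illustrative):

```python
import pandas as pd

# 'B' is the business-day offset alias; starting on Monday 2011-01-03,
# five business days run Monday through Friday
idx = pd.date_range('2011-01-03', periods=5, freq='B')
print(idx)
```

The resulting DatetimeIndex covers 2011-01-03 through 2011-01-07, with the following weekend skipped.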
24. Pandas – Timedelta
Timedeltas are differences in times, expressed in different units, for example, days,
hours, minutes, seconds. They can be both positive and negative.
String
By passing a string literal, we can create a timedelta object.
import pandas as pd
print(pd.Timedelta('2 days 2 hours 15 minutes 30 seconds'))
2 days 02:15:30
Integer
Passing an integer value with a unit argument creates a Timedelta object.
import pandas as pd
print(pd.Timedelta(6,unit='h'))
0 days 06:00:00
Data Offsets
Data offsets such as - weeks, days, hours, minutes, seconds, milliseconds, microseconds,
nanoseconds can also be used in construction.
import pandas as pd
print(pd.Timedelta(days=2))
2 days 00:00:00
to_timedelta()
Using the top-level pd.to_timedelta, you can convert a scalar, array, list, or Series from
a recognized timedelta format/value into a Timedelta type. It will construct a Series if the
input is a Series, a scalar if the input is scalar-like, otherwise it will output
a TimedeltaIndex.
import pandas as pd
print(pd.to_timedelta('1 days 06:05:01.00003'))
1 days 06:05:01.000030
Operations
You can operate on Series/ DataFrames and construct timedelta64[ns] Series through
subtraction operations on datetime64[ns] Series, or Timestamps.
Let us now create a DataFrame with Timedelta and datetime objects and perform some
arithmetic operations on it:
import pandas as pd
s = pd.Series(pd.date_range('2012-1-1', periods=3, freq='D'))
td = pd.Series([ pd.Timedelta(days=i) for i in range(3) ])
df = pd.DataFrame(dict(A=s, B=td))
print(df)
A B
0 2012-01-01 0 days
1 2012-01-02 1 days
2 2012-01-03 2 days
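The subtraction rule mentioned above can be sketched directly (the dates are illustrative):

```python
import pandas as pd

a = pd.Series(pd.to_datetime(['2012-01-03', '2012-01-05']))
b = pd.Series(pd.to_datetime(['2012-01-01', '2012-01-01']))

# Subtracting two datetime64[ns] Series yields a timedelta64[ns] Series
delta = a - b
print(delta)
```

The result is a Series of Timedeltas (2 days and 4 days), with dtype timedelta64[ns].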
Addition Operation
import pandas as pd
s = pd.Series(pd.date_range('2012-1-1', periods=3, freq='D'))
td = pd.Series([ pd.Timedelta(days=i) for i in range(3) ])
df = pd.DataFrame(dict(A=s, B=td))
df['C'] = df['A'] + df['B']
print(df)
A B C
0 2012-01-01 0 days 2012-01-01
1 2012-01-02 1 days 2012-01-03
2 2012-01-03 2 days 2012-01-05
Subtraction Operation
import pandas as pd
s = pd.Series(pd.date_range('2012-1-1', periods=3, freq='D'))
td = pd.Series([ pd.Timedelta(days=i) for i in range(3) ])
df = pd.DataFrame(dict(A=s, B=td))
df['C'] = df['A'] + df['B']
df['D'] = df['C'] + df['B']
print(df)
A B C D
0 2012-01-01 0 days 2012-01-01 2012-01-01
1 2012-01-02 1 days 2012-01-03 2012-01-04
2 2012-01-03 2 days 2012-01-05 2012-01-07
25. Pandas – Categorical Data
Real-world data often includes text columns whose values are repetitive. Features like
gender, country, and codes are always repetitive. These are examples of categorical
data.
Categorical variables can take on only a limited, and usually fixed, number of possible
values. Besides the fixed length, categorical data might have an order but cannot perform
numerical operations. Categorical is a Pandas data type.
The categorical data type is useful in the following cases:
A string variable consisting of only a few different values. Converting such a string
variable to a categorical variable will save some memory.
The lexical order of a variable is not the same as the logical order (“one”, “two”,
“three”). By converting to a categorical and specifying an order on the categories,
sorting and min/max will use the logical order instead of the lexical order.
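Both points can be sketched briefly (the column values here are illustrative):

```python
import pandas as pd

# Memory: a repetitive string column shrinks when converted to a categorical
s = pd.Series(['low', 'high'] * 1000)
cat = s.astype('category')
print(s.memory_usage(deep=True), cat.memory_usage(deep=True))

# Order: an ordered categorical sorts logically, not lexically
dtype = pd.CategoricalDtype(categories=['one', 'two', 'three'], ordered=True)
t = pd.Series(['three', 'one', 'two'], dtype=dtype)
print(t.min())  # 'one', the logical minimum, not the lexical one
```

With only two distinct strings, the categorical representation stores each value as a small integer code plus one copy of each category, so it uses far less memory.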
Object Creation
Categorical object can be created in multiple ways. The different ways have been described
below:
category
By specifying the dtype as "category" in pandas object creation.
import pandas as pd
s = pd.Series(["a","b","c","a"], dtype="category")
print(s)
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): [a, b, c]
The number of elements passed to the series object is four, but the categories are only
three. Observe the same in the output Categories.
pd.Categorical
Using the standard pandas Categorical constructor, we can create a category object.
import pandas as pd
cat = pd.Categorical(['a','b','c','a','b','c'])
print(cat)
[a, b, c, a, b, c]
Categories (3, object): [a, b, c]
import pandas as pd
cat = pd.Categorical(['a','b','c','a','b','c','d'], ['c', 'b', 'a'])
print(cat)
[a, b, c, a, b, c, NaN]
Categories (3, object): [c, b, a]
Here, the second argument signifies the categories. Thus, any value which is not present
in the categories will be treated as NaN.
import pandas as pd
cat = pd.Categorical(['a','b','c','a','b','c','d'], ['c', 'b', 'a'], ordered=True)
print(cat)
[a, b, c, a, b, c, NaN]
Categories (3, object): [c < b < a]
Logically, the order means that, a is greater than b and b is greater than c.
Description
Using the .describe() command on the categorical data, we get similar output to
a Series or DataFrame of the type string.
import pandas as pd
import numpy as np
cat = pd.Categorical(["a", "c", "c", np.nan], categories=["b", "a", "c"])
df = pd.DataFrame({"cat": cat, "s": ["a", "c", "c", np.nan]})
print(df.describe())
print(df["cat"].describe())
cat s
count 3 3
unique 2 2
top c c
freq 2 2
count 3
unique 2
top c
freq 2
Name: cat, dtype: object
Get the Properties of the Category
Using the obj.cat.categories command, we get the categories of the object.
import pandas as pd
import numpy as np
s = pd.Series(["a", "b", "c", "a"], dtype="category")
print(s.cat.categories)
Python Pandas
obj.cat.ordered tells whether the categories have an order:
import pandas as pd
import numpy as np
s = pd.Series(["a", "b", "c", "a"], dtype="category")
print(s.cat.ordered)
False
The command returned False because no order was specified.
Renaming Categories
Renaming categories is done by assigning new values to the series.cat.categories
property.
import pandas as pd
s = pd.Series(["a","b","c","a"], dtype="category")
s.cat.categories = ["Group %s" % g for g in s.cat.categories]
print(s.cat.categories)
Initial categories [a,b,c] are updated by the s.cat.categories property of the object.
Appending New Categories
Using the Categorical.add_categories() method, new categories can be appended.
import pandas as pd
s = pd.Series(["a","b","c","a"], dtype="category")
s = s.cat.add_categories([4])
print(s.cat.categories)
Removing Categories
Using the Categorical.remove_categories() method, unwanted categories can be
removed.
import pandas as pd
s = pd.Series(["a","b","c","a"], dtype="category")
print("Original object:")
print(s)
print("After removal:")
print(s.cat.remove_categories("a"))
Original object:
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): [a, b, c]
After removal:
0 NaN
1 b
2 c
3 NaN
dtype: category
Categories (2, object): [b, c]
Comparison of Categorical Data
Comparing categorical data with other objects is possible in three cases:
comparing equality (== and !=) to a list-like object (list, Series, array, ...) of the
same length as the categorical data.
all comparisons (==, !=, >, >=, <, and <=) of categorical data to another
categorical Series, when ordered==True and the categories are the same.
all comparisons of categorical data to a scalar.
import pandas as pd
cat = pd.Series(pd.Categorical([1, 2, 3], categories=[1, 2, 3], ordered=True))
cat1 = pd.Series(pd.Categorical([2, 2, 2], categories=[1, 2, 3], ordered=True))
print(cat > cat1)
0 False
1 False
2 True
dtype: bool
26. Pandas – Visualization
Basic Plotting: plot
This functionality on Series and DataFrame is just a simple wrapper around the
matplotlib plot() method.
import pandas as pd
import numpy as np
df=pd.DataFrame(np.random.randn(10,4),index=pd.date_range('1/1/2000',
periods=10), columns=list('ABCD'))
df.plot()
We can plot one column versus another using the x and y keywords.
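A minimal sketch of the x/y keywords (the frame and its column names 'A' and 'B' are illustrative; the Agg backend is used so the script runs without a display):

```python
import matplotlib
matplotlib.use('Agg')          # non-interactive backend for scripts
import numpy as np
import pandas as pd

# Illustrative frame with two numeric columns
df = pd.DataFrame(np.random.randn(10, 2), columns=['A', 'B'])

# Plot column 'B' against column 'A'; pandas labels the x-axis 'A'
ax = df.plot(x='A', y='B')
print(ax.get_xlabel())
```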
150
Python Pandas
Plotting methods allow a handful of plot styles other than the default line plot. These
methods can be provided as the kind keyword argument to plot(). These include:
bar or barh for bar plots
hist for histogram
box for boxplot
area for area plots
scatter for scatter plots
Bar Plot
Let us now see what a Bar Plot is by creating one. A bar plot can be created in the following
way:
import pandas as pd
import numpy as np
df=pd.DataFrame(np.random.rand(10,4),columns=['a','b','c','d'])
df.plot.bar()
To produce a stacked bar plot, pass stacked=True:
import pandas as pd
import numpy as np
df=pd.DataFrame(np.random.rand(10,4),columns=['a','b','c','d'])
df.plot.bar(stacked=True)
To get horizontal bar plots, use the barh method:
import pandas as pd
import numpy as np
df=pd.DataFrame(np.random.rand(10,4),columns=['a','b','c','d'])
df.plot.barh(stacked=True)
Histograms
Histograms can be plotted using the plot.hist() method. We can specify the number of bins using the bins argument.
import pandas as pd
import numpy as np
df=pd.DataFrame({'a':np.random.randn(1000)+1,'b':np.random.randn(1000),'c':
np.random.randn(1000) - 1}, columns=['a', 'b', 'c'])
df.plot.hist(bins=20)
To plot different histograms for each column, use the following code:
import pandas as pd
import numpy as np
df=pd.DataFrame({'a':np.random.randn(1000)+1,'b':np.random.randn(1000),'c':
np.random.randn(1000) - 1}, columns=['a', 'b', 'c'])
df.diff().hist(bins=20)
Box Plots
Boxplots can be drawn by calling Series.plot.box() and DataFrame.plot.box(), or
DataFrame.boxplot(), to visualize the distribution of values within each column.
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.rand(10, 5), columns=['A', 'B', 'C', 'D', 'E'])
df.plot.box()
Area Plot
Area plot can be created using the Series.plot.area() or the DataFrame.plot.area()
methods.
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd'])
df.plot.area()
Scatter Plot
Scatter plot can be created using the DataFrame.plot.scatter() method.
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.rand(50, 4), columns=['a', 'b', 'c', 'd'])
df.plot.scatter(x='a', y='b')
Pie Chart
Pie chart can be created using the DataFrame.plot.pie() method.
import pandas as pd
import numpy as np
df = pd.DataFrame(3 * np.random.rand(4), index=['a', 'b', 'c', 'd'], columns=['x'])
df.plot.pie(subplots=True)
27. Pandas – IO Tools
The Pandas I/O API is a set of top level reader functions accessed like pd.read_csv()
that generally return a Pandas object.
The two workhorse functions for reading text files (or the flat files) are read_csv()
and read_table(). They both use the same parsing code to intelligently convert tabular
data into a DataFrame object. Save the following data as temp.csv to follow the examples:
S.No,Name,Age,City,Salary
1,Tom,28,Toronto,20000
2,Lee,32,HongKong,3000
3,Steven,43,Bay Area,8300
4,Ram,38,Hyderabad,3900
read_csv
read_csv reads data from the csv files and creates a DataFrame object.
import pandas as pd
df=pd.read_csv("temp.csv")
print(df)
Custom Index
This specifies a column in the csv file to customize the index using index_col.
import pandas as pd
df=pd.read_csv("temp.csv",index_col=['S.No'])
print(df)
Converters
dtype of the columns can be passed as a dict.
import pandas as pd
import numpy as np
df = pd.read_csv("temp.csv", dtype={'Salary': np.float64})
print(df.dtypes)
S.No int64
Name object
Age int64
City object
Salary float64
dtype: object
By default, the dtype of the Salary column is int, but the result shows it as float because
we have explicitly cast the type.
Header Names
Specify the names of the header using the names argument.
import pandas as pd
df = pd.read_csv("temp.csv", names=['a', 'b', 'c', 'd', 'e'])
print(df)
a b c d e
0 S.No Name Age City Salary
1 1 Tom 28 Toronto 20000
2 2 Lee 32 HongKong 3000
3 3 Steven 43 Bay Area 8300
4 4 Ram 38 Hyderabad 3900
Observe, the header names are appended with the custom names, but the header in the
file has not been eliminated. Now, we use the header argument to remove that.
If the header is in a row other than the first, pass the row number to header. This will skip
the preceding rows.
import pandas as pd
df=pd.read_csv("temp.csv",names=['a','b','c','d','e'],header=0)
print(df)
   a       b   c          d      e
0  1     Tom  28    Toronto  20000
1  2     Lee  32   HongKong   3000
2  3  Steven  43   Bay Area   8300
3  4     Ram  38  Hyderabad   3900
skiprows
skiprows skips the number of rows specified.
import pandas as pd
df=pd.read_csv("temp.csv", skiprows=2)
print(df)
28. Pandas – Sparse Data
Sparse objects are “compressed” when any data matching a specific value (NaN / missing
value, though any value can be chosen) is omitted. A special SparseIndex object tracks
where data has been “sparsified”. This will make much more sense with an example. All
of the standard Pandas data structures have a to_sparse method:
import pandas as pd
import numpy as np
ts = pd.Series(np.random.randn(10))
ts[2:-2] = np.nan
sts = ts.to_sparse()
print(sts)
0 -0.810497
1 -1.419954
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 NaN
8 0.439240
9 -1.095910
dtype: float64
BlockIndex
Block locations: array([0, 8], dtype=int32)
Block lengths: array([2, 2], dtype=int32)
Let us now assume you had a large DataFrame that is mostly NA, and execute the following code:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(10000, 4))
df.ix[:9998] = np.nan
sdf = df.to_sparse()
print(sdf.density)
0.0001
Any sparse object can be converted back to the standard dense form by calling to_dense:
import pandas as pd
import numpy as np
ts = pd.Series(np.random.randn(10))
ts[2:-2] = np.nan
sts = ts.to_sparse()
print(sts.to_dense())
0 -0.810497
1 -1.419954
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 NaN
8 0.439240
9 -1.095910
dtype: float64
Sparse Dtypes
Sparse data should have the same dtype as its dense representation. Currently, float64,
int64 and bool dtypes are supported. Depending on the original dtype, the fill_value
default changes:
float64: np.nan
int64: 0
bool: False
import pandas as pd
import numpy as np
s = pd.Series([1, np.nan, np.nan])
print(s)
print(s.to_sparse().to_dense())
0 1.0
1 NaN
2 NaN
dtype: float64
0 1.0
1 NaN
2 NaN
dtype: float64
29. Pandas – Caveats & Gotchas
Using if/truth Statements with Pandas
Pandas follows the NumPy convention of raising an error when you try to convert a
pandas object to a bool. This happens in an if condition or when using the Boolean
operations and, or, and not:
import pandas as pd
if pd.Series([False, True, False]):
    print("I am True")
In an if condition, it is unclear whether the Series should count as True or False. The
error raised is suggestive, asking you to explicitly use .empty, .bool(), .item(), .any(),
or .all() instead.
To evaluate the Series element-wise instead, use any():
import pandas as pd
if pd.Series([False, True, False]).any():
    print("I am any")
I am any
To evaluate single-element pandas objects in a Boolean context, use the method .bool():
import pandas as pd
print(pd.Series([True]).bool())
True
Bitwise Boolean
Boolean operators like == and != return a Boolean Series, which is almost
always what is required anyway.
import pandas as pd
s = pd.Series(range(5))
print(s == 4)
0 False
1 False
2 False
3 False
4 True
dtype: bool
isin Operation
This returns a Boolean series showing whether each element in the Series is exactly
contained in the passed sequence of values.
import pandas as pd
s = pd.Series(list('abc'))
s = s.isin(['a', 'c', 'e'])
print(s)
0 True
1 False
2 True
dtype: bool
Reindexing vs ix Gotcha
Many users will find themselves using the ix indexing capabilities as a concise means
of selecting data from a Pandas object:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(6, 4), columns=['one', 'two', 'three',
'four'], index=list('abcdef'))
print(df)
print(df.ix[['b', 'c', 'e']])
This is, of course, completely equivalent in this case to using the reindex method:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(6, 4), columns=['one', 'two', 'three',
'four'], index=list('abcdef'))
print(df)
print(df.reindex(['b', 'c', 'e']))
Some might conclude that ix and reindex are 100% equivalent based on this. This is
true except in the case of integer indexing. For example, the above operation can
alternatively be expressed as:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(6, 4), columns=['one', 'two', 'three',
'four'], index=list('abcdef'))
print(df)
print(df.ix[[1, 2, 4]])
print(df.reindex([1, 2, 4]))
It is important to remember that reindex is strict label indexing only. This can lead to
some potentially surprising results in pathological cases where an index contains, say,
both integers and strings.
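A minimal sketch of that surprise, using a deliberately mixed index (the data is illustrative):

```python
import pandas as pd

# Index mixes a string with integers
s = pd.Series([10, 20, 30], index=['a', 0, 1])

# reindex matches labels strictly: here 0 and 1 are *labels*, not positions,
# so the values 20 and 30 are returned, not 10 and 20
print(s.reindex([0, 1]))
```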
30. Pandas – Comparison with SQL
Since many potential Pandas users have some familiarity with SQL, this page is meant to
provide some examples of how various SQL operations can be performed using pandas.
import pandas as pd
url = 'https://raw.github.com/pandas-dev/pandas/master/pandas/tests/data/tips.csv'
tips = pd.read_csv(url)
print(tips.head())
SELECT
In SQL, selection is done using a comma-separated list of columns that you select (or
a * to select all columns):
With Pandas, column selection is done by passing a list of column names to your
DataFrame:
import pandas as pd
url = 'https://raw.github.com/pandas-dev/pandas/master/pandas/tests/data/tips.csv'
tips = pd.read_csv(url)
print(tips[['total_bill', 'tip', 'smoker', 'time']].head(5))
Calling the DataFrame without the list of column names will display all columns (akin to
SQL’s *).
WHERE
Filtering in SQL is done via a WHERE clause.
DataFrames can be filtered in multiple ways; the most intuitive of which is using Boolean
indexing.
tips[tips['time'] == 'Dinner'].head(5)
import pandas as pd
url = 'https://raw.github.com/pandas-dev/pandas/master/pandas/tests/data/tips.csv'
tips = pd.read_csv(url)
print(tips[tips['time'] == 'Dinner'].head(5))
The above statement passes a Series of True/False objects to the DataFrame, returning
all rows with True.
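Multiple conditions combine with & and |, parenthesized. The tiny frame below stands in for the tips data so no download is needed (its values are illustrative):

```python
import pandas as pd

# Stand-in for a few rows of the tips dataset
df = pd.DataFrame({'time': ['Dinner', 'Lunch', 'Dinner'],
                   'tip':  [5.00, 1.50, 3.50]})

# SQL: SELECT * FROM df WHERE time = 'Dinner' AND tip > 4.00
result = df[(df['time'] == 'Dinner') & (df['tip'] > 4.00)]
print(result)
```

Only the first row satisfies both conditions.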
GroupBy
This operation fetches the count of records in each group throughout a dataset. For
instance, a query fetching the number of tips left by sex:
tips.groupby('sex').size()
import pandas as pd
url = 'https://raw.github.com/pandas-dev/pandas/master/pandas/tests/data/tips.csv'
tips = pd.read_csv(url)
print(tips.groupby('sex').size())
sex
Female 87
Male 157
dtype: int64
Top N rows
SQL returns the top n rows using LIMIT:
tips.head(5)
import pandas as pd
url = 'https://raw.github.com/pandas-dev/pandas/master/pandas/tests/data/tips.csv'
tips=pd.read_csv(url)
tips = tips[['smoker', 'day', 'time']].head(5)
print(tips)
These are a few of the basic operations we compared, using concepts learnt in the
previous chapters of the Pandas library.