
Matrix Algebra – Back to school

By: S. P. Vasekar,
M.E. (Electrical Power),
Retd. Superintending Engineer,
M.S.E.T.C.L.
Simultaneous Equations and its matrix form

3x1 - 2x2 - 3x3 = 15
5x1 + 3x2 +  x3 = 32
2x1 +  x2 + 2x3 = 9

| 3  -2  -3 |     | x1 |     | 15 |
| 5   3   1 |  X  | x2 |  =  | 32 |
| 2   1   2 |     | x3 |     |  9 |

 Coefficient      Variable    Parameter
   Matrix          Matrix      Matrix
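The system above can also be solved programmatically. A minimal sketch in plain Python (the deck shows Matlab/Octave screenshots later; Python is used here so the example is self-contained), solving this slide's system by Gaussian elimination with partial pivoting:

```python
def solve(A, y):
    """Solve A x = y by Gaussian elimination with partial pivoting."""
    n = len(A)
    # Build the augmented matrix [A | y]
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Pivot: bring up the row with the largest entry in this column
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        # Eliminate entries below the pivot
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    # Back-substitution
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

# Coefficient matrix and parameter vector from the slide
A = [[3, -2, -3], [5, 3, 1], [2, 1, 2]]
y = [15, 32, 9]
x = solve(A, y)
print(x)  # x1 = 5, x2 = 3, x3 = -2
```

Substituting x1 = 5, x2 = 3, x3 = -2 back into each equation reproduces 15, 32 and 9.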
Type of matrices

| a11  a12  a13 |
| a21  a22  a23 |     aij = element of ith row and jth column
| a31  a32  a33 |

Square Matrix 3x3

                        | a11 |
| a11  a12  a13 |       | a21 |
                        | a31 |

Row Matrix/Vector    Column Matrix/Vector
Type of matrices
Upper triangular matrix

| a11  a12  a13 |
|  0   a22  a23 |     If the elements aij of a square matrix are zero for i > j,
|  0   0    a33 |     the matrix is an upper triangular matrix

Lower triangular matrix

| a11  0    0   |
| a21  a22  0   |     If the elements aij of a square matrix are zero for i < j,
| a31  a32  a33 |     the matrix is a lower triangular matrix


Type of matrices
Diagonal Matrix

| a11  0    0   |     If all off-diagonal elements of a square matrix are zero,
|  0   a22  0   |     the matrix is a diagonal matrix:
|  0   0    a33 |     aij = 0 for all i ≠ j

Unit or Identity matrix

| 1  0  0 |     If all diagonal elements of a square matrix are 1 and all
| 0  1  0 |     other elements are zero, the matrix is the unit matrix,
| 0  0  1 |     denoted by U: aij = 1 for i = j and 0 for i ≠ j
Type of matrices
Null Matrix

| 0  0  0 |
| 0  0  0 |     If all elements of a matrix are zero, it is a null matrix:
| 0  0  0 |     aij = 0 for all i, j
Type of matrices
Transpose of a matrix

      | 7  3  6 |           | 7  2 |
A =   | 2  4  5 |     At =  | 3  4 |
                            | 6  5 |

If the rows and columns of an m x n matrix are interchanged,
the resultant n x m matrix is the transpose and is designated
by At
Type of matrices
Symmetric Matrix

| 5  1  9 |     If corresponding off-diagonal elements of a square
| 1  3  8 |     matrix are equal, the matrix is a symmetric matrix:
| 9  8  2 |     aij = aji

Note: For a symmetric matrix At = A

Skew Symmetric Matrix

|  0   1  9 |     If corresponding off-diagonal elements of a square
| -1   0  8 |     matrix are equal but opposite in sign and the diagonal
| -9  -8  0 |     elements are zero, the matrix is a skew symmetric matrix

Note: For a skew symmetric matrix A = -At
Type of matrices
Orthogonal Matrix
If At A = U = A At for a square matrix with real elements,
then A is an orthogonal matrix

      | 1  2  2 |          | 1 -2  2 |          | 1  2 -2 |
1/3   | 2  1 -2 |    1/3   | 2 -1 -2 |    1/3   | 2 -1 -2 |
      |-2  2 -1 |          | 2  2  1 |          | 2 -2  1 |

   (A)                 (B)                  (C)

Which matrix is/are orthogonal?
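One way to answer the quiz is to test At A = U directly. A minimal Python sketch (function names are illustrative), using exact rational arithmetic from the standard `fractions` module so the 1/3 factors introduce no rounding:

```python
from fractions import Fraction as F

def transpose(M):
    return [list(r) for r in zip(*M)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def is_orthogonal(M):
    """Check At A = U exactly (all entries are rationals)."""
    n = len(M)
    P = matmul(transpose(M), M)
    return all(P[i][j] == (1 if i == j else 0) for i in range(n) for j in range(n))

third = F(1, 3)
A = [[third * v for v in row] for row in [[1, 2, 2], [2, 1, -2], [-2, 2, -1]]]
B = [[third * v for v in row] for row in [[1, -2, 2], [2, -1, -2], [2, 2, 1]]]
C = [[third * v for v in row] for row in [[1, 2, -2], [2, -1, -2], [2, -2, 1]]]
print([is_orthogonal(M) for M in (A, B, C)])  # [True, True, False]
```

So (A) and (B) are orthogonal; (C) is not, since its first two columns are not orthogonal.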


Type of matrices
Conjugate of a matrix

     | -5+j2   j3     2    |          | -5-j2  -j3     2    |
A =  |  3-j5  -2+j2   3+j2 |    A* =  |  3+j5  -2-j2   3-j2 |
     | -5     -5-j2  -5j   |          | -5     -5+j2   5j   |

If all the elements of a matrix are replaced by their conjugates (replace a+jb by a-jb),
the resultant matrix is the conjugate and is designated by A*

If all the elements of A are real, then A = A*. If all elements are pure imaginary,
then A = -A*
Type of matrices
Hermitian and Skew-Hermitian matrix

     | -5     j3    5+j3 |          | -j2   -j3     2    |
H =  | -j3    3     3+j2 |    S =   | -j3    j2     3-j2 |
     |  5-j3  3-j2  2    |          | -2    -3-j2   0    |

If A = (A*)t for a square complex matrix, A is a Hermitian matrix in which all diagonal
elements are real. See matrix H above; corresponding non-diagonal elements are
conjugates.

If A = -(A*)t for a square complex matrix, A is a Skew-Hermitian matrix in which all
diagonal elements are either zero or pure imaginary. See matrix S above; corresponding
non-diagonal elements differ in the sign of their real part.
Type of matrices
Unitary Matrix
If (A*)tA = U = A(A*)t for a square complex matrix, A is a unitary matrix. A unitary
matrix with real elements is an orthogonal matrix.

Which matrix is/are unitary?

      | 1    -j    -1+j |          | 1    -j     1+j |
1/2   | j     1     1+j |    1/2   | j     1    -1+j |
      | 1+j  -1+j   0   |          | 1+j  -1+j   0   |

   (A)                        (B)
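As with the orthogonality quiz, unitarity can be tested by forming (A*)t A. A minimal Python sketch (function names are illustrative) using the built-in complex type, where j plays the role of the slides' j operator:

```python
def dagger(M):
    """Conjugate transpose (A*)t of a complex matrix."""
    return [[M[j][i].conjugate() for j in range(len(M))] for i in range(len(M[0]))]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def is_unitary(M, tol=1e-12):
    """Check (A*)t A = U to within floating-point tolerance."""
    n = len(M)
    P = matmul(dagger(M), M)
    return all(abs(P[i][j] - (1 if i == j else 0)) < tol
               for i in range(n) for j in range(n))

j = 1j
A = [[0.5 * v for v in row] for row in
     [[1, -j, -1 + j], [j, 1, 1 + j], [1 + j, -1 + j, 0]]]
B = [[0.5 * v for v in row] for row in
     [[1, -j, 1 + j], [j, 1, -1 + j], [1 + j, -1 + j, 0]]]
print(is_unitary(A), is_unitary(B))  # True False
```

Matrix (A) is unitary; (B) is not, because its first and third columns are not orthogonal under the complex inner product.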
Matrix operations using Matlab/Octave
Matrix multiplication using Matlab/Octave
Determinant
Consider the following simultaneous equations

a11x1 + a12x2 = k1
a21x1 + a22x2 = k2

The solution for the equations is

x1 = (a22k1 - a12k2)/(a11a22 - a12a21)
x2 = (a11k2 - a21k1)/(a11a22 - a12a21)

The expression (a11a22 - a12a21) is the value of the determinant of
the coefficient matrix A, where |A| denotes the determinant

      | a11  a12 |
|A| = |          | = (a11a22 - a12a21)
      | a21  a22 |
Determinant – Higher order matrix
1) The determinant obtained by striking out the ith row and jth column is called the
minor of the element aij. Thus for

      | a11  a12  a13 |
|A| = | a21  a22  a23 |     the minor of a21 is   | a12  a13 |
      | a31  a32  a33 |                           | a32  a33 |

2) The co-factor of an element aij is (-1)^(i+j) (minor of aij), designated by Aij

3) The sum of the products of the elements in any row (or column) and their cofactors
is equal to the determinant, e.g. expanding along the second row:

|A| = a21A21 + a22A22 + a23A23


Adjoint
If each element of a square matrix is replaced by its cofactor and then the matrix is
transposed, the resulting matrix is the Adjoint, designated by A+

       | A11  A21  A31 |
A+ =   | A12  A22  A32 |     Note the order of the elements'
       | A13  A23  A33 |     subscripts (transposed)
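The minor, cofactor and adjoint definitions can be checked programmatically. A sketch in Python (function names are illustrative), computing the determinant by cofactor expansion and verifying the identity A A+ = |A| U for the coefficient matrix from the first slide:

```python
def minor(M, i, j):
    """Matrix M with row i and column j struck out."""
    return [row[:j] + row[j+1:] for k, row in enumerate(M) if k != i]

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

def adjoint(M):
    """Matrix of cofactors, transposed (A+ in the slides)."""
    n = len(M)
    cof = [[(-1) ** (i + j) * det(minor(M, i, j)) for j in range(n)] for i in range(n)]
    return [list(r) for r in zip(*cof)]

A = [[3, -2, -3], [5, 3, 1], [2, 1, 2]]
d = det(A)
adj = adjoint(A)
# A times its adjoint gives |A| times the unit matrix
prod = [[sum(a * b for a, b in zip(row, col)) for col in zip(*adj)] for row in A]
print(d)     # 34
print(prod)  # 34 times the 3x3 unit matrix
```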
Matrix Operations
Equality of matrices
If A and B are matrices with the same dimensions and for each element aij = bij, then
the matrices are equal

Addition and subtraction of matrices

Matrices of the same dimensions are conformable for addition and subtraction. When
C = A ± B, then for each element of C, cij = aij ± bij

The Commutative and Associative Laws

A+B = B+A                       Commutative Law
A+B+C = A+(B+C) = (A+B)+C       Associative Law

Multiplication of a matrix by a scalar

The elements of the resultant matrix are equal to the product of the original elements
and the scalar.

k(A + B) = kA + kB = (A + B)k   Distributive Law
Matrix Operations
Multiplication of Matrices
Multiplication of two matrices AB = C is defined only if the number of columns of the
first matrix A is equal to the number of rows of B. Any element cij of C is the sum of the
products of the corresponding elements of the ith row of A and the jth column of B, that is

cij = Σ(k=1 to q) aik bkj     i = 1,2,...,m;  j = 1,2,...,n

Cmn = Amq Bqn
Matrix Multiplication Example
| 4  2  1 |       | 4  2 |       | 24  19 |
| 3  5  2 |   X   | 3  5 |   =   | 31  33 |
   2x3            | 2  1 |          2x2
                     3x2

4x4 + 2x3 + 1x2 = 24
4x2 + 2x5 + 1x1 = 19
3x4 + 5x3 + 2x2 = 31
3x2 + 5x5 + 2x1 = 33
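The row-by-column rule above translates directly into code. A minimal Python sketch (the deck's own Matlab/Octave screenshots are not reproduced here; in Octave this would simply be `A * B`), reproducing the 2x3 by 3x2 example:

```python
def matmul(A, B):
    """C[i][j] = sum over k of A[i][k] * B[k][j]; requires cols(A) == rows(B)."""
    assert len(A[0]) == len(B), "inner dimensions must agree"
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[4, 2, 1], [3, 5, 2]]        # 2x3
B = [[4, 2], [3, 5], [2, 1]]      # 3x2
print(matmul(A, B))               # [[24, 19], [31, 33]]
```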
Matrix Operations
1) AB ≠ BA in general

2) A(B + C) = AB + AC

3) (B + C)A = BA + CA

4) A(BC) = (AB)C = ABC

5) AB = 0 does not necessarily imply that A = 0 or B = 0

6) CA = CB does not necessarily imply that A = B

7) If C = AB then Ct = BtAt
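Properties 1, 5 and 7 can be illustrated with small concrete matrices. A sketch in Python (the specific matrices are arbitrary examples, not from the slides):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(M):
    return [list(r) for r in zip(*M)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]

# 1) AB != BA in general: B swaps columns on the right, rows on the left
print(matmul(A, B) != matmul(B, A))                                   # True

# 7) (AB)t = Bt At
print(transpose(matmul(A, B)) == matmul(transpose(B), transpose(A)))  # True

# 5) AB = 0 with A != 0 and B != 0
N1 = [[1, 0], [0, 0]]
N2 = [[0, 0], [0, 1]]
print(matmul(N1, N2))  # [[0, 0], [0, 0]]
```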
Inverse of a matrix
Division does not exist in matrix algebra except division of a matrix by a scalar, where
each element of the matrix is divided by the scalar. However, for the given set of
equations -

a11x1 + a12x2 + a13x3 = y1
a21x1 + a22x2 + a23x3 = y2
a31x1 + a32x2 + a33x3 = y3

AX = Y

it may be desirable to express x1, x2, x3 in terms of y1, y2, y3, that is, in matrix form

X = BY = A-1Y

If there is a unique solution to the equations, then matrix B exists, is the inverse of A,
and is written as A-1

B = A-1 = A+/|A|
Inverse of a matrix
AX = Y
To solve for X from the above matrix equation, both sides of the equation are
pre-multiplied by A-1

A-1AX = A-1Y
UX = A-1Y
X = A-1Y

Some other important properties of the inverse operation are

(AB)-1 = B-1A-1
(At)-1 = (A-1)t
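The adjoint-over-determinant construction of the inverse can be verified in code. A Python sketch (function names are illustrative) building A-1 = A+/|A| with exact rationals and checking A-1 A = U for the slide's coefficient matrix:

```python
from fractions import Fraction as F

def minor(M, i, j):
    return [row[:j] + row[j+1:] for k, row in enumerate(M) if k != i]

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

def inverse(M):
    """A-1 = A+ / |A| : adjoint (transposed cofactors) divided by the determinant."""
    n = len(M)
    d = det(M)
    cof = [[(-1) ** (i + j) * det(minor(M, i, j)) for j in range(n)] for i in range(n)]
    return [[F(cof[j][i], d) for j in range(n)] for i in range(n)]

A = [[3, -2, -3], [5, 3, 1], [2, 1, 2]]
Ainv = inverse(A)
# Check A-1 A = U
prod = [[sum(a * b for a, b in zip(row, col)) for col in zip(*A)] for row in Ainv]
print(prod == [[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # True
```

Cofactor-based inversion is fine for small matrices like this; for large ones, elimination-based methods are used instead since the cofactor recursion grows factorially.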
Partitioning of Matrix
A large matrix can be subdivided into several sub-matrices of smaller dimensions. This
can be used to show the specific structure of a matrix and to simplify calculations

      | A1  A2 |
A =   | A3  A4 |

If the diagonal sub-matrices A1 and A4 are square, the subdivision is called principal partitioning
Partitioning of Matrix
For mathematical operations, each sub-matrix is considered as an element of the
partitioned matrix. Addition and subtraction are performed as shown below

| A1  A2 |     | B1  B2 |     | A1±B1  A2±B2 |
| A3  A4 |  ±  | B3  B4 |  =  | A3±B3  A4±B4 |
Partitioning of Matrix
After partitioning, multiplication is performed as below. Note that the sub-matrices must
be conformable for multiplication.

| A1  A2 |     | B1  B2 |     | C1  C2 |
| A3  A4 |  X  | B3  B4 |  =  | C3  C4 |

C1 = A1B1 + A2B3          Each sub-matrix is considered as an
C2 = A1B2 + A2B4          element in the partitioned matrix
C3 = A3B1 + A4B3
C4 = A3B2 + A4B4
Partitioning of Matrix
The inverse of a partitioned matrix is calculated as below

      | A1  A2 |            | B1  B2 |
A =   | A3  A4 |     A-1 =  | B3  B4 |

B1 = (A1 - A2A4-1A3)-1     A1 and A4 must be square matrices
B2 = -B1A2A4-1
B3 = -A4-1A3B1
B4 = A4-1 - A4-1A3B2
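A quick sanity check of the four formulas: if a 2x2 matrix is partitioned into four 1x1 blocks, the block formulas reduce to scalar arithmetic and must reproduce the familiar 2x2 inverse. A hypothetical check in Python (the matrix entries are arbitrary, chosen only so all divisions are nonzero):

```python
from fractions import Fraction as F

# Partition [[a1, a2], [a3, a4]] into four 1x1 blocks
a1, a2, a3, a4 = F(3), F(-2), F(5), F(3)

b1 = 1 / (a1 - a2 * (1 / a4) * a3)     # B1 = (A1 - A2 A4^-1 A3)^-1
b2 = -b1 * a2 * (1 / a4)               # B2 = -B1 A2 A4^-1
b3 = -(1 / a4) * a3 * b1               # B3 = -A4^-1 A3 B1
b4 = (1 / a4) - (1 / a4) * a3 * b2     # B4 = A4^-1 - A4^-1 A3 B2

# Compare with the familiar 2x2 inverse: (1/det) [[a4, -a2], [-a3, a1]]
det = a1 * a4 - a2 * a3
print([b1, b2, b3, b4] == [a4 / det, -a2 / det, -a3 / det, a1 / det])  # True
```

The same formulas apply with matrix blocks, with 1/a4 replaced by the inverse of the A4 block and the order of factors preserved, since matrix multiplication does not commute.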
Linear dependence
The columns of an m x n matrix A can be written as n separate column vectors

      | a11  a12  ...  a1n |             | a11 |         | a12 |              | a1n |
A =   | a21  a22  ...  a2n |       C1 =  | a21 |   C2 =  | a22 |   ...  Cn =  | a2n |
      | ...  ...  ...  ... |             | ... |         | ... |              | ... |
      | am1  am2  ...  amn |             | am1 |         | am2 |              | amn |

The column vectors are linearly independent if the equation

p1C1 + p2C2 + ... + pnCn = 0

is satisfied only when all pk = 0 (k = 1,2,...,n).

The same is true of rows.

In other words, N rows are said to be linearly independent if no row among them
can be represented as a linear combination of the other rows.
Linear dependence example
      |  5  -3   2 |
A =   | 19  -6  -5 |
      | -2   3  -5 |

Here 5R1 - R2 + 3R3 = 0, hence the rows are dependent / not independent

Here 3C1 + 7C2 + 3C3 = 0, hence the columns are dependent / not independent

Any nonzero scalar multiple of these coefficient triplets also satisfies the relation,
so such triplets are not unique.
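Both relations can be verified by direct arithmetic. A short Python check of the row and column combinations above:

```python
A = [[5, -3, 2], [19, -6, -5], [-2, 3, -5]]
R1, R2, R3 = A

# Row relation: 5R1 - R2 + 3R3 = 0
row_comb = [5 * a - b + 3 * c for a, b, c in zip(R1, R2, R3)]
print(row_comb)  # [0, 0, 0]

# Column relation: 3C1 + 7C2 + 3C3 = 0
col_comb = [3 * row[0] + 7 * row[1] + 3 * row[2] for row in A]
print(col_comb)  # [0, 0, 0]
```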
Rank of a matrix
1) The rank of a matrix A of order m x n is equal to the maximum
number of linearly independent columns (or rows) of A
2) The column rank is equal to the row rank
3) The rank of a matrix is equal to the order of the largest non-
vanishing determinant in A
4) In the previous slide the rank of the matrix is 2.
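Rank can be computed by row elimination: reduce the matrix and count the nonzero pivot rows. A minimal Python sketch (function name is illustrative), applied to the previous slide's matrix:

```python
def rank(M, tol=1e-9):
    """Rank via Gaussian elimination: count nonzero pivot rows."""
    M = [[float(v) for v in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0  # current pivot row
    for c in range(cols):
        piv = max(range(r, rows), key=lambda k: abs(M[k][c]), default=None)
        if piv is None or abs(M[piv][c]) < tol:
            continue  # no usable pivot in this column
        M[r], M[piv] = M[piv], M[r]
        for k in range(r + 1, rows):
            f = M[k][c] / M[r][c]
            M[k] = [a - f * b for a, b in zip(M[k], M[r])]
        r += 1
        if r == rows:
            break
    return r

A = [[5, -3, 2], [19, -6, -5], [-2, 3, -5]]
print(rank(A))  # 2
```

The rank is 2: the single dependence relation among the three rows removes exactly one from the count of independent rows.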
Linear equations
• m linear equations in n variables can be represented in matrix
notation as
• AX = Y
• The augmented matrix of A, designated by Â, is formed by
adjoining the column vector Y as the (n+1)st column of A
• The necessary and sufficient condition for a system of linear
equations to have a solution is that the rank of the coefficient
matrix A be equal to the rank of the augmented matrix Â
• A unique solution exists when A is a square matrix and the rank
of A is equal to the number of columns (variables).
• If the rank of A is less than the number of equations, some of the
equations are redundant
• If the rank of A is less than the number of variables of the system,
there are an infinite number of nontrivial solutions.
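The rank condition above can be demonstrated programmatically: compare the rank of A with the rank of the augmented matrix Â. A Python sketch (reusing the elimination-based rank idea; the second system is an arbitrary inconsistent example, not from the slides):

```python
def rank(M, tol=1e-9):
    """Rank via Gaussian elimination: count nonzero pivot rows."""
    M = [[float(v) for v in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        piv = max(range(r, rows), key=lambda k: abs(M[k][c]), default=None)
        if piv is None or abs(M[piv][c]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for k in range(r + 1, rows):
            f = M[k][c] / M[r][c]
            M[k] = [a - f * b for a, b in zip(M[k], M[r])]
        r += 1
    return r

# The deck's first system: rank(A) = rank(A-hat) = 3 = number of variables,
# so a unique solution exists
A = [[3, -2, -3], [5, 3, 1], [2, 1, 2]]
y = [15, 32, 9]
aug = [row + [yi] for row, yi in zip(A, y)]
print(rank(A), rank(aug))  # 3 3

# An inconsistent system: rank(B) = 1 but the augmented rank is 2, so no solution
B = [[1, 2], [2, 4]]
print(rank(B), rank([r + [yi] for r, yi in zip(B, [1, 3])]))  # 1 2
```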
