Basic Matrix Theory

Ebook · 366 pages · 3 hours
About this ebook

Written as a guide to using matrices as a mathematical tool, this text is geared toward physical and social scientists, engineers, economists, and others who require a model for procedure rather than an exposition of theory. Knowledge of elementary algebra is the only mathematical prerequisite. Detailed numerical examples illustrate the treatment's focus on computational methods.
The first four chapters outline the basic concepts of matrix theory. Topics include the development of the concept of elementary operations and a systematic procedure for simplifying matrices as well as a method for evaluating the determinant of a given square matrix. Subsequent chapters explore important numerical procedures, including the process for approximating characteristic roots and vectors plus direct and iterative methods for inverting matrices and solving systems of equations. Solutions to the problems are included.
Language: English
Release date: May 25, 2017
ISBN: 9780486822624

    Book preview

    Basic Matrix Theory - Leonard E. Fuller

    1

    Basic Properties of Matrices

    1.1 Introduction

    One of the most widely used mathematical concepts is that of a system of linear equations in certain unknowns. Such a system arises in many diverse situations and in a variety of subjects. For such a system, a set of values for the unknowns that will satisfy all the equations is desired. In the language of matrices, a system of linear equations can be written in a very simple form. The use of properties of matrices then makes the solution of the system easier to find.

    However, this is not the only reason for studying matrix algebra. The sociologist uses matrices whose elements are zeros or ones in talking about dominance within a group. Closely allied to this application are the matrices arising in the study of communication links between pairs of people. In genetics, the relationship between frequencies of mating types in one generation and those in another can be expressed using matrices. In electrical engineering, network analysis is greatly aided by the use of matrix representations.

    Today the language of matrices is spreading to more and more fields as its usefulness is becoming recognized. The reader can probably already call to mind instances in his own field where matrices are used. It is hoped that many more applications will occur to him after this study of matrix algebra is completed.

    1.2 The Form of a Matrix

    A proper way to begin a discussion of matrices would be to give a definition. However, before doing this, it should be noted that a simple definition cannot begin to convey the concept that is involved. For this reason, a two-part discussion will follow the definition. The first part will be concerned with conveying the nature of the form of a matrix. The second part will be concerned with the properties of what are called matrix addition, matrix multiplication, and scalar multiplication. This will establish the basic algebra of matrices. Consider the following definition.

    Definition 1.1. A matrix is a rectangular array of numbers of some algebraic system.

    What does this mean? It simply means that a matrix is first of all a set of numbers arranged in a pattern that suggests the geometric form of a rectangle. Most of the time this will actually be a square. The algebraic system from which the numbers are chosen will be discussed in more detail later. Some simple examples of matrices are as follows:
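    Such examples can be written down in Python as nested lists, one inner list per row. The letters and dimensions below follow the later discussion (C is 2 × 3, D is 3 × 1, E is 3 × 2, F is 3 × 3); only the entries of F quoted later in the chapter (the diagonal 2, 7, 16 and the segment [7 9]) are fixed, and every other entry is made up for illustration.

```python
# Illustrative matrices as nested lists (one inner list per row).
C = [[1, 2, 3],
     [4, 5, 6]]        # 2 x 3

D = [[1],
     [0],
     [2]]              # 3 x 1

E = [[1, 4],
     [2, 5],
     [3, 6]]           # 3 x 2

F = [[2, 1, 5],
     [4, 7, 9],
     [3, 8, 16]]       # 3 x 3 (square)
```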

    The brackets [ ] used to enclose the array bring out the rectangular form. Sometimes large parentheses ( ) are used, whereas other authors prefer double vertical lines instead of the brackets. Regardless of the notation, the numbers of the array are set apart as an entity by the symbolism. These numbers are often referred to as the elements of the matrix. The numbers in a horizontal line constitute a row of the matrix, those in a vertical line a column. The rows are numbered from top to bottom, while the columns are numbered from left to right.

    It is sometimes necessary in a discussion to refer to a matrix that has been given. To avoid having to write it out completely every time, it is customary to label matrices with capital letters A, B, C, etc. as was done in the example above. In case the matrix being referred to is a general one, then its elements are often denoted with the corresponding small letters with numerical subscripts. The next example will illustrate this symbolism. With this notation one knows at once that capital letters refer to matrices and small letters to the elements.

    When it is necessary in a discussion to talk about a general matrix A, it will be assumed that A consists of a set of mn numbers arranged in n rows with m numbers in each row. The individual elements will be denoted as ars where the r denotes the row in which the element belongs, and the s denotes the column. In other words, a23 will be the element in the second row and third column. The double subscript is the address of the element; it tells in which row and in which column the element may be found. This is definite since there can be, in each row, only one element that is also in a given column. Consider then the following examples.

    In these general matrices, notice that the first subscript does denote the row in which the element occurs, whereas the second indicates the column of the entry. When the number of rows and the number of columns are known, the shorter notation A = (ars) is often used. The real significance of this idea will appear several times in this chapter.
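    In zero-based Python indexing, the one-based address ars of the text corresponds to A[r - 1][s - 1]. A minimal sketch, using the illustrative C above:

```python
# a23 is the element in the second row and third column (one-based),
# which is C[1][2] in Python's zero-based indexing.
a23 = C[2 - 1][3 - 1]
print(a23)  # -> 6 for the illustrative C

# With n rows and m columns known, the dimension n x m is recoverable:
n, m = len(C), len(C[0])
print(n, m)  # -> 2 3
```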

    Another quite useful concept connected with the form of matrices is given by the following definition.

    Definition 1.2. The dimension of a matrix A with n rows and m columns is n × m.

    In the numerical examples given before, the dimensions are 2 × 3, 3 × 1, 3 × 2, and 3 × 3, respectively. In the examples of the general matrices, A is of dimension 4 × 3 and B is of dimension 3 × 4. The dimension of a matrix is often referred to as the size of the matrix.

    A special kind of a matrix is one that has only one row or only one column. These are useful enough to have a special name given to them. This designation is indicated in the next definition.

    Definition 1.3. A row vector is a 1 × m matrix. A column vector is an n × 1 matrix.

    Using these concepts, a matrix can be thought of as being composed of a set of row vectors placed one under the other. These can be numbered in order from top to bottom so that, in the double subscript notation, the first number refers to the row vector to which the element belongs. Similarly, a matrix can be considered as a set of column vectors placed side by side. If these are numbered from left to right, the second subscript of the address of each element would then refer to a column vector in this set.
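    Reading a matrix as a stack of row vectors or as a file of column vectors is immediate in this representation; a small sketch with one-based indices, as in the text:

```python
def row_vector(A, r):
    # The r-th row vector (one-based), read left to right.
    return A[r - 1]

def column_vector(A, s):
    # The s-th column vector (one-based), read top to bottom.
    return [row[s - 1] for row in A]

print(row_vector(F, 2))     # -> [4, 7, 9]
print(column_vector(F, 2))  # -> [1, 7, 8]
```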

    There are occasions when reference will be made to the row vectors of a matrix or to the column vectors. In this case, the matrix is to be considered as indicated above. Sometimes the term vectors of a matrix will be used. In this case, the reference is to either row vectors or column vectors or both.

    1.3 The Transpose of a Matrix

    Associated with every 1 × m row vector is an m × 1 column vector. This column vector has the same numbers appearing in the same order as in the row vector. The only difference is that they are written vertically for the column vector and horizontally for the row vector. The column vector is referred to as the transpose of the row vector. Also, the row vector is called the transpose of the column vector.

    This concept is readily extensible to matrices. With a given matrix A, one can associate a matrix A′ known as the transpose of A. The column vectors of A′ are the transposes of the corresponding row vectors of A; or, viewed another way, the row vectors of A′ are the transposes of the corresponding column vectors of A. This concept is expressed in the next definition in terms of the addresses of the elements.

    Definition 1.4. If A = (ars), then A′ = (asr).

    This definition says that the elements of A′ are the same as those of A, but with the subscripts interchanged. In other words, the element of A with address (r, s) has address (s, r) in A′.
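    Definition 1.4 translates directly into code: entry (r, s) of the transpose is entry (s, r) of the original matrix. A minimal sketch:

```python
def transpose(A):
    # Entry (s, r) of A becomes entry (r, s) of A'; equivalently,
    # the rows of A become the columns of the result.
    n, m = len(A), len(A[0])
    return [[A[r][s] for r in range(n)] for s in range(m)]

print(transpose(C))  # -> [[1, 4], [2, 5], [3, 6]], a 3 x 2 matrix
```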

    The concept of the transpose can be made clearer by referring to the previous examples. The matrix C is made up of two row vectors. The transposes of these two vectors are

    respectively. This means that C′ has these two vectors as its column vectors. In other words,

    Similarly, the transposes of the other matrices are

    For the two general matrices, the transposes are

    In all of these examples notice how the addresses of the elements of the original matrix are reversed. Of course, the elements whose row and column addresses are the same have their addresses unchanged in the transpose. Note, too, how the column vectors of the transpose are the corresponding row vectors of the original matrix. This also applies to the row vectors of the transpose, for they are the same as the corresponding column vectors of the original matrix.

    It is apparent that if the dimension of A is n × m, then the dimension of A′ is m × n. This is a consequence of the definitions of transpose and dimension. For instance, the dimensions of the transposes of the numerical examples are 3 × 2, 1 × 3, 2 × 3, and 3 × 3, respectively. Similarly, the dimension of A′ is 3 × 4, whereas that of B′ is 4 × 3.

    Another consequence of the definition of transpose is (A′)′ = A. In other words, the transpose of the transpose of A is A itself. This becomes apparent on considering the row vectors of A. They form the corresponding column vectors of A′. The column vectors of A′ in turn determine the corresponding row vectors of its transpose (A′)′. But this means that the row vectors of A and (A′)′ are the same, so they are the same matrix.
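    With the sketch above, the property (A′)′ = A can be checked directly:

```python
# Transposing twice returns the original matrix, element by element.
assert transpose(transpose(F)) == F
```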

    1.4 Submatrices

    The last topic concerned with the form of a matrix to be considered is based on the next definition.

    Definition 1.5. A submatrix of a matrix A is an array formed by deleting one or more vectors of A.

    The definition does not specify whether the vectors deleted are row vectors or column vectors. It also allows the deletion of a combination of row vectors and column vectors. Some examples of what is meant will illustrate the concept.

    The deletion of the second column vector of F gives the submatrix

    If the third row vector were also deleted, the resulting submatrix would be

    If the first and third row vectors and the first column vector of F were deleted, there would result the row vector [7 9]. It can be easily seen that there are a variety of submatrices that can be formed from a given matrix.
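    Definition 1.5 can be sketched as a function that deletes the listed row and column vectors (one-based indices, matching the text):

```python
def submatrix(A, drop_rows=(), drop_cols=()):
    # Keep every row and column whose one-based index is not deleted.
    return [[x for s, x in enumerate(row, start=1) if s not in drop_cols]
            for r, row in enumerate(A, start=1) if r not in drop_rows]

print(submatrix(F, drop_cols={2}))                    # F without its second column
print(submatrix(F, drop_rows={1, 3}, drop_cols={1}))  # -> [[7, 9]]
```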

    If the matrix is square, one can draw a diagonal from the upper left corner to the lower right corner. This line would pass through those elements whose row and column subscripts are the same. These elements are known as the diagonal elements. For the matrix F, the diagonal elements are 2, 7, and 16. One can form submatrices by deleting corresponding row and column vectors. If the original matrix is square, the submatrices formed are also square. More important, however, their diagonal elements are diagonal elements of the original matrix. These are called principal submatrices. For the matrix F, the principal submatrices are

    In the first three of these principal submatrices, a single row and column are deleted from the original matrix; in the last three, two rows and columns are deleted. In all six of these matrices, the diagonal elements are diagonal elements of F.
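    Since a principal submatrix deletes the same indices from the rows and from the columns, it can be built on the submatrix sketch above; enumerating the nonempty proper deletions of a 3 × 3 matrix gives the six principal submatrices named in the text:

```python
def principal_submatrix(A, drop):
    # Delete the same one-based indices from both rows and columns,
    # so the diagonal of the result lies on the diagonal of A.
    return submatrix(A, drop_rows=drop, drop_cols=drop)

for drop in ({1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}):
    print(principal_submatrix(F, drop))
```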

    1.5 The Elements of a Matrix

    In the definition of a matrix it was stated that the elements belong to an algebraic system. In nearly all of the work that follows, the elements will be real numbers; however, on occasion, they may be complex numbers. In either of these cases, the elements will belong to an algebraic system known as a field. These numbers of the algebraic system are often referred to as scalars. In the section of this chapter on partitioning, the elements will be matrices of smaller size. This way of considering a matrix can be very useful. Later chapters will have sections depending upon partitioning of matrices.

    For matrices whose elements belong to the same algebraic system it is possible to define equality of matrices. Notice that the following definition gives this in terms of equality among the elements.

    Definition 1.6. If A and B are both of dimension n × m, and ars = brs for all r and s then A = B.

    This definition says that the matrices must be of the same size and equal element by element. The first requirement is actually implied by the second and is included only for clarity. It should be noted how the definition is made to depend upon the corresponding property of the elements. This will be characteristic of nearly all of the properties of matrices that will be discussed.
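    Definition 1.6 as a sketch: the same dimension, then equality element by element. (For nested lists, Python's built-in A == B comparison already performs this test.)

```python
def matrices_equal(A, B):
    # Same dimension first, then element-by-element comparison.
    if len(A) != len(B) or len(A[0]) != len(B[0]):
        return False
    return all(a == b for ra, rb in zip(A, B) for a, b in zip(ra, rb))
```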

    1.6 The Algebra of Real Numbers

    The arrays of numbers that form a matrix are of little use by themselves. One has to be able to manipulate them according to a set of rules. The set of rules in this case consists of the definitions of three operations. As a consequence of these definitions each of the operations has some important properties. These operations will be known as addition of matrices, multiplication of matrices, and scalar multiplication of matrices. It should be obvious that the first two cannot be the familiar operations upon real numbers that bear these names. However, these new operations will be defined in terms of multiplications and additions of the real number elements of the matrices. This is the justification for using the terminology addition and multiplication.

    Before going to these definitions, a review of some topics in the algebra of real numbers will be made. The purpose will be twofold: first, the properties of the operations of addition and multiplication need to be written down for reference; second, the same pattern of discussion will be used for the operations with matrices. It will be found that not all of the properties for real numbers will carry over to matrices because of the definitions of the operations that are used. These points will be brought out later on as they arise.

    Assume, then, that the familiar operations of addition and multiplication of real numbers are known. These are called closed binary operations, or just binary operations, because the result of operating with two real numbers is a third real number. Addition and multiplication are actually defined only for two real numbers; that is why they are called binary operations. To add three numbers, one adds two of them, and then to this sum one adds the third. To add four numbers, one adds a third number to the sum of two of them, and then adds the fourth number to this sum. The same is true for multiplication. If more than four numbers are added or multiplied, this process is simply continued a step at a time.

    The operation of addition of real numbers has the following properties.

    1. Addition is commutative, that is,

    a + b = b + a

    We can add two real numbers in either order.

    2. Addition is associative, that is,

    (a + b) + c = a + (b + c)

    If to the sum of two real numbers a third real number is added, then the result is the same as if to the first number the sum of the last two numbers is added.

    3. For addition there is a unique identity element 0, that is,

    a + 0 = 0 + a = a

    The number 0 added to any real number or any real number added to 0 always gives the real number.

    4. For every real number a there exists a unique real number denoted as –a, such that

    a + (–a) = (–a) + a = 0

    This is known as the additive inverse of a. It is also called the negative of a.

    Any system having a binary operation defined with the last three properties is called a group. In case the first property is also satisfied, it is called a commutative group. This means then that the real numbers form a commutative group under the binary operation of addition.
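    The four properties can be spot-checked numerically; a check with particular integers illustrates, of course, rather than proves them:

```python
a, b, c = 5, -3, 8
assert a + b == b + a              # 1. commutative
assert (a + b) + c == a + (b + c)  # 2. associative
assert a + 0 == 0 + a == a         # 3. identity element 0
assert a + (-a) == (-a) + a == 0   # 4. additive inverse
```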

    With respect to the operation of multiplication, the number 0 is usually omitted. It is then noted that multiplication by 0 gives 0, and if a product of two real numbers is 0, at least one of them is 0. The following properties are true for this operation.

    1. Multiplication is commutative, that is,

    ab = ba

    We can multiply two real numbers in either order.

    2. Multiplication is associative, that is,

    (ab)c = a(bc)

    If the product of two real numbers is multiplied by a third real number then the result is the same as if the first number is multiplied by the product of the last two numbers.

    3. For multiplication there is a unique identity element 1, that is,

    1 · a = a · 1 = a

    The number 1 multiplied by any real number gives that number.

    4. For every nonzero real number a there exists a unique real number denoted as a⁻¹ such that

    a · a⁻¹ = a⁻¹ · a = 1

    This element is known as the multiplicative inverse of a. It is also called the reciprocal of a.
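    The same spot check works for multiplication; exact fractions avoid floating-point round-off when forming reciprocals:

```python
from fractions import Fraction

a, b, c = Fraction(5), Fraction(-3, 2), Fraction(8)
assert a * b == b * a               # 1. commutative
assert (a * b) * c == a * (b * c)   # 2. associative
assert 1 * a == a * 1 == a          # 3. identity element 1
assert a * a**-1 == a**-1 * a == 1  # 4. multiplicative inverse
```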

    It follows that the real numbers without 0 also form a commutative group under the binary operation of multiplication.
