Find the rank of a matrix: methods and examples


A number r is called the rank of a matrix A if:
1) the matrix A has a minor of order r that is different from zero;
2) all minors of order (r+1) and higher, if they exist, are equal to zero.
In other words, the rank of a matrix is the highest order of a non-zero minor.
Notation: rang A, r(A), or r.
It follows from the definition that r is a positive integer for any non-zero matrix. For the zero matrix, the rank is taken to be zero.
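As a quick numerical illustration of this definition (my own sketch, not part of the original article, assuming Python with numpy is available), note that `numpy.linalg.matrix_rank` computes the rank via the SVD rather than via minors, but the result agrees with the definition above:

```python
import numpy as np

# Rank of a sample 3x3 matrix.
A = np.array([[1, 2, 3],
              [2, 4, 6],   # proportional to row 1, so linearly dependent
              [0, 1, 1]])
r = np.linalg.matrix_rank(A)
print(r)  # 2: row 2 equals 2 * row 1, only two rows are independent

# The zero matrix has rank 0, exactly as the definition stipulates.
print(np.linalg.matrix_rank(np.zeros((3, 4))))  # 0
```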


Definition. Let a matrix of rank r be given. Any non-zero minor of the matrix of order r is called a basis minor, and the rows and columns in which it lies are called basis rows and columns.
By this definition, a matrix A may have several basis minors.

The rank of the identity matrix E of order n is n (the number of its rows).

Example 1. Two matrices A and B are given, together with their minors M 1 and M 2. Which of the minors can be taken as a basis minor?
Solution. Since M 1 = 0, it cannot be a basis minor for either matrix. The minor M 2 = -9 ≠ 0 has order 2, so it can serve as a basis minor of A and/or B, provided those matrices have rank 2. Since det B = 0 (it is a determinant with two proportional columns), rang B = 2 and M 2 can be taken as a basis minor of matrix B. The rank of matrix A is 3, because det A = -27 ≠ 0; consequently, the order of a basis minor of A must equal 3, so M 2 is not a basis minor of A. Note that A has a single basis minor, equal to the determinant of A itself.

Theorem (on the basis minor). Every row (column) of a matrix is a linear combination of its basis rows (columns).
Corollaries of the theorem.

  1. Any r+1 rows (columns) of a matrix of rank r are linearly dependent.
  2. If the rank of a matrix is less than the number of its rows (columns), then its rows (columns) are linearly dependent. If rang A equals the number of its rows (columns), then the rows (columns) are linearly independent.
  3. The determinant of a matrix A equals zero if and only if its rows (columns) are linearly dependent.
  4. If a row (column) of a matrix is multiplied by a non-zero number and added to another row (column), the rank of the matrix does not change.
  5. If a row (column) that is a linear combination of the other rows (columns) is crossed out of a matrix, the rank of the matrix does not change.
  6. The rank of a matrix equals the maximum number of its linearly independent rows (columns).
  7. The maximum number of linearly independent rows equals the maximum number of linearly independent columns.

Example 2. Find the rank of a matrix.
Solution. Based on the definition of matrix rank, we look for a non-zero minor of the highest order. First we transform the matrix to a simpler form. To do this, multiply the first row of the matrix by (-2) and add it to the second, then multiply it by (-1) and add it to the third.

To work with the concept of matrix rank, we will need information from the topic "Algebraic complements and minors. Types of minors and algebraic complements." First of all, this concerns the term "matrix minor", since we will determine the rank of a matrix precisely through its minors.

The rank of a matrix is the maximum order of its minors among which there is at least one that is not equal to zero.

Equivalent matrices are matrices whose ranks are equal to each other.

Let us explain in more detail. Suppose that among the second-order minors there is at least one different from zero, while all minors of order higher than two are equal to zero. Conclusion: the rank of the matrix is 2. Or, for example, suppose among the tenth-order minors there is at least one that is non-zero, while all minors of order higher than 10 are zero. Conclusion: the rank of the matrix is 10.

The rank of the matrix $A$ is denoted as follows: $\rang A$ or $r(A)$. The rank of the zero matrix $O$ is assumed to be zero, $\rang O=0$. Let me remind you that to form a matrix minor you need to cross out rows and columns, but it is impossible to cross out more rows and columns than the matrix itself contains. For example, if the matrix $F$ has size $5\times 4$ (i.e. contains 5 rows and 4 columns), then the maximum order of its minors is four. It will no longer be possible to form minors of the fifth order, since they will require 5 columns (and we have only 4). This means that the rank of the matrix $F$ cannot be more than four, i.e. $\rang F≤4$.

In more general form, the above means that if a matrix contains $m$ rows and $n$ columns, then its rank cannot exceed the smallest of $m$ and $n$, i.e. $\rang A≤\min(m,n)$.
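This bound is easy to confirm numerically; the following is my own sketch (not from the original text), assuming Python with numpy:

```python
import numpy as np

# A matrix with 5 rows and 4 columns cannot have rank above min(5, 4) = 4,
# whatever its entries; random entries will almost surely attain 4.
rng = np.random.default_rng(42)    # arbitrary seed, my assumption
F = rng.standard_normal((5, 4))
r = int(np.linalg.matrix_rank(F))
print(r)  # 4 for this seed; never more than 4
assert r <= min(F.shape)
```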

In principle, from the very definition of rank follows the method for finding it. The process of finding the rank of a matrix, by definition, can be schematically represented as follows:

Let me explain this diagram in more detail. Let's start reasoning from the very beginning, i.e. from the first order minors of some matrix $A$.

  1. If all first-order minors (i.e., elements of the matrix $A$) are equal to zero, then $\rang A=0$. If among the first-order minors there is at least one that is not equal to zero, then $\rang A≥ 1$. Let's move on to checking second-order minors.
  2. If all second-order minors are equal to zero, then $\rang A=1$. If among the second-order minors there is at least one that is not equal to zero, then $\rang A≥ 2$. Let's move on to checking third-order minors.
  3. If all third-order minors are equal to zero, then $\rang A=2$. If among the third-order minors there is at least one that is not equal to zero, then $\rang A≥ 3$. Let's move on to checking fourth-order minors.
  4. If all fourth-order minors are equal to zero, then $\rang A=3$. If among the fourth-order minors there is at least one that is not equal to zero, then $\rang A≥ 4$. We move on to checking fifth-order minors and so on.

What awaits us at the end of this procedure? It is possible that among the minors of order k there is at least one different from zero, while all minors of order (k+1) equal zero. This means that k is the maximum order of minors among which there is at least one non-zero, i.e. the rank equals k. A different situation is possible: among the minors of order k there is at least one non-zero, but it is no longer possible to form minors of order (k+1). In this case the rank of the matrix also equals k. In short, the order of the last non-zero minor we compose equals the rank of the matrix.
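The procedure just described can be sketched directly in code. This is my own illustration (not part of the original article), in Python with numpy; the function name and the zero tolerance are assumptions:

```python
import numpy as np
from itertools import combinations

def rank_by_minors(matrix, eps=1e-9):
    """Rank straight from the definition: the largest order k for which
    some k x k minor (determinant of a k x k submatrix) is non-zero."""
    A = np.asarray(matrix, dtype=float)
    m, n = A.shape
    rank = 0
    for k in range(1, min(m, n) + 1):
        found = False
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                minor = np.linalg.det(A[np.ix_(rows, cols)])
                if abs(minor) > eps:   # a non-zero minor of order k exists
                    found = True
                    break
            if found:
                break
        if not found:                  # all order-k minors vanish
            return rank                # so the rank is k - 1
        rank = k
    return rank
```

As the text warns, this brute-force search is very expensive for large matrices; it is shown only to make the definition concrete.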

Let's move on to examples in which the process of finding the rank of a matrix, by definition, will be clearly illustrated. Let me emphasize once again that in the examples of this topic we will begin to find the rank of matrices using only the definition of rank. Other methods (calculating the rank of a matrix using the method of bordering minors, calculating the rank of a matrix using the method of elementary transformations) are discussed in the following topics.

By the way, it is not at all necessary to start the procedure for finding the rank with minors of the smallest order, as was done in examples No. 1 and No. 2. You can immediately move on to minors of higher orders (see example No. 3).

Example No. 1

Find the rank of the matrix $A=\left(\begin{array}{ccccc} 5 & 0 & -3 & 0 & 2 \\ 7 & 0 & -4 & 0 & 3 \\ 2 & 0 & -1 & 0 & 1 \end{array} \right)$.

This matrix has size $3\times 5$, i.e. contains three rows and five columns. Of the numbers 3 and 5, the minimum is 3, therefore the rank of the matrix $A$ is no more than 3, i.e. $\rang A≤ 3$. And this inequality is obvious, since we will no longer be able to form fourth-order minors - they require 4 rows, and we have only 3. Let’s move on directly to the process of finding the rank of a given matrix.

Among the first order minors (i.e. among the elements of the matrix $A$) there are non-zero ones. For example, 5, -3, 2, 7. In general, we are not interested in the total number of non-zero elements. There is at least one non-zero element - and that's enough. Since among the first-order minors there is at least one non-zero, we conclude that $\rang A≥ 1$ and proceed to checking the second-order minors.

Let's start examining second-order minors. For example, at the intersection of rows No. 1, No. 2 and columns No. 1, No. 4 we find the elements of the minor $\left|\begin{array}{cc} 5 & 0 \\ 7 & 0 \end{array} \right|$. All elements of the second column of this determinant are zero, so the determinant itself is zero, i.e. $\left|\begin{array}{cc} 5 & 0 \\ 7 & 0 \end{array} \right|=0$ (see property No. 3 in the topic on properties of determinants). Or you can simply compute this determinant using formula No. 1 from the section on calculating second- and third-order determinants:

$$ \left|\begin{array}{cc} 5 & 0 \\ 7 & 0 \end{array} \right|=5\cdot 0-0\cdot 7=0. $$

The first second-order minor we tested turned out to be zero. What does this mean? It means we need to keep checking second-order minors. Either they will all turn out to be zero (and then the rank will equal 1), or among them there will be at least one non-zero minor. Let's try to make a better choice by writing down the second-order minor whose elements lie at the intersection of rows No. 1, No. 2 and columns No. 1 and No. 5: $\left|\begin{array}{cc} 5 & 2 \\ 7 & 3 \end{array} \right|$. Let's find the value of this second-order minor:

$$ \left|\begin{array}{cc} 5 & 2 \\ 7 & 3 \end{array} \right|=5\cdot 3-2\cdot 7=1. $$

This minor is not equal to zero. Conclusion: among the second-order minors there is at least one non-zero. Therefore $\rang A≥ 2$. We need to move on to studying third-order minors.

If we choose column No. 2 or column No. 4 to form third-order minors, then such minors will be equal to zero (since they will contain a zero column). It remains to check only one third-order minor, the elements of which are located at the intersection of columns No. 1, No. 3, No. 5 and rows No. 1, No. 2, No. 3. Let's write down this minor and find its value:

$$ \left|\begin{array}{ccc} 5 & -3 & 2 \\ 7 & -4 & 3 \\ 2 & -1 & 1 \end{array} \right|=-20-18-14+16+21+15=0. $$

So, all third order minors are equal to zero. The last non-zero minor we compiled was of second order. Conclusion: the maximum order of minors, among which there is at least one non-zero, is 2. Therefore, $\rang A=2$.

Answer: $\rang A=2$.
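The result can be double-checked numerically; this sketch (my addition, assuming Python with numpy) is not part of the original solution:

```python
import numpy as np

# The 3x5 matrix from Example No. 1.
A = np.array([[5, 0, -3, 0, 2],
              [7, 0, -4, 0, 3],
              [2, 0, -1, 0, 1]])
r = np.linalg.matrix_rank(A)
print(r)  # 2
# Indeed, row 1 equals row 2 minus row 3, so only two rows are independent.
```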

Example No. 2

Find the rank of the matrix $A=\left(\begin{array}{cccc} -1 & 3 & 2 & -3\\ 4 & -2 & 5 & 1\\ -5 & 0 & -4 & 0\\ 9 & 7 & 8 & -7 \end{array} \right)$.

We have a square matrix of the fourth order. Let us immediately note that the rank of this matrix does not exceed 4, i.e. $\rang A≤ 4$. Let's start finding the rank of the matrix.

Among the first-order minors (i.e., among the elements of the matrix $A$) there is at least one that is not equal to zero, therefore $\rang A≥ 1$. Let's move on to checking second-order minors. For example, at the intersection of rows No. 2, No. 3 and columns No. 1 and No. 2, we obtain the following second-order minor: $\left| \begin{array}{cc} 4 & -2 \\ -5 & 0 \end{array} \right|$. Let's calculate it:

$$\left| \begin{array}{cc} 4 & -2 \\ -5 & 0 \end{array} \right|=0-10=-10. $$

Among the second-order minors there is at least one that is not equal to zero, so $\rang A≥ 2$.

Let's move on to third-order minors. Let's find, for example, a minor whose elements are located at the intersection of rows No. 1, No. 3, No. 4 and columns No. 1, No. 2, No. 4:

$$\left| \begin{array}{ccc} -1 & 3 & -3\\ -5 & 0 & 0\\ 9 & 7 & -7 \end{array} \right|=105-105=0. $$

Since this third-order minor turned out to be equal to zero, it is necessary to investigate another third-order minor. Either all of them will be equal to zero (then the rank will be equal to 2), or among them there will be at least one that is not equal to zero (then we will begin to study fourth-order minors). Let's consider a third-order minor, the elements of which are located at the intersection of rows No. 2, No. 3, No. 4 and columns No. 2, No. 3, No. 4:

$$\left| \begin{array}{ccc} -2 & 5 & 1\\ 0 & -4 & 0\\ 7 & 8 & -7 \end{array} \right|=-28. $$

Among the third-order minors there is at least one non-zero, so $\rang A≥ 3$. Let's move on to checking fourth-order minors.

Any fourth-order minor is located at the intersection of four rows and four columns of the matrix $A$. In other words, the fourth-order minor is the determinant of the matrix $A$, since this matrix contains 4 rows and 4 columns. The determinant of this matrix was calculated in example No. 2 of the topic “Reducing the order of the determinant. Decomposing the determinant in a row (column)”, so let’s just take the finished result:

$$\left| \begin{array}{cccc} -1 & 3 & 2 & -3\\ 4 & -2 & 5 & 1\\ -5 & 0 & -4 & 0\\ 9 & 7 & 8 & -7 \end{array}\right|=86. $$

So the fourth order minor is not equal to zero. We can no longer form minors of the fifth order. Conclusion: the highest order of minors, among which there is at least one non-zero, is 4. Result: $\rang A=4$.

Answer: $\rang A=4$.
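Here too a numerical check is easy; the sketch below (my addition, in Python with numpy) confirms both the fourth-order determinant and the rank:

```python
import numpy as np

# The 4x4 matrix from Example No. 2.
A = np.array([[-1, 3, 2, -3],
              [4, -2, 5, 1],
              [-5, 0, -4, 0],
              [9, 7, 8, -7]])
d = round(np.linalg.det(A))     # the single fourth-order minor
r = np.linalg.matrix_rank(A)
print(d, r)  # 86 4
```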

Example No. 3

Find the rank of the matrix $A=\left(\begin{array}{cccc} -1 & 0 & 2 & -3\\ 4 & -2 & 5 & 1\\ 7 & -4 & 0 & -5 \end{array} \right)$.

Let us immediately note that this matrix contains 3 rows and 4 columns, so $\rang A≤ 3$. In the previous examples, we began the process of finding the rank by considering minors of the smallest (first) order. Here we will try to immediately check the minors of the highest possible order. For the matrix $A$ these are the third order minors. Let's consider a third-order minor, the elements of which lie at the intersection of rows No. 1, No. 2, No. 3 and columns No. 2, No. 3, No. 4:

$$\left| \begin{array}{ccc} 0 & 2 & -3\\ -2 & 5 & 1\\ -4 & 0 & -5 \end{array} \right|=-8-60-20=-88. $$

So, the highest order of minors among which there is at least one non-zero is 3. Therefore, the rank of the matrix is 3, i.e. $\rang A=3$.

Answer: $\rang A=3$.
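The "start from the highest order" shortcut is easy to replay in code; this sketch (my addition, assuming Python with numpy) computes the single third-order minor used above:

```python
import numpy as np

# The 3x4 matrix from Example No. 3.
A = np.array([[-1, 0, 2, -3],
              [4, -2, 5, 1],
              [7, -4, 0, -5]])
M = A[:, [1, 2, 3]]              # columns No. 2, 3, 4 (zero-based 1, 2, 3)
d = round(np.linalg.det(M))
r = np.linalg.matrix_rank(A)
print(d, r)  # -88 3  (one non-zero third-order minor settles the rank)
```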

In general, finding the rank of a matrix by definition is quite a labor-intensive task. For example, a relatively small matrix of size $5\times 4$ has 60 second-order minors. Even if 59 of them are zero, the 60th may turn out to be non-zero; then you would have to examine the third-order minors, of which this matrix has 40. Usually one tries to use less cumbersome methods, such as the method of bordering minors or the method of elementary transformations.

Definition. The rank of a matrix is the maximum number of its linearly independent rows, considered as vectors.

Theorem 1 on the rank of a matrix. The rank of a matrix equals the maximum order of a non-zero minor of the matrix.

We already discussed the concept of a minor in the lesson on determinants, and now we will generalize it. Take some number of rows and the same number of columns of the matrix, this number being less than the number of rows and columns of the matrix itself. At the intersection of the chosen rows and columns there is a square matrix of lower order than the original one. The determinant of this submatrix is a minor of order k, if the chosen number of rows and columns is denoted by k.

Definition. A minor of order (r+1) that contains a given minor of order r is called a bordering minor for the given minor.

Two methods of finding the rank of a matrix are used most often: the method of bordering minors and the method of elementary transformations (the Gauss method).

When using the bordering minors method, the following theorem is used.

Theorem 2 on the rank of the matrix. If a non-zero minor of order r can be composed from the elements of a matrix, and all the minors bordering it are equal to zero (or do not exist), then the rank of the matrix equals r.

When using the method of elementary transformations, the following property is used:

If elementary transformations yield a trapezoidal (echelon) matrix equivalent to the original one, then the rank of this matrix is the number of its rows other than rows consisting entirely of zeros.

Finding the rank of a matrix using the method of bordering minors

A bordering minor is a minor of higher order relative to a given one, provided this higher-order minor contains the given minor.

For example, given the matrix

Let's take a minor

The bordering minors will be:

The algorithm for finding the rank of a matrix is as follows.

1. Find second-order minors that are not equal to zero. If all second-order minors are equal to zero, then the rank of the matrix equals one (r = 1), assuming the matrix is not the zero matrix.

2. If there is at least one non-zero minor of the second order, then we compose the bordering minors of the third order. If all bordering minors of the third order are equal to zero, then the rank of the matrix equals two (r = 2).

3. If at least one of the bordering minors of the third order is non-zero, then we compose the bordering minors of the fourth order. If all bordering minors of the fourth order are equal to zero, then the rank of the matrix equals three (r = 3).

4. Continue this way as long as the matrix size allows.

Example 1. Find the rank of a matrix

.

Solution. Minor of the second order .

Let's border it. There will be four bordering minors:

,

,

Thus, all bordering minors of the third order are equal to zero; therefore, the rank of this matrix equals two (r = 2).

Example 2. Find the rank of a matrix

Solution. The rank of this matrix equals 1, since all second-order minors of the matrix are zero (here, as with the bordering minors in the two examples that follow, dear students are invited to verify this themselves, perhaps using the rules for calculating determinants), while among the first-order minors, that is, the elements of the matrix, there are non-zero ones.

Example 3. Find the rank of a matrix

Solution. The second-order minor of this matrix is non-zero, while all third-order minors of the matrix are equal to zero. Therefore, the rank of this matrix is two.

Example 4. Find the rank of a matrix

Solution. The rank of this matrix is 3, since the only third-order minor of this matrix equals 3 and is therefore non-zero.

Finding the rank of a matrix using the method of elementary transformations (Gauss method)

Already in Example 1 it is clear that determining the rank of a matrix by the method of bordering minors requires calculating a large number of determinants. There is, however, a way to reduce the amount of computation to a minimum. The method is based on elementary matrix transformations and is also called the Gauss method.

The following operations are understood as elementary matrix transformations:

1) multiplying any row or column of a matrix by a number other than zero;

2) adding to the elements of any row or column of the matrix the corresponding elements of another row or column, multiplied by the same number;

3) swapping two rows or columns of the matrix;

4) removing zero rows, that is, rows all of whose elements are equal to zero;

5) deleting all but one of any set of proportional rows.

Theorem. Elementary transformations do not change the rank of the matrix. In other words, if elementary transformations take the matrix A to the matrix B, then rang A = rang B.
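A quick illustration of this theorem (my own sketch in Python with numpy, not from the original text): apply each kind of elementary row operation and watch the rank stay put.

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 5., 7.],
              [3., 7., 10.]])   # row 3 = row 1 + row 2, so the rank is 2
r_before = np.linalg.matrix_rank(A)

B = A.copy()
B[1] -= 2 * B[0]          # add a multiple of one row to another
B[2] *= 5.0               # multiply a row by a non-zero number
B[[0, 2]] = B[[2, 0]]     # swap two rows
r_after = np.linalg.matrix_rank(B)
print(r_before, r_after)  # 2 2
```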

We will also consider an important practical application of the topic: studying a system of linear equations for consistency.

What is the rank of a matrix?

The humorous epigraph of the article contains a large share of truth. We usually associate the word "rank" with some kind of hierarchy, most often with a career ladder. The more knowledge, experience, abilities, and connections a person has, the higher his position and range of opportunities. In youth slang, rank means the overall degree of "coolness".

And our mathematical brethren live by the same principles. Let's take a few random zero matrices for a walk:

Think about it: if a matrix consists entirely of zeros, what rank can we even speak of? Everyone is familiar with the informal expression "a total zero". In the society of matrices everything is exactly the same:

The rank of a zero matrix of any size equals zero.

Note: the zero matrix is denoted by the Greek letter "theta".

To better understand the rank of a matrix, from here on I will draw on material from analytical geometry. Consider the zero vector of our three-dimensional space: it does not set any specific direction and is useless for building an affine basis. From an algebraic point of view, the coordinates of this vector are written in a "one by three" matrix, and it is logical (in the indicated geometric sense) to assume that the rank of this matrix is zero.

Now let's look at a few non-zero column vectors and row vectors:


Each specimen has at least one non-zero element, and that's already something!

The rank of any non-zero row vector (column vector) equals one.

And generally speaking: if a matrix of arbitrary size has at least one non-zero element, then its rank is at least one.

Algebraic row vectors and column vectors are abstract to a certain extent, so let's turn to the geometric association again. A non-zero vector sets a definite direction in space and is suitable for constructing a basis, so the rank of such a matrix is considered equal to one.

Theoretical information: in linear algebra, a vector is an element of a vector space (defined by 8 axioms), which in particular can be an ordered row (or column) of real numbers with the operations of addition and multiplication by a real number defined on them. More detailed information about vectors can be found in the article Linear transformations.

The rows of this matrix are linearly dependent (expressed through each other). From a geometric point of view, the second row contains the coordinates of a collinear vector, which does not advance matters at all in building a three-dimensional basis, being superfluous in this sense. Thus, the rank of this matrix is also equal to one.

Let's rewrite the coordinates of the vectors into columns (i.e., transpose the matrix):

What has changed in terms of rank? Nothing. The columns are proportional, which means the rank equals one. By the way, note that all three rows are also proportional. They can be identified with the coordinates of three collinear vectors of the plane, of which only one is useful for constructing a "flat" basis. And this fully agrees with our geometric sense of rank.

An important statement follows from the above example:

The rank of a matrix with respect to its rows equals its rank with respect to its columns. I already mentioned this briefly in the lesson on effective methods for calculating the determinant.

Note: linear dependence of rows implies linear dependence of columns (and vice versa). But to save time, and out of habit, I will almost always speak of linear dependence of rows.

Let's continue training our beloved pet. Let's add the coordinates of another collinear vector to the matrix as a third row:

Did it help us in constructing a three-dimensional basis? Of course not. All three vectors walk back and forth along the same path, and the rank of the matrix equals one. You can take as many collinear vectors as you like, say 100, put their coordinates into a "one hundred by three" matrix, and the rank of such a skyscraper will still be one.

Let's get acquainted with a matrix whose rows are linearly independent. A pair of non-collinear vectors is suitable for use in constructing a three-dimensional basis. The rank of this matrix is two.

What is the rank of this matrix? The rows don't seem to be proportional... so, in theory, the rank is three. However, the rank of this matrix is also two: I added the first two rows and wrote the result in the third row, i.e. I expressed the third row linearly through the first two. Geometrically, the rows of the matrix correspond to the coordinates of three coplanar vectors, and among this trio there is a pair of non-collinear comrades.

As you can see, linear dependence in the matrix under consideration is not obvious, and today we will learn how to bring it out into the open.

I think many people can guess what the rank of a matrix is!

Consider a matrix whose rows are linearly independent. The vectors form an affine basis, and the rank of this matrix is three.

As you know, any fourth, fifth, or tenth vector of three-dimensional space can be expressed linearly in terms of the basis vectors. Therefore, if you add any number of rows to the matrix, its rank will still equal three.

Similar reasoning can be carried out for matrices of larger sizes (of course, without any geometric meaning).

Definition: the rank of a matrix is the maximum number of its linearly independent rows. Or: the rank of a matrix is the maximum number of its linearly independent columns. Yes, these numbers always coincide.

An important practical guideline also follows from the above: the rank of a matrix does not exceed its minimum dimension. For example, a matrix with four rows and five columns has minimum dimension four; therefore, the rank of such a matrix certainly does not exceed 4.

Notation: in world theory and practice there is no generally accepted standard for denoting the rank of a matrix; most often you will see rang A or r(A): as they say, an Englishman writes one thing, a German another. Therefore, recalling the famous joke about American and Russian hell, let's denote the rank of a matrix with the native word: rang. And if the matrix is "unnamed", of which there are many, you can simply write rang.

How to find the rank of a matrix using minors?

If my grandmother had a fifth column in her matrix, she would have had to calculate one more minor of the 4th order (the "blue" and "raspberry" rows + the 5th column).

Conclusion: the maximum order of a non-zero minor is three, which means the rank of the matrix equals three.

Perhaps not everyone has fully grasped this phrase: the 4th-order minor is equal to zero, but among the 3rd-order minors there was a non-zero one; therefore, the maximum order of a non-zero minor equals three.

The question arises: why not calculate the determinant right away? Well, first, in most tasks the matrix is not square; and second, even if you obtain a non-zero value, the task will most likely be rejected, since it usually assumes the standard "bottom-up" solution. In the example considered, the zero 4th-order determinant allows us to state only that the rank of the matrix is less than four.

I must admit, I made up the problem just analyzed myself, in order to explain the method of bordering minors better. In real practice everything is simpler:

Example 2

Find the rank of a matrix using the method of bordering minors

The solution and answer are at the end of the lesson.

When does the algorithm work fastest? Let's return to the same four-by-four matrix. Obviously, the solution will be shortest in the case of "good" corner minors:

And, if , then , otherwise – .

This reasoning is not at all hypothetical: there are many examples where the whole matter is limited to corner minors alone.

However, in some cases another method is more effective and preferable:

How to find the rank of a matrix using the Gaussian method?

This paragraph is intended for readers who are already familiar with the Gauss method and have more or less gotten their hand in with it.

From a technical point of view, the method is not novel:

1) using elementary transformations, we reduce the matrix to echelon form;

2) the rank of the matrix equals the number of rows that remain (zero rows having been removed along the way).

It is quite clear that the Gauss method does not change the rank of the matrix, and the essence is extremely simple: by the algorithm, in the course of the elementary transformations all the unnecessary proportional (linearly dependent) rows are identified and removed, leaving a "dry residue": the maximum number of linearly independent rows.
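Before returning to the worked examples, the two-step recipe above can be sketched in code. This is my own illustration (not the author's), in Python with numpy; the function name and tolerance are assumptions:

```python
import numpy as np

def rank_by_gauss(matrix, eps=1e-9):
    """Reduce the matrix to echelon form with elementary row operations;
    the rank is the number of non-zero rows that remain."""
    A = np.asarray(matrix, dtype=float).copy()
    m, n = A.shape
    rank = 0
    col = 0
    for row in range(m):
        # look for a usable pivot in column `col` (or further to the right)
        while col < n:
            pivot = row + int(np.argmax(np.abs(A[row:, col])))
            if abs(A[pivot, col]) > eps:
                break
            col += 1
        if col == n:                       # only zero rows are left
            break
        A[[row, pivot]] = A[[pivot, row]]  # swap rows
        # eliminate everything below the pivot
        A[row + 1:] -= np.outer(A[row + 1:, col] / A[row, col], A[row])
        rank += 1
        col += 1
    return rank

print(rank_by_gauss([[1, 2, 3], [2, 4, 6], [0, 1, 1]]))  # 2
```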

Let's transform the old familiar matrix with the coordinates of three collinear vectors:

(1) The first row, multiplied by –2, was added to the second row. The first row was added to the third row.

(2) The zero rows are removed.

Thus, one row remains, hence the rank equals one. Needless to say, this is much faster than computing nine second-order minors, all equal to zero, and only then drawing the conclusion.

I remind you that nothing may be changed in the algebraic matrix itself: the transformations are performed solely for the purpose of determining the rank! By the way, let's dwell once more on why one cannot simply replace the original matrix with the reduced one: the original matrix carries information that is fundamentally different from the information of the reduced matrix and its rows. In some mathematical models (no exaggeration) a difference of one number can be a matter of life and death. ...I remembered my school mathematics teachers of the primary and middle grades, who mercilessly cut the grade by 1-2 points for the slightest inaccuracy or deviation from the algorithm. And it was terribly disappointing when, instead of a seemingly guaranteed "A", the result was "good" or even worse. Understanding came much later: how else could satellites, nuclear warheads and power plants be entrusted to a person? But don't worry, I don't work in these fields =)

Let's move on to more meaningful tasks, where among other things we will become acquainted with important computational techniques of the Gauss method:

Example 3

Find the rank of a matrix using elementary transformations

Solution: a “four by five” matrix is ​​given, which means that its rank is certainly no more than 4.

There is no 1 or –1 in the first column, so additional actions are required to obtain at least one unit. Over the life of the site I have repeatedly been asked: "Is it possible to rearrange columns during elementary transformations?" Here we swapped the first and second columns, and everything is fine! In most tasks where the Gauss method is used, the columns can indeed be rearranged. BUT IT IS BETTER NOT TO. And the point is not even possible confusion with variables; the point is that in the classical course of higher mathematics this action is traditionally not considered, so such a curtsy will be looked at VERY crookedly (or you may even be forced to redo everything).

The second point concerns numbers. As you solve, it is helpful to use the following rule of thumb: elementary transformations should, if possible, reduce the numbers in the matrix. After all, it is much easier to work with one, two, or three than with, say, 23, 45 and 97. And the first action is aimed not only at obtaining a one in the first column, but also at eliminating the numbers 7 and 11.

First the complete solution, then comments:

(1) The first row, multiplied by –2, was added to the second row. The first row, multiplied by –3, was added to the third row. And on top of that: the first row, multiplied by –1, was added to the 4th row.

(2) The last three rows are proportional. The 3rd and 4th rows were removed, and the second row was moved to first place.

(3) The first row, multiplied by –3, was added to the second row.

The matrix, reduced to echelon form, has two rows.

Answer:

Now it's your turn to torture the four-by-four matrix:

Example 4

Find the rank of a matrix using the Gaussian method

I remind you that the Gauss method does not imply unambiguous rigidity: your solution will most likely differ from mine. A brief sample solution is at the end of the lesson.

Which method should I use to find the rank of a matrix?

In practice, it is often not stated at all which method should be used to find the rank. In such a situation, the condition should be analyzed - for some matrices it is more rational to solve through minors, while for others it is much more profitable to apply elementary transformations:

Example 5

Find the rank of a matrix

Solution: the first method is immediately ruled out =)

A little higher I advised not to touch the columns of the matrix, but when there is a zero column, or proportional/coinciding columns, it is still worth amputating them:

(1) The fifth column is zero; remove it from the matrix. Thus, the rank of the matrix is no more than four. The first row was multiplied by –1. This is another signature trick of the Gauss method, which turns the next action into a pleasant walk:

(2) The first row was added to all rows, starting from the second.

(3) The first row was multiplied by –1, the third row was divided by 2, the fourth row was divided by 3. The second row, multiplied by –1, was added to the fifth row.

(4) The third row, multiplied by –2, was added to the fifth row.

(5) The last two rows are proportional; the fifth was deleted.

The result is 4 rows.

Answer:

A standard "five-story" matrix (five rows) for independent study:

Example 6

Find the rank of a matrix

A short solution and answer at the end of the lesson.

It should be noted that the phrase "matrix rank" is not seen that often in practice, and in most problems you can do without it altogether. But there is one task where this concept is the main actor, and to conclude the article we will look at this practical application:

How to study a system of linear equations for consistency?

Often, in addition to solving a system of linear equations, the condition also requires first examining it for consistency, that is, proving that any solution exists at all. A key role in such verification is played by the Kronecker–Capelli theorem, which I will formulate in the required form:

If the rank of the system matrix is equal to the rank of the extended matrix of the system, then the system is consistent; and if this number also coincides with the number of unknowns, then the solution is unique.

Thus, to study the system for consistency it is necessary to check the equality \operatorname{rg}A=\operatorname{rg}\widetilde{A}, where A is the system matrix (remember the terminology from the lesson on the Gauss method), and \widetilde{A} is the extended system matrix (i.e. the matrix of the coefficients of the variables plus the column of free terms).
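As a sketch of such a check on a made-up system (the systems in the lesson are shown as images), one can compare the two ranks directly; ranks are computed by Gaussian elimination in exact rational arithmetic:

```python
from fractions import Fraction

def matrix_rank(M):
    # Gaussian elimination over the rationals; rank = number of pivots
    m = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def check_consistency(A, b):
    """Kronecker-Capelli: compare rank(A) with the rank of the extended matrix."""
    extended = [row + [bi] for row, bi in zip(A, b)]
    r, r_ext = matrix_rank(A), matrix_rank(extended)
    if r != r_ext:
        return "inconsistent"
    return "unique solution" if r == len(A[0]) else "infinitely many solutions"

# x + y = 2, 2x + 2y = 5: the ranks differ, so there are no solutions
print(check_consistency([[1, 1], [2, 2]], [2, 5]))   # → inconsistent
# x + y = 2, x - y = 0: rank 2 equals the number of unknowns
print(check_consistency([[1, 1], [1, -1]], [2, 0]))  # → unique solution
```

The helper names here (`matrix_rank`, `check_consistency`) are my own; the theorem itself dictates only the rank comparison.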


Let A be a matrix of size m\times n, and let k be a natural number not exceeding m and n: k\leqslant\min\{m;n\}. A minor of order k of the matrix A is the determinant of a k-th order matrix formed by the elements at the intersection of arbitrarily chosen k rows and k columns of the matrix A. When denoting minors, we will indicate the numbers of the selected rows as superscripts and the numbers of the selected columns as subscripts, arranging them in ascending order.


Example 3.4. Write minors of different orders of the matrix


A=\begin{pmatrix}1&2&1&0\\ 0&2&2&3\\ 1&4&3&3\end{pmatrix}.


Solution. Matrix A has dimensions 3\times4. It has: 12 minors of the 1st order, for example, the minor M_{2}^{3}=\det(a_{32})=4; 18 minors of the 2nd order, for example, M_{23}^{12}=\begin{vmatrix}2&1\\2&2\end{vmatrix}=2; 4 minors of the 3rd order, for example,


M_{134}^{123}= \begin{vmatrix}1&1&0\\0&2&3\\ 1&3&3 \end{vmatrix}=0.
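The counts and values in Example 3.4 are easy to verify by brute force; the sketch below enumerates all minors with `itertools` (indices are 0-based, so the minor M_{2}^{3} corresponds to rows=(2,), cols=(1,)):

```python
from itertools import combinations

def det(m):
    # Laplace expansion along the first row (fine for tiny matrices)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def minor(A, rows, cols):
    # determinant of the submatrix on the chosen rows and columns
    return det([[A[i][j] for j in cols] for i in rows])

def all_minors(A, k):
    m, n = len(A), len(A[0])
    return {(r, c): minor(A, r, c)
            for r in combinations(range(m), k)
            for c in combinations(range(n), k)}

A = [[1, 2, 1, 0],
     [0, 2, 2, 3],
     [1, 4, 3, 3]]

print(len(all_minors(A, 1)), len(all_minors(A, 2)), len(all_minors(A, 3)))  # → 12 18 4
print(minor(A, (2,), (1,)))            # row 3, column 2 → 4
print(minor(A, (0, 1), (1, 2)))        # rows 1,2; columns 2,3 → 2
print(minor(A, (0, 1, 2), (0, 2, 3)))  # rows 1,2,3; columns 1,3,4 → 0
```

The counts 12, 18 and 4 are simply C(3,1)·C(4,1), C(3,2)·C(4,2) and C(3,3)·C(4,3).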

A minor of the r-th order of a matrix A of dimensions m\times n is called basic if it is non-zero and all minors of the (r+1)-th order are equal to zero or do not exist at all.


The rank of a matrix is the order of its basis minor. There is no basis minor in a zero matrix, so the rank of a zero matrix is, by definition, equal to zero. The rank of a matrix A is denoted by \operatorname{rg}A.


Example 3.5. Find all basis minors and matrix rank


A=\begin{pmatrix}1&2&2&0\\0&2&2&3\\0&0&0&0\end{pmatrix}.


Solution. All third-order minors of this matrix are equal to zero, since these determinants have a zero third row. Therefore, only a second-order minor located in the first two rows of the matrix can be basic. Going through the 6 possible minors, we select the non-zero ones:


M_{12}^{12}= M_{13}^{12}= \begin{vmatrix}1&2\\0&2\end{vmatrix},\quad M_{24}^{12}= M_{34}^{12}= \begin{vmatrix}2&0\\2&3\end{vmatrix},\quad M_{14}^{12}= \begin{vmatrix}1&0\\0&3\end{vmatrix}.


Each of these five minors is a basis minor. Therefore, the rank of the matrix is 2.
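Example 3.5 can be double-checked by enumerating minors directly: all four third-order minors vanish, and exactly five of the six second-order minors in the first two rows are nonzero. A small brute-force sketch:

```python
from itertools import combinations

def det(m):
    # Laplace expansion along the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

A = [[1, 2, 2, 0],
     [0, 2, 2, 3],
     [0, 0, 0, 0]]

# every 3rd-order minor contains the zero third row, hence vanishes
third = [det([[A[i][j] for j in c] for i in (0, 1, 2)])
         for c in combinations(range(4), 3)]
print(third)  # → [0, 0, 0, 0]

# 2nd-order minors located in the first two rows, keyed by column pair
second = {c: det([[A[i][j] for j in c] for i in (0, 1)])
          for c in combinations(range(4), 2)}
print(second)
print(sum(1 for v in second.values() if v != 0))  # → 5 basis minors, so rank 2
```

Only the minor on columns 2 and 3 vanishes (the columns coincide), matching the list above.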

Notes 3.2


1. If all k-th order minors of a matrix are equal to zero, then all minors of higher order are also equal to zero. Indeed, expanding a minor of the (k+1)-th order along any row, we obtain the sum of products of the elements of this row by minors of the k-th order, and these are equal to zero.


2. The rank of a matrix is equal to the highest order of a nonzero minor of this matrix.


3. If a square matrix is non-singular, then its rank is equal to its order. If a square matrix is singular, then its rank is less than its order.


4. The designations \operatorname{Rg}A,~ \operatorname{rang}A,~ \operatorname{rank}A are also used for the rank.


5. The rank of a block matrix is defined as the rank of a regular (numeric) matrix, i.e. regardless of its block structure. In this case, the rank of a block matrix is not less than the ranks of its blocks: \operatorname{rg}(A\mid B)\geqslant\operatorname{rg}A and \operatorname{rg}(A\mid B)\geqslant\operatorname{rg}B, since all minors of the matrix A (or B) are also minors of the block matrix (A\mid B).

Theorems on the basis minor and the rank of the matrix

Let us consider the main theorems expressing the properties of linear dependence and linear independence of columns (rows) of a matrix.


Theorem 3.1 on the basis minor. In an arbitrary matrix A, each column (row) is a linear combination of the columns (rows) in which the basis minor is located.


Indeed, without loss of generality, we assume that in a matrix A of size m\times n the basis minor is located in the first r rows and first r columns. Consider the determinant


D=\begin{vmatrix} a_{11}&\cdots&a_{1r}&a_{1k}\\ \vdots&\ddots&\vdots&\vdots\\ a_{r1}&\cdots&a_{rr}&a_{rk}\\ a_{s1}&\cdots&a_{sr}&a_{sk} \end{vmatrix},


which is obtained by bordering the basis minor of the matrix A with the corresponding elements of the s-th row and the k-th column. Note that for any 1\leqslant s\leqslant m and 1\leqslant k\leqslant n this determinant is equal to zero. If s\leqslant r or k\leqslant r, then the determinant D contains two identical rows or two identical columns. If s>r and k>r, then the determinant D is equal to zero, since it is a minor of the (r+1)-th order. Expanding the determinant along the last row, we get


a_{s1}\cdot D_{r+1\,1}+\ldots+ a_{sr}\cdot D_{r+1\,r}+a_{sk}\cdot D_{r+1\,r+1}=0,


where D_{r+1\,j} are the algebraic complements of the elements of the last row. Note that D_{r+1\,r+1}\ne0, since this is the basis minor. Therefore


a_{sk}=\lambda_1\cdot a_{s1}+\ldots+\lambda_r\cdot a_{sr}, where \lambda_j=-\frac{D_{r+1\,j}}{D_{r+1\,r+1}},~j=1,2,\ldots,r.


Writing the last equality for s=1,2,\ldots,m, we get

\begin{pmatrix}a_{1k}\\\vdots\\a_{mk}\end{pmatrix}= \lambda_1\cdot\! \begin{pmatrix}a_{11}\\\vdots\\a_{m1}\end{pmatrix}+\ldots+ \lambda_r\cdot\! \begin{pmatrix}a_{1r}\\\vdots\\a_{mr}\end{pmatrix}\!,


i.e. the k-th column (for any 1\leqslant k\leqslant n) is a linear combination of the columns of the basis minor, which is what we needed to prove.


The basis minor theorem serves to prove the following important theorems.

Condition for the determinant to be zero

Theorem 3.2 (necessary and sufficient condition for the determinant to be zero). In order for a determinant to be equal to zero, it is necessary and sufficient that one of its columns (one of its rows) be a linear combination of the remaining columns (rows).


Indeed, necessity follows from the basis minor theorem. If the determinant of a square matrix of order n is equal to zero, then its rank is less than n, i.e. at least one column is not included in the basis minor. Then this column, by Theorem 3.1, is a linear combination of the columns in which the basis minor is located. By adding to this combination, if necessary, other columns with zero coefficients, we obtain that the selected column is a linear combination of the remaining columns of the matrix. Sufficiency follows from the properties of the determinant. If, for example, the last column A_n of the determinant \det(A_1~A_2~\cdots~A_n) is expressed linearly through the rest:


A_n=\lambda_1\cdot A_1+\lambda_2\cdot A_2+\ldots+\lambda_(n-1)\cdot A_(n-1),


then adding to A_n the column A_1 multiplied by (-\lambda_1), then the column A_2 multiplied by (-\lambda_2), and so on up to the column A_{n-1} multiplied by (-\lambda_{n-1}), we obtain the determinant \det(A_1~\cdots~A_{n-1}~o) with a zero column, which is equal to zero (property 2 of the determinant).
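A quick numeric illustration of Theorem 3.2, on a made-up 3×3 matrix: if the third column is chosen as 2A_1 − A_2, the determinant indeed vanishes.

```python
def det(m):
    # Laplace expansion along the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# columns: A1 = (1, 0, 2), A2 = (3, 1, 1), A3 = 2*A1 - A2 = (-1, -1, 3)
M = [[1, 3, -1],
     [0, 1, -1],
     [2, 1, 3]]
print(det(M))  # → 0, as the theorem predicts
```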

Invariance of matrix rank under elementary transformations

Theorem 3.3 (on the invariance of rank under elementary transformations). During elementary transformations of the columns (rows) of a matrix, its rank does not change.


Indeed, let \operatorname{rg}A=r. Let us assume that as a result of one elementary transformation of the columns of the matrix A we obtained the matrix A'. If a transformation of type I was performed (permutation of two columns), then any minor of the (r+1)-th order of the matrix A' is either equal to the corresponding minor of the (r+1)-th order of the matrix A, or differs from it in sign (property 3 of the determinant). If a transformation of type II was performed (multiplication of a column by a number \lambda\ne0), then any minor of the (r+1)-th order of the matrix A' is either equal to the corresponding minor of the (r+1)-th order of the matrix A, or differs from it by the factor \lambda\ne0 (property 6 of the determinant). If a transformation of type III was performed (adding to one column another column multiplied by a number \lambda), then any minor of the (r+1)-th order of the matrix A' is either equal to the corresponding minor of the (r+1)-th order of the matrix A (property 9 of the determinant), or is equal to the sum of two minors of the (r+1)-th order of the matrix A (property 8 of the determinant). Therefore, under an elementary transformation of any type, all minors of the (r+1)-th order of the matrix A' are equal to zero, since all minors of the (r+1)-th order of the matrix A are equal to zero. Thus, it has been proven that under elementary transformations of the columns the rank of a matrix cannot increase. Since transformations inverse to elementary ones are elementary, the rank of a matrix cannot decrease under elementary transformations of the columns either, i.e. it does not change. It is proved similarly that the rank of a matrix does not change under elementary row transformations.


Corollary 1. If one row (column) of a matrix is a linear combination of its other rows (columns), then this row (column) can be deleted from the matrix without changing its rank.


Indeed, such a row can be made zero using elementary transformations, and a zero row cannot be included in the basis minor.


Corollary 2. If the matrix is reduced to the simplest form (1.7), then


\operatorname{rg}A=\operatorname{rg}\Lambda=r\,.


Indeed, the matrix of the simplest form (1.7) has a basis minor of the r-th order.


Corollary 3. Any non-singular square matrix is elementary; in other words, any non-singular square matrix is equivalent to an identity matrix of the same order.


Indeed, if A is a non-singular square matrix of the n-th order, then \operatorname{rg}A=n (see paragraph 3 of Remarks 3.2). Therefore, bringing the matrix A to the simplest form (1.7) by elementary transformations, we obtain the identity matrix \Lambda=E_n, since \operatorname{rg}A=\operatorname{rg}\Lambda=n (see Corollary 2). Therefore, the matrix A is equivalent to the identity matrix E_n and can be obtained from it as a result of a finite number of elementary transformations. This means that the matrix A is elementary.

Theorem 3.4 (on the rank of a matrix). The rank of a matrix is equal to the maximum number of linearly independent rows of this matrix.


In fact, let \operatorname{rg}A=r. Then the matrix A has r linearly independent rows: the rows in which the basis minor is located. If they were linearly dependent, then this minor would be equal to zero by Theorem 3.2, and the rank of the matrix A would not be equal to r. Let us show that r is the maximum number of linearly independent rows, i.e. any p rows are linearly dependent for p>r. Indeed, we form the matrix B from these p rows. Since the matrix B is part of the matrix A, then \operatorname{rg}B\leqslant \operatorname{rg}A=r<p.

This means that at least one row of matrix B is not included in the basis minor of this matrix. Then, by the basis minor theorem, it is equal to a linear combination of the rows in which the basis minor is located. Therefore, the rows of matrix B are linearly dependent. Thus, the matrix A has at most r linearly independent rows.


Corollary 1. The maximum number of linearly independent rows in a matrix is equal to the maximum number of linearly independent columns:


\operatorname{rg}A=\operatorname{rg}A^T.


This statement follows from Theorem 3.4 if we apply it to the rows of a transposed matrix and take into account that the minors do not change during transposition (property 1 of the determinant).


Corollary 2. Under elementary transformations of the rows of a matrix, the linear dependence (or linear independence) of any system of columns of this matrix is preserved.


In fact, let us choose any k columns of a given matrix A and compose the matrix B from them. Let the matrix A' be obtained as a result of elementary transformations of the rows of the matrix A, and the matrix B' be obtained as a result of the same transformations of the rows of the matrix B. By Theorem 3.3, \operatorname{rg}B'=\operatorname{rg}B. Therefore, if the columns of the matrix B were linearly independent, i.e. k=\operatorname{rg}B (see Corollary 1), then the columns of the matrix B' are also linearly independent, since k=\operatorname{rg}B'. If the columns of the matrix B were linearly dependent (k>\operatorname{rg}B), then the columns of the matrix B' are also linearly dependent (k>\operatorname{rg}B'). Consequently, for any columns of the matrix A, linear dependence or linear independence is preserved under elementary row transformations.


Notes 3.3


1. By Corollary 1 of Theorem 3.4, the property of columns indicated in Corollary 2 is also true for any system of matrix rows if elementary transformations are performed only on its columns.


2. Corollary 3 of Theorem 3.3 can be refined as follows: any non-singular square matrix, using elementary transformations of only its rows (or only its columns), can be reduced to an identity matrix of the same order.


In fact, using only elementary row transformations, any matrix A can be reduced to the simplified form \Lambda (Fig. 1.5) (see Theorem 1.1). Since the matrix A is non-singular (\det(A)\ne0), its columns are linearly independent. This means that the columns of the matrix \Lambda are also linearly independent (Corollary 2 of Theorem 3.4). Therefore, the simplified form \Lambda of a non-singular matrix A coincides with its simplest form (Fig. 1.6) and is the identity matrix \Lambda=E (see Corollary 3 of Theorem 3.3). Thus, by transforming only the rows of a non-singular matrix, it can be reduced to the identity matrix. Similar reasoning is valid for elementary transformations of the columns of a non-singular matrix.

Rank of product and sum of matrices

Theorem 3.5 (on the rank of a product of matrices). The rank of a product of matrices does not exceed the ranks of the factors:


\operatorname{rg}(A\cdot B)\leqslant \min\{\operatorname{rg}A,\operatorname{rg}B\}.


Indeed, let the matrices A and B have sizes m\times p and p\times n. Let us append the matrix C=AB to the matrix A, forming the block matrix (A\mid C). Obviously, \operatorname{rg}C\leqslant\operatorname{rg}(A\mid C), since C is part of the matrix (A\mid C) (see paragraph 5 of Remarks 3.2). Note that each column C_j, according to the matrix multiplication operation, is a linear combination of the columns A_1,A_2,\ldots,A_p of the matrix A=(A_1~\cdots~A_p):


C_{j}=A_1\cdot b_{1j}+A_2\cdot b_{2j}+\ldots+A_{p}\cdot b_{pj},\quad j=1,2,\ldots,n.


Such a column can be deleted from the matrix (A\mid C) without changing its rank (Corollary 1 of Theorem 3.3). Crossing out all the columns of the matrix C, we get \operatorname{rg}(A\mid C)=\operatorname{rg}A. Hence, \operatorname{rg}C\leqslant\operatorname{rg}(A\mid C)=\operatorname{rg}A. Similarly, we can prove that the condition \operatorname{rg}C\leqslant\operatorname{rg}B is satisfied simultaneously, and draw a conclusion about the validity of the theorem.


Corollary. If A is a non-singular square matrix, then \operatorname{rg}(AB)= \operatorname{rg}B and \operatorname{rg}(CA)=\operatorname{rg}C, i.e. the rank of a matrix does not change when it is multiplied from the left or right by a non-singular square matrix.
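A small sketch checking Theorem 3.5 and its corollary on made-up matrices; ranks are computed by exact Gaussian elimination and the matrix product by the definition.

```python
from fractions import Fraction

def matrix_rank(M):
    # Gaussian elimination over the rationals; rank = number of pivots
    m = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def matmul(A, B):
    # entry (i, j) of AB is the dot product of row i of A and column j of B
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2], [3, 4], [5, 6]]   # rank 2
B = [[1, 1, 1], [2, 2, 2]]     # rank 1
C = matmul(A, B)
print(matrix_rank(C))  # → 1, which is min(rg A, rg B)

# multiplying by the non-singular matrix S does not change the rank
S = [[1, 1], [0, 1]]
print(matrix_rank(matmul(S, B)) == matrix_rank(B))  # → True
```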


Theorem 3.6 (on the rank of a sum of matrices). The rank of a sum of matrices does not exceed the sum of the ranks of the terms:


\operatorname{rg}(A+B)\leqslant \operatorname{rg}A+\operatorname{rg}B.


Indeed, let us form the matrix (A+B\mid A\mid B). Note that each column of the matrix A+B is a linear combination of the columns of the matrices A and B. Therefore, \operatorname{rg}(A+B\mid A\mid B)= \operatorname{rg}(A\mid B). Considering that the number of linearly independent columns in the matrix (A\mid B) does not exceed \operatorname{rg}A+\operatorname{rg}B, and \operatorname{rg}(A+B)\leqslant \operatorname{rg}(A+B\mid A\mid B) (see paragraph 5 of Remarks 3.2), we obtain the inequality being proved.
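Theorem 3.6 can also be spot-checked; in the made-up example below the bound is attained with equality (the rank is computed by exact Gaussian elimination):

```python
from fractions import Fraction

def matrix_rank(M):
    # Gaussian elimination over the rationals; rank = number of pivots
    m = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]  # rank 1
B = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]  # rank 1
S = [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
print(matrix_rank(S))  # → 2 = rg A + rg B; in general only "≤" holds
```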
