Linear spaces. Subspaces

Systems of linear homogeneous equations

Formulation of the problem. Find some basis and determine the dimension of the linear space of solutions of a given homogeneous system.

Solution plan.

1. Write down the matrix of the system and, using elementary row transformations, bring it to triangular form, i.e. to a form in which all elements below the main diagonal are zero. The rank of the system matrix equals the number of linearly independent rows, i.e., in our case, the number of rows in which non-zero elements remain:

The dimension of the solution space is n − r, where n is the number of unknowns and r is the rank of the system matrix. If r = n, then the homogeneous system has only the zero solution; if r < n, then the system has infinitely many solutions.

2. Choose the basic and the free variables. Express the basic variables in terms of the free ones, thus obtaining the general solution of the homogeneous system of linear equations.

3. Write down a basis of the solution space of the system by setting, in turn, one of the free variables equal to one and the rest to zero. The dimension of the linear solution space equals the number of basis vectors; a sketch of this plan in code follows.
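A minimal sketch of the plan in code, assuming sympy is available (the matrix A below is made up for illustration and is not the system from the tasks that follow):

```python
from sympy import Matrix

# A hypothetical homogeneous system A*x = 0 (coefficients chosen for illustration).
A = Matrix([
    [1, 2, -1, 3],
    [2, 4,  1, 0],
    [3, 6,  0, 3],
])

r = A.rank()                  # rank = number of independent rows after elimination
basis = A.nullspace()         # basis of the solution space (fundamental set)

print("rank =", r)            # 2
print("dim of solution space =", A.cols - r)   # n - r = 2
for v in basis:
    print(v.T)                # each vector: one free variable set to 1, the rest to 0
```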

Note. The elementary matrix transformations are:

1. multiplying (dividing) a row by a non-zero factor;

2. adding to a row another row multiplied by any number;

3. interchanging two rows;

4. transformations 1–3 applied to columns (when solving systems of linear equations, elementary column transformations are not used).

Task 3. Find some basis and determine the dimension of the linear space of solutions of the system.

We write out the matrix of the system and, using elementary transformations, bring it to triangular form:

Choosing the free variables and expressing the basic variables through them, we then obtain the general solution and a basis of the solution space.


When we analyzed the concept of an n-dimensional vector and introduced operations on vectors, we found out that the set of all n-dimensional vectors generates a linear space. In this article we will talk about the most important related concepts: the dimension and the basis of a vector space. We will also consider the theorem on the expansion of an arbitrary vector in terms of a basis and the connection between different bases of an n-dimensional space, and analyze in detail the solutions of typical examples.


Concept of vector space dimension and basis.

The concepts of the dimension and basis of a vector space are directly related to the concept of a linearly independent system of vectors, so we recommend, if necessary, referring to the article on the linear dependence of a system of vectors and the properties of linear dependence and independence.

Definition.

The dimension of a vector space is the number equal to the maximum number of linearly independent vectors in this space.

Definition.

A basis of a vector space is an ordered set of linearly independent vectors of this space whose number equals the dimension of the space.

We present some arguments based on these definitions.

Consider the space of n-dimensional vectors. Let us show that the dimension of this space is equal to n.

Let us take the system of n unit vectors e1 = (1, 0, …, 0), e2 = (0, 1, …, 0), …, en = (0, 0, …, 1).

Let us take these vectors as the rows of a matrix A. In this case A is the n by n identity matrix, whose rank is n (if necessary, see the article on the rank of a matrix). Therefore, the system of vectors e1, e2, …, en is linearly independent, and no vector can be added to this system without violating its linear independence. Since the number of vectors in the system equals n, the dimension of the space of n-dimensional vectors is n, and the unit vectors form a basis of this space.

From the last statement and the definition of a basis we can conclude that any system of n-dimensional vectors containing fewer than n vectors is not a basis.

Now let us swap the first and second vectors of the system e1, e2, …, en. It is easy to show that the resulting system of vectors is also a basis of the n-dimensional vector space. Compose a matrix whose rows are the vectors of this system. This matrix can be obtained from the identity matrix by swapping the first and second rows, hence its rank is n. Thus, this system of n vectors is linearly independent and is a basis of the n-dimensional vector space.

If we swap other vectors of the system e1, e2, …, en, we get yet another basis.

More generally, any linearly independent system of n n-dimensional vectors, not necessarily unit ones, is also a basis of the n-dimensional vector space.

Thus, a vector space of dimension n has as many bases as there are linearly independent systems of n n-dimensional vectors.

If we talk about a two-dimensional vector space (that is, about a plane), then its basis is any two non-collinear vectors. The basis of a three-dimensional space is any three non-coplanar vectors.

Let's look at a few examples.

Example.

Do the vectors a, b, and c form a basis of a three-dimensional vector space?

Solution.

Let us examine this system of vectors for linear dependence. To do this, we compose a matrix whose rows are the coordinate rows of the vectors, and find its rank:


Thus, the vectors a, b, and c are linearly independent and their number equals the dimension of the vector space; therefore, they form a basis of this space.

Answer:

Yes, they are.
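This check is easy to automate; a minimal sketch (the coordinates below are made up, since the original vectors are not reproduced here):

```python
import numpy as np

# Hypothetical coordinates of a, b, c in some basis of the 3-dimensional space.
a = np.array([1.0, 0.0, 2.0])
b = np.array([0.0, 1.0, 1.0])
c = np.array([1.0, 1.0, 0.0])

M = np.vstack([a, b, c])          # rows are the coordinate rows of the vectors
rank = np.linalg.matrix_rank(M)

# The vectors form a basis iff they are linearly independent,
# i.e. iff the rank equals the dimension of the space.
print(rank == 3)                  # True
```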

Example.

Can the given system of vectors be a basis of a three-dimensional vector space?

Solution.

This system of vectors is linearly dependent, since the maximum number of linearly independent three-dimensional vectors is three. Therefore, this system cannot be a basis of a three-dimensional vector space (although a subsystem of the original system is such a basis).

Answer:

No, it cannot.

Example.

Verify that the vectors a, b, c, and d can serve as a basis of a four-dimensional vector space.

Solution.

We compose a matrix whose rows are the original vectors:

We find its rank:

Thus, the system of vectors a, b, c, d is linearly independent and its number of vectors equals the dimension of the vector space; therefore, a, b, c, d form its basis.

Answer:

The original vectors indeed form a basis of the four-dimensional space.

Example.

Do the given vectors form a basis of a four-dimensional vector space?

Solution.

Even if the original system of vectors is linearly independent, the number of vectors in it is not enough to form a basis of a four-dimensional space (a basis of such a space consists of 4 vectors).

Answer:

No, they do not.

Decomposition of a vector in terms of a vector space basis.

Let arbitrary vectors e1, e2, …, en form a basis of an n-dimensional vector space. If we add some n-dimensional vector x to them, then the resulting system of vectors will be linearly dependent. From the properties of linear dependence we know that at least one vector of a linearly dependent system is linearly expressed in terms of the others; in other words, at least one of its vectors is expanded in terms of the rest.

Thus we come to a very important theorem.

Theorem.

Any vector of an n-dimensional vector space is uniquely expanded in terms of a basis.

Proof.

Let e1, e2, …, en be a basis of an n-dimensional vector space. Add an n-dimensional vector x to these vectors. Then the resulting system of vectors is linearly dependent, and the vector x can be linearly expressed in terms of the vectors e1, e2, …, en: x = x1·e1 + x2·e2 + … + xn·en, where x1, x2, …, xn are some numbers. So we have obtained an expansion of the vector x in terms of the basis. It remains to prove that this expansion is unique.

Assume that there is another expansion x = x'1·e1 + x'2·e2 + … + x'n·en, where x'1, x'2, …, x'n are some numbers. Subtracting from the left and right sides of the first equality, respectively, the left and right sides of the second, we get (x1 − x'1)·e1 + (x2 − x'2)·e2 + … + (xn − x'n)·en = 0.

Since the system of basis vectors is linearly independent, by the definition of linear independence of a system of vectors the resulting equality is possible only when all coefficients are equal to zero. Therefore x1 = x'1, x2 = x'2, …, xn = x'n, which proves the uniqueness of the expansion of the vector in terms of the basis.

Definition.

The coefficients x1, x2, …, xn are called the coordinates of the vector x in the basis e1, e2, …, en.

After getting acquainted with the theorem on the expansion of a vector in terms of a basis, we begin to understand the essence of the expression "we are given an n-dimensional vector x = (x1, x2, …, xn)". This expression means that we are considering a vector x of an n-dimensional vector space whose coordinates are given in some basis. At the same time, we understand that the same vector x in another basis of the n-dimensional vector space will have coordinates different from x1, x2, …, xn.

Consider the following problem.

Let, in some basis of an n-dimensional vector space, a system of n linearly independent vectors f1, f2, …, fn and a vector x be given. Then the vectors f1, f2, …, fn are also a basis of this vector space.

Suppose we need to find the coordinates of the vector x in the basis f1, f2, …, fn. Let us denote these coordinates by x'1, x'2, …, x'n.

In the basis f1, f2, …, fn the vector x has the representation x = x'1·f1 + x'2·f2 + … + x'n·fn. We write this equality in coordinate form:

This equality is equivalent to a system of n linear algebraic equations with n unknown variables x'1, x'2, …, x'n:

The main matrix of this system has the form

Let us denote it by A. The columns of the matrix A are the coordinate columns of the vectors of the linearly independent system f1, f2, …, fn, so the rank of this matrix is n and its determinant is non-zero. This means that the system of equations has a unique solution, which can be found by any method, for example by Cramer's rule or by the Gauss method.

In this way the desired coordinates of the vector x in the basis f1, f2, …, fn are found; a short computational sketch is given below.
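A hedged sketch of this computation (the numeric data is hypothetical; the columns of A are the coordinate columns of f1, f2, f3 in the old basis):

```python
import numpy as np

# Hypothetical new basis vectors, given by coordinates in the old basis.
f1 = np.array([1.0, 1.0, 0.0])
f2 = np.array([0.0, 1.0, 1.0])
f3 = np.array([1.0, 0.0, 1.0])
x  = np.array([2.0, 3.0, 1.0])     # coordinates of x in the old basis

A = np.column_stack([f1, f2, f3])  # main matrix of the system
assert abs(np.linalg.det(A)) > 1e-12  # f1, f2, f3 independent => unique solution

c = np.linalg.solve(A, x)          # coordinates of x in the basis f1, f2, f3
print(c)                           # [2. 1. 0.]
```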

Let's analyze the theory with examples.

Example.

In some basis of a three-dimensional vector space, the vectors f1, f2, f3 and a vector x are given (by their coordinates).

Verify that the system f1, f2, f3 is also a basis of this space, and find the coordinates of the vector x in this basis.

Solution.

For a system of vectors to be a basis of a three-dimensional vector space, it must be linearly independent. We check this by determining the rank of the matrix A whose rows are the vectors f1, f2, f3. We find the rank by the Gauss method; it turns out that Rank(A) = 3, which proves the linear independence of the system f1, f2, f3.

So the vectors f1, f2, f3 form a basis. Let the vector x have coordinates x'1, x'2, x'3 in this basis. Then, as we showed above, the connection between the coordinates of this vector in the two bases is given by the system of equations

Substituting the values known from the condition into it, we obtain

We solve it by Cramer's method.
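For illustration, a minimal sketch of Cramer's rule itself (with hypothetical data, since the original numbers are not preserved here; D is the main determinant, and the i-th column is replaced by the right-hand side):

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])   # hypothetical main matrix
b = np.array([2.0, 3.0, 1.0])     # hypothetical right-hand side

D = np.linalg.det(A)
solution = []
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b                  # replace the i-th column by b
    solution.append(np.linalg.det(Ai) / D)

print(solution)                   # ≈ [2.0, 1.0, 0.0]
```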

Thus we obtain the coordinates of the vector x in the basis f1, f2, f3.

Answer:

Example.

In some basis of a four-dimensional vector space, a linearly independent system of vectors f1, f2, f3, f4 is given.

It is known that the vector x is given by its coordinates in the original basis. Find the coordinates of the vector x in the basis f1, f2, f3, f4.

Solution.

Since the system of vectors f1, f2, f3, f4 is linearly independent by assumption, it is a basis of the four-dimensional space. The given equality means that the coordinates of the vector x in the original basis are known. Denote the coordinates of the vector x in the basis f1, f2, f3, f4 by x'1, x'2, x'3, x'4.

The system of equations that defines the connection between the coordinates of the vector x in the two bases has the form

We substitute the known values into it and find the desired coordinates:

Answer:

.

Connection between bases.

Let two linearly independent systems of vectors, c1, c2, …, cn and f1, f2, …, fn, be given in some basis of an n-dimensional vector space; each consists of n vectors, so both systems are also bases of this space.

If the coordinates of the vector f1 in the basis c1, c2, …, cn are known, then this connection is given by a system of linear equations (we talked about this in the previous paragraph), which in matrix form can be written as a matrix equality for the coordinate column of f1. Similarly, such a matrix equality can be written for each of the vectors f2, …, fn.

The matrix equalities obtained can be combined into a single one, which essentially defines the connection between the vectors of the two bases: (f1, f2, …, fn) = (c1, c2, …, cn)·T, where T is a square matrix of order n.

Similarly, we can express all vectors of the basis c1, c2, …, cn through the basis f1, f2, …, fn.

Definition.

The matrix T is called the transition matrix from the basis c1, c2, …, cn to the basis f1, f2, …, fn; for it the equality (f1, f2, …, fn) = (c1, c2, …, cn)·T holds.

Multiplying both sides of this equality on the right by T⁻¹, we get (f1, f2, …, fn)·T⁻¹ = (c1, c2, …, cn).

Let us find the transition matrix; we will not dwell here on finding the inverse matrix and on multiplying matrices (see the articles on these topics if necessary):

It remains to establish the connection between the coordinates of the vector x in the given bases.

Let the vector x have the coordinate column X_c in the basis c1, c2, …, cn, so that x = (c1, c2, …, cn)·X_c,

and let the vector x have the coordinate column X_f in the basis f1, f2, …, fn, so that x = (f1, f2, …, fn)·X_f.

Since the left-hand sides of the last two equalities coincide, we can equate the right-hand sides: (c1, c2, …, cn)·X_c = (f1, f2, …, fn)·X_f.

Substituting (f1, f2, …, fn) = (c1, c2, …, cn)·T into the right-hand side, we get (c1, c2, …, cn)·X_c = (c1, c2, …, cn)·T·X_f, whence X_c = T·X_f.
On the other hand, X_f = T⁻¹·X_c (find the inverse matrix yourself).
The last two equalities give us the desired connection between the coordinates of the vector x in the bases c1, c2, …, cn and f1, f2, …, fn.

Answer:

The transition matrix from the basis c1, c2, …, cn to the basis f1, f2, …, fn is the matrix T found above; the coordinates of the vector x in the two bases are related by
X_c = T·X_f or, equivalently, X_f = T⁻¹·X_c.
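A sketch of the whole computation (hypothetical bases; the columns of C and F hold the coordinate columns of c1, c2, c3 and f1, f2, f3 in a common original basis):

```python
import numpy as np

C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])   # basis c1, c2, c3 (as columns)
F = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])   # basis f1, f2, f3 (as columns)

# (f1 f2 f3) = (c1 c2 c3) * T  =>  T = C^{-1} * F
T = np.linalg.solve(C, F)

x_f = np.array([2.0, 1.0, 0.0])   # coordinates of x in the basis f1, f2, f3
x_c = T @ x_f                     # X_c = T * X_f
print(x_c)                        # [2. 3. 1.]
print(np.linalg.solve(T, x_c))    # X_f = T^{-1} * X_c, back to [2. 1. 0.]
```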

We have considered the concepts of the dimension and basis of a vector space, learned how to expand a vector in terms of a basis, and established the connection between different bases of an n-dimensional vector space through the transition matrix.


Subspace, its basis and dimension.

Let L be a linear space over a field P and A a subset of L. If A itself constitutes a linear space over the field P with respect to the same operations as L, then A is called a subspace of the space L.

According to the definition of a linear space, to verify that A is a subspace one would have to check that A is closed under the operations:

1) for all x, y ∈ A: x + y ∈ A;

2) for all α ∈ P and all x ∈ A: α·x ∈ A;

and to check that the operations in A satisfy the eight axioms. However, the latter check is redundant (due to the fact that these axioms hold in L), i.e. the following holds.

Theorem. Let L be a linear space over a field P and A a non-empty subset of L. The set A is a subspace of L if and only if the following requirements are met:

1. for all x, y ∈ A: x + y ∈ A;

2. for all α ∈ P and all x ∈ A: α·x ∈ A.

Statement. If L is an n-dimensional linear space and A is its subspace, then A is also a finite-dimensional linear space and its dimension does not exceed n.

Example 1. Is the set S of all vectors of the plane, each of which lies on one of the coordinate axes 0x or 0y, a subspace of the space V2 of vector segments?

Solution. Take a non-zero vector a ∈ S lying on the axis 0x and a non-zero vector b ∈ S lying on the axis 0y. Then the sum a + b lies on neither of the coordinate axes, i.e. a + b ∉ S. Therefore, S is not a subspace of V2.

Example 2. Is the set S of all plane vectors whose beginnings and ends lie on a given line l of the plane a subspace of the space V2 of vector segments?

Solution.

If a vector a ∈ S is multiplied by a real number k, we get the vector k·a, which also belongs to S. If a and b are two vectors from S, then a + b ∈ S (by the rule of addition of vectors on a line). Therefore, S is a subspace of V2.

Example 3. Is the set A of all plane vectors whose ends lie on a given line l a linear subspace of the linear space V2 (assuming that the beginning of every vector coincides with the origin)?

Solution.

In the case where the line l does not pass through the origin, the set A is not a linear subspace of the space V2, because the sum of two vectors from A is a vector whose end does not lie on l, i.e. a + b ∉ A.

In the case where the line l passes through the origin, the set A is a linear subspace of the space V2, because the sum of any two vectors from A again belongs to A, and when any vector from A is multiplied by a real number α from the field R we again get a vector from A. Thus, the linear space requirements for the set A are satisfied.

Example 4. Let a system of vectors a1, a2, …, am from a linear space L over a field P be given. Prove that the set of all possible linear combinations α1·a1 + α2·a2 + … + αm·am with coefficients α1, α2, …, αm from P is a subspace of L (this subspace A is called the subspace generated by the system of vectors a1, a2, …, am, or the linear span of this system of vectors, and is denoted L(a1, a2, …, am) or ⟨a1, a2, …, am⟩).

Solution. Indeed, for any elements x, y ∈ A we have x = α1·a1 + … + αm·am and y = β1·a1 + … + βm·am, where αi, βi ∈ P. Then x + y = (α1 + β1)·a1 + … + (αm + βm)·am. Since αi + βi ∈ P, we get x + y ∈ A.

Let us check the second condition of the theorem. If x is any vector from A and t is any number from P, then x = α1·a1 + … + αm·am and t·x = (t·α1)·a1 + … + (t·αm)·am. Since t·αi ∈ P for each i, we get t·x ∈ A. Thus, by the theorem, the set A is a subspace of the linear space L.

For finite-dimensional linear spaces, the converse is also true.

Theorem. Any subspace A of a linear space L over a field P is the linear span of some system of vectors.

When solving the problem of finding the basis and dimension of a linear span, the following theorem is used.

Theorem. A basis of the linear span L(a1, a2, …, am) coincides with a basis of the system of vectors a1, a2, …, am, and the dimension of the linear span coincides with the rank of this system of vectors.

Example 5. Find the basis and dimension of the subspace S of the linear space R3[x] spanned by four given polynomials.

Solution. It is known that vectors and their coordinate rows (columns) have the same properties with respect to linear dependence. We form the matrix A from the coordinate columns of the given polynomials in the basis 1, x, x², x³.

We find the rank of the matrix A: its determinant turns out to be zero, while there is a non-zero third-order minor M3.

Therefore, the rank is r(A) = 3, so the rank of the system of vectors is also 3. Hence, the dimension of the subspace S is 3, and its basis consists of the three vectors whose coordinates enter the basic minor M3; this system of vectors is linearly independent.

It can be verified that the system becomes linearly dependent after adjoining any vector x from H. This proves that it is a maximal linearly independent system of vectors of the subspace H, i.e. a basis of H, and dim H = n².
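Returning to the technique of Example 5, here is a sketch of it in code (the four polynomials are made up, written by their coordinate columns in the basis 1, x, x², x³):

```python
import numpy as np

# Hypothetical polynomials p1..p4, as coordinates in the basis 1, x, x^2, x^3.
p1 = [1, 0, 1, 0]        # 1 + x^2
p2 = [0, 1, 0, 1]        # x + x^3
p3 = [1, 1, 1, 1]        # p1 + p2, deliberately dependent
p4 = [0, 0, 1, 1]        # x^2 + x^3

A = np.array([p1, p2, p3, p4], dtype=float).T   # coordinate columns
rank = np.linalg.matrix_rank(A)
print(rank)              # 3 = dimension of the spanned subspace
```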


1. Let the subspace L = L(a1, a2, …, am), that is, let L be the linear span of the system a1, a2, …, am; the vectors a1, a2, …, am form a system of generators of this subspace. Then a basis of L is a basis of the system of vectors a1, a2, …, am, that is, a basis of the system of generators, and the dimension of L equals the rank of the system of generators.

2. Let the subspace L be the sum of subspaces L1 and L2. A system of generators of the sum can be obtained by combining systems of generators of the summands, after which a basis of the sum is found. The dimension of the sum is found by the following formula (a computational sketch is given after item 3 below):

dim(L1 + L2) = dim L1 + dim L2 − dim(L1 ∩ L2).

3. Let the sum of the subspaces L1 and L2 be direct, that is, L = L1 ⊕ L2. In this case L1 ∩ L2 = {o} and dim(L1 ∩ L2) = 0. A basis of the direct sum is the union of bases of the summands, and the dimension of the direct sum equals the sum of the dimensions of the summands.
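A minimal sketch of items 2 and 3 (the generators are made up; the dimension of the intersection is obtained from the formula of item 2):

```python
import numpy as np

# Hypothetical generators of L1 and L2 in R^4 (as rows).
L1 = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0]], dtype=float)
L2 = np.array([[0, 1, 0, 0],
               [0, 0, 1, 0]], dtype=float)

dim1 = np.linalg.matrix_rank(L1)
dim2 = np.linalg.matrix_rank(L2)
# Generators of the sum = union of the generators of the summands.
dim_sum = np.linalg.matrix_rank(np.vstack([L1, L2]))
dim_int = dim1 + dim2 - dim_sum   # dim(L1 ∩ L2) by the formula of item 2

print(dim_sum, dim_int)           # 3 1  -> this sum is not direct
```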

4. Let us give an important example of a subspace and a linear manifold.

Consider a homogeneous system of m linear equations with n unknowns. The set M0 of solutions of this system is a subset of the set R^n that is closed under addition of vectors and under multiplication of a vector by a real number. This means that the set M0 is a subspace of the space R^n. A basis of this subspace is a fundamental set of solutions of the homogeneous system, and the dimension of the subspace equals the number of vectors in a fundamental set of solutions of the system.

The set M of all solutions of a system of m linear equations with n unknowns is also a subset of the set R^n, and it equals the sum of the set M0 and a vector a, where a is some particular solution of the original system and M0 is the set of solutions of the homogeneous system of linear equations accompanying the given one (it differs from the original system only in its free terms):

M = a + M0 = {a + m : m ∈ M0}.

This means that the set M is a linear manifold of the space R^n with shift vector a and direction M0.

Example 8.6. Find the basis and dimension of a subspace given by a homogeneous system of linear equations:

Solution. We find the general solution of this system and its fundamental set of solutions: c1 = (−21, 12, 1, 0, 0), c2 = (12, −8, 0, 1, 0), c3 = (11, −8, 0, 0, 1).

The basis of the subspace is formed by the vectors c1, c2, c3; its dimension is three.
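These three vectors can be checked for linear independence directly; a minimal sketch:

```python
import numpy as np

c1 = np.array([-21, 12, 1, 0, 0], dtype=float)
c2 = np.array([ 12, -8, 0, 1, 0], dtype=float)
c3 = np.array([ 11, -8, 0, 0, 1], dtype=float)

# The unit "tail" (free-variable part) already guarantees independence;
# the rank confirms it.
print(np.linalg.matrix_rank(np.vstack([c1, c2, c3])))   # 3
```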

A linear space V is called n-dimensional if it contains a system of n linearly independent vectors and any system of a larger number of vectors is linearly dependent. The number n is called the dimension (number of measurements) of the linear space V and is denoted \operatorname{dim}V. In other words, the dimension of a space is the maximum number of linearly independent vectors in this space. If such a number exists, the space is called finite-dimensional. If for any natural number n there is a system of n linearly independent vectors in the space V, then such a space is called infinite-dimensional (written: \operatorname{dim}V=\infty). In what follows, unless otherwise stated, finite-dimensional spaces are considered.


A basis of an n-dimensional linear space is an ordered set of n linearly independent vectors (basis vectors).


Theorem 8.1 on the expansion of a vector in terms of a basis. If \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n is a basis of an n-dimensional linear space V, then any vector \mathbf{v}\in V can be represented as a linear combination of the basis vectors:


\mathbf{v}=v_1\cdot \mathbf{e}_1+v_2\cdot \mathbf{e}_2+\ldots+v_n\cdot \mathbf{e}_n


and, moreover, in a unique way, i.e. the coefficients v_1, v_2,\ldots, v_n are determined unambiguously. In other words, any vector of the space can be expanded in a basis, and in a unique way.


Indeed, the dimension of the space V equals n. The system of vectors \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n is linearly independent (it is a basis). After adding any vector \mathbf{v} to the basis, we get the linearly dependent system \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n,\mathbf{v} (since this system consists of (n+1) vectors of the n-dimensional space). By property 7 of linearly dependent and linearly independent vectors, we obtain the conclusion of the theorem.


Corollary 1. If \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n is a basis of the space V, then V=\operatorname{Lin}(\mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n), i.e. the linear space is the linear span of its basis vectors.


Indeed, to prove the equality V=\operatorname{Lin}(\mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n) of the two sets, it suffices to show that the inclusions V\subset \operatorname{Lin}(\mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n) and \operatorname{Lin}(\mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n)\subset V hold at the same time. Indeed, on the one hand, any linear combination of vectors of a linear space belongs to the linear space itself, i.e. \operatorname{Lin}(\mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n)\subset V. On the other hand, by Theorem 8.1 any vector of the space can be represented as a linear combination of basis vectors, i.e. V\subset \operatorname{Lin}(\mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n). This implies the equality of the sets under consideration.


Corollary 2. If \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n is a linearly independent system of vectors in the linear space V and any vector \mathbf{v}\in V can be represented as a linear combination (8.4): \mathbf{v}=v_1\mathbf{e}_1+ v_2\mathbf{e}_2+\ldots+v_n\mathbf{e}_n, then the space V has dimension n, and the system \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n is its basis.


Indeed, in the space V there is a system of n linearly independent vectors, and any system \mathbf{u}_1,\mathbf{u}_2,\ldots,\mathbf{u}_k of a larger number of vectors (k>n) is linearly dependent, since each vector of this system is linearly expressed in terms of the vectors \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n. Hence \operatorname{dim}V=n, and \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n is a basis of V.

Theorem 8.2 on the completion of a system of vectors to a basis. Any linearly independent system of k vectors of an n-dimensional linear space (1\leqslant k<n) can be completed to a basis of the space.

Indeed, let \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_k be a linearly independent system of vectors in an n-dimensional space V (1\leqslant k<n). Consider the linear span of these vectors: L_k=\operatorname{Lin}(\mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_k). Any vector \mathbf{v}\in L_k forms with the vectors \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_k a linearly dependent system \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_k,\mathbf{v}, since the vector \mathbf{v} is linearly expressed in terms of the others. Since there are n linearly independent vectors in the n-dimensional space, L_k\ne V, and there exists a vector \mathbf{e}_{k+1}\in V that does not belong to L_k. Completing the linearly independent system \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_k with this vector, we get the system of vectors \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_k,\mathbf{e}_{k+1}, which is also linearly independent. Indeed, if it turned out to be linearly dependent, then it would follow from item 1 of Remarks 8.3 that \mathbf{e}_{k+1}\in \operatorname{Lin}(\mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_k)=L_k, which contradicts the condition \mathbf{e}_{k+1}\notin L_k. So, the system of vectors \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_k,\mathbf{e}_{k+1} is linearly independent. This means that the original system of vectors was completed with one vector without violating the linear independence. We continue similarly. Consider the linear span of these vectors: L_{k+1}=\operatorname{Lin}(\mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_k,\mathbf{e}_{k+1}). If L_{k+1}=V, then \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_k,\mathbf{e}_{k+1} is a basis and the theorem is proved. If L_{k+1}\ne V, then we complete the system \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_k,\mathbf{e}_{k+1} with a vector \mathbf{e}_{k+2}\notin L_{k+1}, and so on. The completion process necessarily ends, since the space V is finite-dimensional. As a result, we get the equality V=L_n=\operatorname{Lin}(\mathbf{e}_1,\ldots,\mathbf{e}_k,\ldots,\mathbf{e}_n), from which it follows that \mathbf{e}_1,\ldots,\mathbf{e}_k,\ldots,\mathbf{e}_n is a basis of the space V. The theorem is proved.

Remarks 8.4


1. A basis of a linear space is not uniquely determined. For example, if \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n is a basis of the space V, then the system of vectors \lambda \mathbf{e}_1,\lambda \mathbf{e}_2,\ldots,\lambda \mathbf{e}_n is also a basis of V for any \lambda\ne0. The number of basis vectors in different bases of the same finite-dimensional space is, of course, the same, since this number equals the dimension of the space.


2. In some spaces, often encountered in applications, one of the possible bases, the most convenient from a practical point of view, is called the standard one.


3. Theorem 8.1 allows us to say that a basis is a complete system of elements of a linear space, in the sense that any space vector is linearly expressed in terms of basis vectors.


4. If the set \mathbb{L} is a linear span \operatorname{Lin}(\mathbf{v}_1,\mathbf{v}_2,\ldots,\mathbf{v}_k), then the vectors \mathbf{v}_1,\mathbf{v}_2,\ldots,\mathbf{v}_k are called generators of the set \mathbb{L}. Corollary 1 of Theorem 8.1, by virtue of the equality V=\operatorname{Lin}(\mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n), allows us to say that a basis is a minimal generating system of the linear space V, since it is impossible to reduce the number of generators (to remove even one vector from the set \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n) without violating the equality V=\operatorname{Lin}(\mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n).


5. Theorem 8.2 allows us to say that a basis is a maximal linearly independent system of vectors of a linear space, since a basis is a linearly independent system of vectors that cannot be completed by any vector without losing linear independence.


6. It is convenient to use Corollary 2 of Theorem 8.1 for finding the basis and dimension of a linear space. In some textbooks it is taken as the definition of a basis, namely: a linearly independent system \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n of vectors of a linear space is called a basis if any vector of the space is linearly expressed in terms of the vectors \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n. The number of basis vectors determines the dimension of the space. Of course, these definitions are equivalent to those given above.

Examples of bases for linear spaces

We indicate the dimension and basis for the examples of linear spaces considered above.


1. The zero linear space \{\mathbf{o}\} does not contain linearly independent vectors. Therefore the dimension of this space is taken to be zero: \dim\{\mathbf{o}\}=0. This space has no basis.


2. The spaces V_1,\,V_2,\,V_3 have dimensions 1, 2, 3 respectively. Indeed, any non-zero vector of the space V_1 forms a linearly independent system (see item 1 of Remarks 8.2), and any two non-zero vectors of the space V_1 are collinear, i.e. linearly dependent (see Example 8.1). Therefore \dim(V_1)=1, and a basis of the space V_1 is any non-zero vector. Similarly, one proves that \dim(V_2)=2 and \dim(V_3)=3. A basis of the space V_2 is any two non-collinear vectors taken in a certain order (one of them is considered the first basis vector, the other the second). A basis of the space V_3 is any three non-coplanar vectors (not lying in the same plane or in parallel planes), taken in a certain order. The standard basis in V_1 is the unit vector \vec{i} on the line. The standard basis in V_2 is the basis \vec{i},\,\vec{j} consisting of two mutually perpendicular unit vectors of the plane. The standard basis in the space V_3 is the basis \vec{i},\,\vec{j},\,\vec{k} composed of three pairwise perpendicular unit vectors forming a right-handed triple.


3. The space \mathbb{R}^n contains at most n linearly independent vectors. Indeed, take k columns from \mathbb{R}^n and form a matrix of size n\times k from them. If k>n, then the columns are linearly dependent by Theorem 3.4 on the rank of a matrix. Hence \dim(\mathbb{R}^n)\leqslant n. In the space \mathbb{R}^n it is not difficult to find n linearly independent columns. For example, the columns of the identity matrix


\mathbf{e}_1=\begin{pmatrix}1\\0\\\vdots\\0\end{pmatrix}\!,\quad \mathbf{e}_2=\begin{pmatrix}0\\1\\\vdots\\0\end{pmatrix}\!,\quad \ldots,\quad \mathbf{e}_n=\begin{pmatrix}0\\0\\\vdots\\1\end{pmatrix}\!.


are linearly independent. Hence \dim(\mathbb{R}^n)=n. The space \mathbb{R}^n is called the n-dimensional real arithmetic space. The indicated set of vectors is considered the standard basis of the space \mathbb{R}^n. Similarly, it is proved that \dim(\mathbb{C}^n)=n, so the space \mathbb{C}^n is called the n-dimensional complex arithmetic space.


4. Recall that any solution of the homogeneous system Ax=o can be represented as x=C_1\varphi_1+C_2\varphi_2+\ldots+C_{n-r}\varphi_{n-r}, where r=\operatorname{rg}A and \varphi_1,\varphi_2,\ldots,\varphi_{n-r} is a fundamental system of solutions. Hence \{Ax=o\}=\operatorname{Lin}(\varphi_1,\varphi_2,\ldots,\varphi_{n-r}), i.e. a basis of the space \{Ax=o\} of solutions of the homogeneous system is its fundamental system of solutions, and the dimension of the space is \dim\{Ax=o\}=n-r, where n is the number of unknowns and r is the rank of the system matrix.


5. In the space M_{2\times3} of matrices of size 2\times3, the following 6 matrices can be selected:


\begin{gathered}\mathbf{e}_1= \begin{pmatrix}1&0&0\\0&0&0\end{pmatrix}\!,\quad \mathbf{e}_2= \begin{pmatrix}0&1&0\\0&0&0\end{pmatrix}\!,\quad \mathbf{e}_3= \begin{pmatrix}0&0&1\\0&0&0\end{pmatrix}\!,\hfill\\ \mathbf{e}_4= \begin{pmatrix}0&0&0\\1&0&0\end{pmatrix}\!,\quad \mathbf{e}_5= \begin{pmatrix}0&0&0\\0&1&0\end{pmatrix}\!,\quad \mathbf{e}_6= \begin{pmatrix}0&0&0\\0&0&1\end{pmatrix}\!,\hfill\end{gathered}


which are linearly independent. Indeed, their linear combination

\alpha_1\cdot \mathbf{e}_1+\alpha_2\cdot \mathbf{e}_2+\alpha_3\cdot \mathbf{e}_3+ \alpha_4\cdot \mathbf{e}_4+\alpha_5\cdot \mathbf{e}_5+ \alpha_6\cdot \mathbf{e}_6= \begin{pmatrix}\alpha_1&\alpha_2&\alpha_3\\ \alpha_4&\alpha_5&\alpha_6\end{pmatrix}


is equal to the zero matrix only in the trivial case \alpha_1=\alpha_2=\ldots=\alpha_6=0. Reading equality (8.5) from right to left, we conclude that any matrix from M_{2\times3} is linearly expressed in terms of the chosen 6 matrices, i.e. M_{2\times3}=\operatorname{Lin}(\mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_6). Hence \dim(M_{2\times3})=2\cdot3=6, and the matrices \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_6 form the (standard) basis of this space. Similarly, it is proved that \dim(M_{m\times n})=m\cdot n.


6. For any natural number n, in the space P(\mathbb{C}) of polynomials with complex coefficients one can find n linearly independent elements. For example, the polynomials \mathbf{e}_1=1, \mathbf{e}_2=z, \mathbf{e}_3=z^2,\,\ldots, \mathbf{e}_n=z^{n-1} are linearly independent, since their linear combination


a_1\cdot \mathbf{e}_1+a_2\cdot \mathbf{e}_2+\ldots+a_n\cdot \mathbf{e}_n= a_1+a_2z+\ldots+a_nz^{n-1}


is equal to the zero polynomial (o(z)\equiv0) only in the trivial case a_1=a_2=\ldots=a_n=0. Since this system of polynomials is linearly independent for any natural n, the space P(\mathbb{C}) is infinite-dimensional. Similarly, we conclude that the space P(\mathbb{R}) of polynomials with real coefficients is infinite-dimensional. The space P_n(\mathbb{R}) of polynomials of degree at most n is finite-dimensional. Indeed, the vectors \mathbf{e}_1=1, \mathbf{e}_2=x, \mathbf{e}_3=x^2,\,\ldots, \mathbf{e}_{n+1}=x^n form a (standard) basis of this space, since they are linearly independent and any polynomial from P_n(\mathbb{R}) can be represented as a linear combination of these vectors:


a_nx^n+\ldots+a_1x+a_0=a_0\cdot \mathbf{e}_1+a_1\cdot \mathbf{e}_2+\ldots+a_n\cdot \mathbf{e}_{n+1}. Hence \dim(P_n(\mathbb{R}))=n+1.


7. The space C(\mathbb{R}) of continuous functions is infinite-dimensional. Indeed, for any natural n the polynomials 1,x,x^2,\ldots,x^{n-1}, considered as continuous functions, form a linearly independent system (see the previous example).


In the space T_{\omega}(\mathbb{R}) of trigonometric binomials (of frequency \omega\ne0) with real coefficients, a basis is formed by the monomials \mathbf{e}_1(t)=\sin\omega t,~\mathbf{e}_2(t)=\cos\omega t. They are linearly independent, since the identity a\sin\omega t+b\cos\omega t\equiv0 is possible only in the trivial case (a=b=0). Any function of the form f(t)=a\sin\omega t+b\cos\omega t is linearly expressed in terms of the basis ones: f(t)=a\,\mathbf{e}_1(t)+b\,\mathbf{e}_2(t).


8. The space \mathbb{R}^X of real functions defined on a set X, depending on the domain X, can be finite-dimensional or infinite-dimensional. If X is a finite set, then the space \mathbb{R}^X is finite-dimensional (for example, X=\{1,2,\ldots,n\}). If X is an infinite set, then the space \mathbb{R}^X is infinite-dimensional (for example, the space \mathbb{R}^{\mathbb{N}} of sequences).


9. In the space \mathbb{R}^{+} any positive number \mathbf{e}_1 not equal to 1 can serve as a basis. Take, for example, the number \mathbf{e}_1=2. Any positive number r can be expressed in terms of \mathbf{e}_1, i.e. represented in the form \alpha_1\cdot \mathbf{e}_1\colon r=2^{\log_2r}=\alpha_1\cdot \mathbf{e}_1, where \alpha_1=\log_2r and the product is understood in the sense of the operations of \mathbb{R}^{+} (multiplication by a scalar is raising to a power). Therefore, the dimension of this space is 1, and the number \mathbf{e}_1=2 is a basis.
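A tiny sketch of this example (in \mathbb{R}^{+} the role of vector addition is played by multiplication of numbers, and multiplication by a scalar is raising to a power):

```python
import math

add  = lambda x, y: x * y        # "vector addition" in R^+
smul = lambda a, x: x ** a       # "multiplication by a scalar" a

e1 = 2.0                         # the chosen basis "vector"
r  = 10.0                        # an arbitrary positive number
alpha = math.log2(r)             # its (single) coordinate in the basis e1

print(smul(alpha, e1))           # 10.0 (up to rounding): r is recovered
print(add(smul(2, e1), e1))      # 2^2 * 2 = 8.0 = smul(3, e1): axioms at work
```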


10. Let \mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n be a basis of a real linear space V. Define linear scalar functions on V by setting:


\mathcal{E}_i(\mathbf{e}_j)=\begin{cases}1,&i=j,\\ 0,&i\ne j.\end{cases}


At the same time, due to the linearity of the function \mathcal{E}_i, for an arbitrary vector \mathbf{v}=v_1\mathbf{e}_1+v_2\mathbf{e}_2+\ldots+v_n\mathbf{e}_n we obtain \mathcal{E}_i(\mathbf{v})=\sum_{j=1}^{n}v_j \mathcal{E}_i(\mathbf{e}_j)=v_i.


So, n elements (covectors) \mathcal{E}_1, \mathcal{E}_2, \ldots, \mathcal{E}_n of the dual space V^{\ast} are defined. Let us prove that \mathcal{E}_1, \mathcal{E}_2,\ldots, \mathcal{E}_n is a basis of V^{\ast}.


First, we show that the system \mathcal{E}_1, \mathcal{E}_2,\ldots, \mathcal{E}_n is linearly independent. Indeed, take a linear combination \alpha_1 \mathcal{E}_1+\ldots+\alpha_n\mathcal{E}_n of these covectors and equate it to the zero function \mathbf{o} (\mathbf{o}(\mathbf{v})=0~\forall \mathbf{v}\in V):

\alpha_1\mathcal{E}_1(\mathbf{v})+\ldots+\alpha_n\mathcal{E}_n(\mathbf{v})= \mathbf{o}(\mathbf{v})=0~~\forall \mathbf{v}\in V.


Substituting into this equality \mathbf{v}=\mathbf{e}_i,~ i=1,\ldots,n, we get \alpha_1=\alpha_2=\ldots=\alpha_n=0. Therefore, the system of elements \mathcal{E}_1,\mathcal{E}_2,\ldots,\mathcal{E}_n of the space V^{\ast} is linearly independent, since the equality \alpha_1\mathcal{E}_1+\ldots+ \alpha_n\mathcal{E}_n =\mathbf{o} is possible only in the trivial case.


Second, we prove that any linear function f\in V^{\ast} can be represented as a linear combination of the covectors \mathcal{E}_1, \mathcal{E}_2,\ldots, \mathcal{E}_n. Indeed, for any vector \mathbf{v}=v_1 \mathbf{e}_1+v_2 \mathbf{e}_2+\ldots+v_n \mathbf{e}_n, due to the linearity of the function f we obtain:


\begin{aligned}f(\mathbf{v})&= f(v_1 \mathbf{e}_1+\ldots+v_n \mathbf{e}_n)= v_1 f(\mathbf{e}_1)+\ldots+ v_n f(\mathbf{e}_n)= f(\mathbf{e}_1)\mathcal{E}_1(\mathbf{v})+ \ldots+ f(\mathbf{e}_n)\mathcal{E}_n(\mathbf{v})=\\ &=(f(\mathbf{e}_1)\mathcal{E}_1+\ldots+ f(\mathbf{e}_n)\mathcal{E}_n)(\mathbf{v})= (\beta_1\mathcal{E}_1+ \ldots+\beta_n\mathcal{E}_n) (\mathbf{v}),\end{aligned}


i.e. the function f is represented as the linear combination f=\beta_1 \mathcal{E}_1+\ldots+\beta_n\mathcal{E}_n of the functions \mathcal{E}_1,\mathcal{E}_2,\ldots, \mathcal{E}_n (the numbers \beta_i=f(\mathbf{e}_i) are the coefficients of the linear combination). Therefore, the system of covectors \mathcal{E}_1, \mathcal{E}_2,\ldots, \mathcal{E}_n is a basis of the dual space V^{\ast}, and \dim(V^{\ast})=\dim(V) (for a finite-dimensional space V).
