Matrix Theory and Linear Algebra
I 

INTRODUCTION 
Matrix Theory and Linear Algebra, interconnected branches of mathematics that serve as fundamental tools in pure and applied mathematics and are becoming increasingly important in the physical, biological, and social sciences.
II 

MATRIX THEORY 
A matrix is a rectangular array of numbers or elements of a ring (see Algebra). One of the principal uses of matrices is in representing systems of equations of the first degree in several unknowns. Each matrix row represents one equation, and the entries in a row are the coefficients of the variables in the equations, in some fixed order.
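As a concrete illustration of rows-as-equations, the first-degree system below (the particular coefficients are made up for this example) can be written as a coefficient matrix and solved numerically; this is a minimal sketch assuming NumPy is available:

```python
# Solving a first-degree (linear) system via its coefficient matrix.
# Illustrative system (made up for this example):
#   2x + 3y = 8
#   1x - 1y = -1
# Each matrix row holds the coefficients of one equation, in the
# fixed order (x, y); the right-hand sides form a separate vector.
import numpy as np

A = np.array([[2.0, 3.0],
              [1.0, -1.0]])   # coefficient matrix
b = np.array([8.0, -1.0])     # right-hand sides

solution = np.linalg.solve(A, b)
print(solution)  # x = 1, y = 2
```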
A matrix is usually enclosed in brackets:
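For concreteness, one possible set of matrices M_{1}–M_{4} matching the description that follows is shown below; the particular entries are illustrative assumptions, chosen so that M_{1} has the element 1 in its second row, third column, M_{2} contains the arbitrary numbers a, b, and c, and the sizes are 3 × 3, 3 × 3, 3 × 2, and 2 × 3:

```latex
\[
M_1=\begin{bmatrix}4&2&0\\3&6&1\\5&7&8\end{bmatrix},\quad
M_2=\begin{bmatrix}a&0&0\\0&b&0\\0&0&c\end{bmatrix},\quad
M_3=\begin{bmatrix}1&4\\2&5\\3&6\end{bmatrix},\quad
M_4=\begin{bmatrix}1&2&3\\4&5&6\end{bmatrix}
\]
```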
In the above matrices, a, b, and c are arbitrary numbers. In place of brackets, parentheses or double vertical lines may be used to enclose the arrays. The horizontal lines, called rows, are numbered from the top down; the vertical lines, or columns, are numbered from left to right; thus, 1 is the element in the second row, third column of M_{1}. A row or column is called a line.
The size of a matrix is given by the number of rows and columns, so that M_{1}, M_{2}, M_{3}, and M_{4} are, in that order, of sizes 3 × 3 (3 by 3), 3 × 3, 3 × 2, and 2 × 3. The general matrix of size m × n is frequently represented in double-subscript notation, with the first subscript i indicating the row number, and the second subscript j indicating the column number; a_{23} is the element in the second row, third column. This general matrix
may be abbreviated to A = [a_{ij}], in which the ranges i = 1, 2, ..., m and j = 1, 2, ..., n should be explicitly given if they are not implied by the text. If m = n, the matrix is square, and the number of rows (or columns) is the order of the matrix. Two matrices, A = [a_{ij}] and B = [b_{ij}], are equal if and only if they are of the same size and if, for every i and j, a_{ij} = b_{ij}. The elements a_{11}, a_{22}, a_{33}, ... constitute the main or principal diagonal of the matrix A = [a_{ij}], if it is square. The transpose A^{T} of a matrix A is the matrix in which the ith row is the ith column of A and in which the jth column is the jth row of A; thus, the transpose of the 3 × 2 matrix M_{3} above is the 2 × 3 matrix whose rows are the columns of M_{3}.
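The transposition rule can be sketched in a few lines of Python (the entries of A here are illustrative, not from the source):

```python
# Transpose: row i of A becomes column i of A^T, and vice versa.
# A is 3 x 2 here (illustrative entries), so A^T is 2 x 3.
import numpy as np

A = np.array([[1, 4],
              [2, 5],
              [3, 6]])

print(A.T)
# [[1 2 3]
#  [4 5 6]]
```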
Addition and multiplication of matrices can be defined so that certain sets of matrices form algebraic systems. Here the elements of the matrices are taken to be arbitrary real numbers, although they could equally be chosen from other fields or rings. A zero matrix is one in which all the elements are zero; an identity matrix I_{m} of order m is a square matrix of order m in which all the elements are zero except those on the main diagonal, which are 1. The order of an identity matrix may be omitted if it is implied by the context, and I_{m} is then shortened to I.
The sum of two matrices is defined only if they are of the same size; if A = [a_{ij}] and B = [b_{ij}] are of the same size, then C = A + B is defined as the matrix [c_{ij}], in which c_{ij} = a_{ij} + b_{ij}; that is, two matrices of the same size are added merely by adding corresponding elements.
The set of all matrices of a fixed size has the property that addition is closed, associative, and commutative; a unique matrix O exists such that for any matrix A, A + O = O + A = A; and, corresponding to any matrix A, there exists a unique matrix B such that A + B = B + A = O.
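The definition of the sum and the properties just listed can be checked numerically; a minimal sketch with made-up entries, assuming NumPy:

```python
# Matrix addition is elementwise and defined only for equal sizes.
import numpy as np

A = np.array([[1, 2], [3, 4]])   # illustrative entries
B = np.array([[5, 6], [7, 8]])
O = np.zeros_like(A)             # the zero matrix of the same size

C = A + B
print(C)                         # [[ 6  8] [10 12]]
assert (A + B == B + A).all()    # addition is commutative
assert (A + O == A).all()        # O is the additive identity
assert (A + (-A) == O).all()     # -A is the additive inverse of A
```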
The product AB of two matrices, A and B, is defined only if the number of columns of the left factor A is the same as the number of rows of the right factor B; if A = [a_{ij}] is of size m × n and B = [b_{jk}] is of size n × p, the product AB = C = [c_{ik}] is of size m × p, and c_{ik} is given by

c_{ik} = a_{i1}b_{1k} + a_{i2}b_{2k} + ... + a_{in}b_{nk}
That is, the element in the ith row and kth column of the product is the sum of the products of the elements of the ith row of the left factor multiplied by the corresponding elements of the kth column of the right factor.
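The row-by-column rule can be written out directly; a minimal plain-Python sketch (the matrices here are illustrative):

```python
# Matrix product: entry c[i][k] is the sum over j of a[i][j] * b[j][k].
# A is m x n and B is n x p, so C = AB is m x p.

def matmul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "columns of A must match rows of B"
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(p)]
            for i in range(m)]

A = [[1, 2, 3],
     [4, 5, 6]]          # 2 x 3 (illustrative entries)
B = [[7, 8],
     [9, 10],
     [11, 12]]           # 3 x 2

print(matmul(A, B))      # [[58, 64], [139, 154]]
```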
III 

LINEAR ALGEBRA 
The geometric concept of a vector as a line segment of given length and direction can be advantageously generalized as follows. An n-vector (n-dimensional vector, vector of order n, vector of length n) is an ordered set of n elements of a field. As in matrix theory, the elements are assumed here to be real numbers. An n-vector v is represented as:
v = [x_{1}, x_{2}, ..., x_{n}] 
In particular, the lines of a matrix are vectors; the horizontal lines are row vectors, the vertical lines are column vectors. The x's are called the components of the vector.
Addition of vectors (of the same length) and scalar multiplication are defined as for matrices and satisfy the same laws. If
w = [y_{1}, y_{2}, ..., y_{n}] 
and k is a scalar (a real number), then
v + w = [x_{1} + y_{1}, x_{2} + y_{2}, ..., x_{n} + y_{n}] 
kv = [kx_{1}, kx_{2}, ..., kx_{n}] 
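The two operations can be sketched componentwise in Python (the vectors here are illustrative):

```python
# Vector addition (same length only) and scalar multiplication,
# both defined componentwise, as for matrices.

def add(v, w):
    assert len(v) == len(w), "vectors must have the same length"
    return [x + y for x, y in zip(v, w)]

def scale(k, v):
    return [k * x for x in v]

v = [1, 2, 3, 4]       # illustrative 4-vectors
w = [5, 6, 7, 8]

print(add(v, w))       # [6, 8, 10, 12]
print(scale(3, v))     # [3, 6, 9, 12]
```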
If k_{1}, k_{2}, ..., k_{m} are scalars and v_{1}, v_{2}, ..., v_{m} are n-vectors, the n-vector
v = k_{1}v_{1} + k_{2}v_{2} + ... + k_{m}v_{m} 
is called a linear combination of the vectors v_{1}, v_{2}, ..., v_{m}.
The m n-vectors v_{1}, v_{2}, ..., v_{m} are linearly independent if the only linear combination equal to the zero n-vector, 0 = [0, 0, ..., 0], is the one in which k_{1} = k_{2} = ... = k_{m} = 0; otherwise, the vectors are linearly dependent. For example, if v_{1} = [0, 1, 2, 3], v_{2} = [1, 2, 3, 4], v_{3} = [2, 2, 4, 4], v_{4} = [3, 4, 7, 8], then v_{1}, v_{2}, v_{3} are linearly independent, because k_{1}v_{1} + k_{2}v_{2} + k_{3}v_{3} = 0 if and only if k_{1} = k_{2} = k_{3} = 0; and v_{2}, v_{3}, and v_{4} are linearly dependent because v_{2} + v_{3} − v_{4} = 0. If A is a matrix of rank r, then at least one set of r row, or column, vectors is a linearly independent set, and every set of more than r row, or column, vectors is a linearly dependent set.
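A quick numerical check of the worked example above, using NumPy's `matrix_rank` to confirm the independence claims:

```python
# v1, v2, v3 are linearly independent (the matrix with these rows has
# rank 3), while v2, v3, v4 are linearly dependent (v2 + v3 - v4 = 0,
# so the rank drops to 2).
import numpy as np

v1 = np.array([0, 1, 2, 3])
v2 = np.array([1, 2, 3, 4])
v3 = np.array([2, 2, 4, 4])
v4 = np.array([3, 4, 7, 8])

print(np.linalg.matrix_rank(np.array([v1, v2, v3])))  # 3 -> independent
print(np.linalg.matrix_rank(np.array([v2, v3, v4])))  # 2 -> dependent
print(v2 + v3 - v4)                                   # [0 0 0 0]
```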
A vector space V is a nonempty set of vectors (see Set Theory), with the properties that (1) if v ∈ V and w ∈ V, then v + w ∈ V, and (2) if v ∈ V and k is any scalar, then kv ∈ V. If S = {v_{i}} is a set of vectors, all of the same length, all linear combinations of the v's form a vector space said to be spanned by the v's. If a set B = {w_{j}} spans the same vector space V and is a linearly independent set, then B is a basis for V. If a basis for V contains m vectors, every basis for V will contain exactly m vectors, and V is called a vector space of dimension m. Two- and three-dimensional Euclidean spaces are vector spaces when their points are regarded as specified by ordered pairs or triples of real numbers. Matrices may be used to describe linear changes from one vector space into another.
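The closing remark about matrices describing linear changes between vector spaces can be illustrated with a small sketch (the matrix and vectors are made-up examples): a 2 × 3 matrix maps 3-vectors to 2-vectors and preserves addition and scaling.

```python
# A matrix as a linear map between vector spaces: a 2 x 3 matrix A
# sends 3-vectors to 2-vectors, and respects addition and scaling.
import numpy as np

A = np.array([[1, 0, 2],
              [0, 3, 1]])      # illustrative 2 x 3 matrix
v = np.array([1, 1, 1])
w = np.array([2, 0, 5])
k = 4

assert (A @ (v + w) == A @ v + A @ w).all()   # additivity
assert (A @ (k * v) == k * (A @ v)).all()     # homogeneity
print(A @ v)   # a 2-vector: [3 4]
```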
Contributed By:
James Singer
Algebra
I 

INTRODUCTION 
Algebra, branch of mathematics in which letters are used to represent basic arithmetic relations. As in arithmetic, the basic operations of algebra are addition, subtraction, multiplication, division, and the extraction of roots. Arithmetic, however, cannot generalize mathematical relations such as the Pythagorean theorem, which states that the sum of the squares of the two legs of any right triangle equals the square of the hypotenuse. Arithmetic can only produce specific instances of these relations (for example, 3, 4, and 5, where 3^{2} + 4^{2} = 5^{2}). But algebra can make a purely general statement that fulfills the conditions of the theorem: a^{2} + b^{2} = c^{2}. Any number multiplied by itself is termed squared and is indicated by a superscript number 2. For example, 3 × 3 is notated 3^{2}; similarly, a × a is equivalent to a^{2} (see Exponent; Power; Root).
Classical algebra, which is concerned with solving equations, uses symbols instead of specific numbers and uses arithmetic operations to establish ways of handling symbols (see Equation; Equations, Theory of). Modern algebra has evolved from classical algebra by giving increased attention to the structures within mathematics. Mathematicians consider modern algebra to be the study of sets of objects together with rules for connecting or relating them. As such, in its most general form, algebra may fairly be described as the language of mathematics.
II 

HISTORY 
The history of algebra began in ancient Egypt and Babylon, where people learned to solve linear (ax = b) and quadratic (ax^{2} + bx = c) equations, as well as indeterminate equations such as x^{2} + y^{2} = z^{2}, in which several unknowns are involved. The ancient Babylonians solved arbitrary quadratic equations by essentially the same procedures taught today. They also could solve some indeterminate equations.
The Alexandrian mathematicians Hero of Alexandria and Diophantus continued the traditions of Egypt and Babylon, but Diophantus's book Arithmetica is on a much higher level and gives many surprising solutions to difficult indeterminate equations. This ancient knowledge of solutions of equations in turn found a home early in the Islamic world, where it was known as the "science of restoration and balancing." (The Arabic word for restoration, al-jabru, is the root of the word algebra.) In the 9th century, the Arab mathematician al-Khwārizmī wrote one of the first Arabic algebras, a systematic exposition of the basic theory of equations, with both examples and proofs. By the end of the 9th century, the Egyptian mathematician Abu Kamil had stated and proved the basic laws and identities of algebra and solved such complicated problems as finding x, y, and z such that x + y + z = 10, x^{2} + y^{2} = z^{2}, and xz = y^{2}.