First published on Saturday, Jul 6, 2024 and last modified on Thursday, Apr 10, 2025
Mathedu SAS
In this paper, we shall discover a way to denote column vectors with two real elements that will prove very useful for manipulating these vectors more synthetically, with a notation \( \overrightarrow{u}\) rather than the lengthy matrix notation.
With that notation, we shall develop many useful properties of the element-by-element addition, subtraction, multiplication by a scalar and division by a nonzero scalar.
In the plane, we define an origin and a canonical basis as follows:
The origin \( O\) is the null vector \( \overrightarrow{0}=\begin{bmatrix}0\\ 0\end{bmatrix}\) .
The \( x\) axis is directed by the vector \( \overrightarrow{i}=\begin{bmatrix}1\\ 0\end{bmatrix}\) (abscissa \( 1\) , ordinate \( 0\) ).
The \( y\) axis is directed by the vector \( \overrightarrow{j}=\begin{bmatrix}0\\ 1\end{bmatrix}\) (abscissa \( 0\) , ordinate \( 1\) ).
The canonical basis \( (\overrightarrow{i},\overrightarrow{j})\) consists of the two column vectors represented graphically in figure 1.
The abscissas and ordinates of the vectors drawn in figure 2 are given in table 1 below.
Note that the abscissa of a column vector is its first element, and its ordinate is its second element.
| Vector | Abscissa | Ordinate | Vector | Abscissa | Ordinate |
|---|---|---|---|---|---|
| \( \overrightarrow{0}=\begin{bmatrix}0\\ 0\end{bmatrix}\) | \( 0\) | \( 0\) | | | |
| \( \overrightarrow{i}=\begin{bmatrix}1\\ 0\end{bmatrix}\) | \( 1\) | \( 0\) | \( \overrightarrow{j}=\begin{bmatrix}0\\ 1\end{bmatrix}\) | \( 0\) | \( 1\) |
| \( \overrightarrow{v}_1=\begin{bmatrix}1\\ 1\end{bmatrix}\) | \( 1\) | \( 1\) | \( \overrightarrow{v}_2=\begin{bmatrix}-1\\ 1\end{bmatrix}\) | \( -1\) | \( 1\) |
| \( \overrightarrow{v}_3=\begin{bmatrix}-1\\ 0\end{bmatrix}\) | \( -1\) | \( 0\) | \( \overrightarrow{v}_4=\begin{bmatrix}-1\\ {-1}\end{bmatrix}\) | \( -1\) | \( -1\) |
| \( \overrightarrow{v}_5=\begin{bmatrix}0\\ {-1}\end{bmatrix}\) | \( 0\) | \( -1\) | \( \overrightarrow{v}_6=\begin{bmatrix}1\\ {-1}\end{bmatrix}\) | \( 1\) | \( -1\) |
| \( \overrightarrow{u}_1=\begin{bmatrix}2\\ 0\end{bmatrix}\) | \( 2\) | \( 0\) | \( \overrightarrow{u}_2=\begin{bmatrix}2\\ 1\end{bmatrix}\) | \( 2\) | \( 1\) |
| \( \overrightarrow{u}_3=\begin{bmatrix}2\\ 2\end{bmatrix}\) | \( 2\) | \( 2\) | \( \overrightarrow{u}_4=\begin{bmatrix}1\\ 2\end{bmatrix}\) | \( 1\) | \( 2\) |
| \( \overrightarrow{u}_5=\begin{bmatrix}0\\ 2\end{bmatrix}\) | \( 0\) | \( 2\) | \( \overrightarrow{u}_6=\begin{bmatrix}-1\\ 2\end{bmatrix}\) | \( -1\) | \( 2\) |
| \( \overrightarrow{u}_7=\begin{bmatrix}-2\\ 2\end{bmatrix}\) | \( -2\) | \( 2\) | \( \overrightarrow{u}_8=\begin{bmatrix}-2\\ 1\end{bmatrix}\) | \( -2\) | \( 1\) |
| \( \overrightarrow{u}_9=\begin{bmatrix}-2\\ 0\end{bmatrix}\) | \( -2\) | \( 0\) | \( \overrightarrow{u}_{10}=\begin{bmatrix}-2\\ {-1}\end{bmatrix}\) | \( -2\) | \( -1\) |
| \( \overrightarrow{u}_{11}=\begin{bmatrix}-2\\ {-2}\end{bmatrix}\) | \( -2\) | \( -2\) | \( \overrightarrow{u}_{12}=\begin{bmatrix}-1\\ {-2}\end{bmatrix}\) | \( -1\) | \( -2\) |
| \( \overrightarrow{u}_{13}=\begin{bmatrix}0\\ {-2}\end{bmatrix}\) | \( 0\) | \( -2\) | \( \overrightarrow{u}_{14}=\begin{bmatrix}1\\ {-2}\end{bmatrix}\) | \( 1\) | \( -2\) |
| \( \overrightarrow{u}_{15}=\begin{bmatrix}2\\ {-2}\end{bmatrix}\) | \( 2\) | \( -2\) | \( \overrightarrow{u}_{16}=\begin{bmatrix}2\\ {-1}\end{bmatrix}\) | \( 2\) | \( -1\) |
We shall now give the geometrical meaning of the coordinates, abscissa and ordinate, of a vector in different configurations with respect to the axes.
Theorem 1
For a vector \( \overrightarrow{u}=\begin{bmatrix}a\\ b\end{bmatrix}\) , where \( (a,b)\in(\mathbb{R}_+^*)^2\) are real numbers such that \( a>0\) and \( b>0\) , its abscissa \( x_{u}\) and its ordinate \( y_{u}\) are the following:
\( x_{u}=a\) is the distance between the origin and the projection on the \( x\) axis along the \( y\) axis of the end of the vector \( \overrightarrow{u}\) .
\( y_{u}=b\) is the distance between the origin and the projection on the \( y\) axis along the \( x\) axis of the end of the vector \( \overrightarrow{u}\) .
Proof
It is the definition of the coordinates of a vector, combined with the fact that both its abscissa and ordinate are positive.
Theorem 2
For a vector \( \overrightarrow{u}=\begin{bmatrix}a\\ b\end{bmatrix}\) , where \( (a,b)\in\mathbb{R}_-^*\times\mathbb{R}_+^*\) are real numbers such that \( a<0\) and \( b>0\) , its abscissa \( x_{u}\) and its ordinate \( y_{u}\) are the following:
\( x_{u}=a\) is the opposite of the distance between the origin and the projection on the \( x\) axis along the \( y\) axis of the end of the vector \( \overrightarrow{u}\) .
\( y_{u}=b\) is the distance between the origin and the projection on the \( y\) axis along the \( x\) axis of the end of the vector \( \overrightarrow{u}\) .
Proof
It is the definition of the coordinates of a vector, combined with the fact that its abscissa is negative and its ordinate is positive.
Theorem 3
For a vector \( \overrightarrow{u}=\begin{bmatrix}a\\ b\end{bmatrix}\) , where \( (a,b)\in(\mathbb{R}_-^*)^2\) are real numbers such that \( a<0\) and \( b<0\) , its abscissa \( x_{u}\) and its ordinate \( y_{u}\) are the following:
\( x_{u}=a\) is the opposite of the distance between the origin and the projection on the \( x\) axis along the \( y\) axis of the end of the vector \( \overrightarrow{u}\) .
\( y_{u}=b\) is the opposite of the distance between the origin and the projection on the \( y\) axis along the \( x\) axis of the end of the vector \( \overrightarrow{u}\) .
Proof
It is the definition of the coordinates of a vector, combined with the fact that both its abscissa and ordinate are negative.
Theorem 4
For a vector \( \overrightarrow{u}=\begin{bmatrix}a\\ b\end{bmatrix}\) , where \( (a,b)\in\mathbb{R}_+^*\times\mathbb{R}_-^*\) are real numbers such that \( a>0\) and \( b<0\) , its abscissa \( x_{u}\) and its ordinate \( y_{u}\) are the following:
\( x_{u}=a\) is the distance between the origin and the projection on the \( x\) axis along the \( y\) axis of the end of the vector \( \overrightarrow{u}\) .
\( y_{u}=b\) is the opposite of the distance between the origin and the projection on the \( y\) axis along the \( x\) axis of the end of the vector \( \overrightarrow{u}\) .
Proof
It is the definition of the coordinates of a vector, combined with the fact that its abscissa is positive and its ordinate is negative.
Theorem 5
For a vector \( \overrightarrow{u}=\begin{bmatrix}a\\ 0\end{bmatrix}\) , where \( a\in\mathbb{R}_+^*\) is a real number such that \( a>0\) , its abscissa \( x_{u}\) and its ordinate \( y_{u}\) are the following:
\( x_{u}=a\) is the distance between the origin and the end of the vector \( \overrightarrow{u}\) , that is the length of the vector \( \overrightarrow{u}\) .
\( y_{u}\) is equal to \( 0\) .
Proof
It is the definition of the coordinates of a vector aligned with the \( x\) axis, combined with the fact that its abscissa is positive.
Theorem 6
For a vector \( \overrightarrow{u}=\begin{bmatrix}0\\ b\end{bmatrix}\) , where \( b\in\mathbb{R}_+^*\) is a real number such that \( b>0\) , its abscissa \( x_{u}\) and its ordinate \( y_{u}\) are the following:
\( x_{u}\) is equal to \( 0\) .
\( y_{u}=b\) is the distance between the origin and the end of the vector \( \overrightarrow{u}\) , that is the length of the vector \( \overrightarrow{u}\) .
Proof
It is the definition of the coordinates of a vector aligned with the \( y\) axis, combined with the fact that its ordinate is positive.
Theorem 7
For a vector \( \overrightarrow{u}=\begin{bmatrix}a\\ 0\end{bmatrix}\) , where \( a\in\mathbb{R}_-^*\) is a real number such that \( a<0\) , its abscissa \( x_{u}\) and its ordinate \( y_{u}\) are the following:
\( x_{u}=a\) is the opposite of the distance between the origin and the end of the vector \( \overrightarrow{u}\) , that is the opposite of the length of the vector \( \overrightarrow{u}\) .
\( y_{u}\) is equal to \( 0\) .
Proof
It is the definition of the coordinates of a vector aligned with the \( x\) axis, combined with the fact that its abscissa is negative.
Theorem 8
For a vector \( \overrightarrow{u}=\begin{bmatrix}0\\ b\end{bmatrix}\) , where \( b\in\mathbb{R}_-^*\) is a real number such that \( b<0\) , its abscissa \( x_{u}\) and its ordinate \( y_{u}\) are the following:
\( x_{u}\) is equal to \( 0\) .
\( y_{u}=b\) is the opposite of the distance between the origin and the end of the vector \( \overrightarrow{u}\) , that is the opposite of the length of the vector \( \overrightarrow{u}\) .
Proof
It is the definition of the coordinates of a vector aligned with the \( y\) axis, combined with the fact that its ordinate is negative.
Theorem 9
For the null vector \( \overrightarrow{u}=\overrightarrow{0}=\begin{bmatrix}0\\ 0\end{bmatrix}\) , its abscissa \( x_{u}\) and its ordinate \( y_{u}\) are the following:
\( x_{u}\) is equal to \( 0\) .
\( y_{u}\) is equal to \( 0\) as well.
Proof
It is the definition of the coordinates of the null vector.
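Theorems 1 to 9 all express the same fact: the abscissa is the distance from the origin to the projection of the end of the vector on the \( x\) axis, taken with a minus sign when that projection falls to the left of the origin, and similarly for the ordinate. As a quick numeric illustration (a minimal sketch in plain Python; the pair representation and the helper name `signed_coords` are ours, not part of the text):

```python
def signed_coords(u):
    """Abscissa and ordinate of a 2-element column vector u = (a, b),
    recovered as signed distances from the origin to the projections
    of the end of u on the x and y axes (theorems 1 to 9)."""
    a, b = u
    dist_x = abs(a)  # distance from the origin to the projection on the x axis
    dist_y = abs(b)  # distance from the origin to the projection on the y axis
    x_u = dist_x if a > 0 else (-dist_x if a < 0 else 0)
    y_u = dist_y if b > 0 else (-dist_y if b < 0 else 0)
    return (x_u, y_u)

# one representative per quadrant, plus axis-aligned and null cases
for u in [(2, 1), (-2, 1), (-2, -1), (2, -1), (3, 0), (0, -3), (0, 0)]:
    assert signed_coords(u) == u
```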
Definition 1
Assume \( (x,y,z,t)\in\mathbb{R}^4\) are real numbers, and consider the two column vectors with 2 elements, \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) and \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) .
Then we define the sum and the difference of \( \overrightarrow{u}\) and \( \overrightarrow{v}\) the following way:
\( \overrightarrow{u}+\overrightarrow{v}=\begin{bmatrix}x+z\\ y+t\end{bmatrix}\) is the element-by-element sum of \( \overrightarrow{u}\) and \( \overrightarrow{v}\) ,
\( \overrightarrow{u}-\overrightarrow{v}=\begin{bmatrix}x-z\\ y-t\end{bmatrix}\) is the element-by-element difference of \( \overrightarrow{u}\) and \( \overrightarrow{v}\) .
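Definition 1 translates directly into code. A minimal sketch in plain Python, representing a column vector as a pair; the helper names `vadd` and `vsub` are ours, not from the text:

```python
def vadd(u, v):
    """Element-by-element sum of two column vectors with 2 elements."""
    return (u[0] + v[0], u[1] + v[1])

def vsub(u, v):
    """Element-by-element difference of two column vectors with 2 elements."""
    return (u[0] - v[0], u[1] - v[1])

u, v = (1, 2), (3, -4)   # stand-ins for [x, y] and [z, t]
assert vadd(u, v) == (4, -2)
assert vsub(u, v) == (-2, 6)
```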
Theorem 10
Assume \( (x_1,y_1,x_2,y_2,x_3,y_3)\in\mathbb{R}^6\) are real numbers, and consider the three column vectors with 2 elements, \( \overrightarrow{u}_1=\begin{bmatrix}x_1\\ y_1\end{bmatrix}\) , \( \overrightarrow{u}_2=\begin{bmatrix}x_2\\ y_2\end{bmatrix}\) and \( \overrightarrow{u}_3=\begin{bmatrix}x_3\\ y_3\end{bmatrix}\) .
Then the following assertions hold:
\( \overrightarrow{u}_1+\overrightarrow{u}_2=\overrightarrow{u}_2+\overrightarrow{u}_1\) : the addition of vectors is commutative.
\( (\overrightarrow{u}_1+\overrightarrow{u}_2)+\overrightarrow{u}_3=\overrightarrow{u}_1+(\overrightarrow{u}_2+\overrightarrow{u}_3)\) : the addition of vectors is associative.
\( \overrightarrow{u_{1}}+\overrightarrow{0}=\overrightarrow{0}+\overrightarrow{u_{1}} =\overrightarrow{u_{1}}\) : the null vector \( \overrightarrow{0}=\begin{bmatrix}0\\ 0\end{bmatrix}\) is neutral for the addition of vectors.
The opposite \( -\overrightarrow{u_{1}}=\begin{bmatrix}-x_{1}\\ {-y_{1}}\end{bmatrix}\) of \( \overrightarrow{u_{1}}\) is its reciprocal for the addition of vectors: \( \overrightarrow{u_{1}}+(-\overrightarrow{u_{1}}) =(-\overrightarrow{u_{1}})+\overrightarrow{u_{1}}=\overrightarrow{0}\)
Proof
Assume \( (x_1,y_1,x_2,y_2,x_3,y_3)\in\mathbb{R}^6\) are real numbers, and consider the three column vectors with 2 elements, \( \overrightarrow{u}_1=\begin{bmatrix}x_1\\ y_1\end{bmatrix}\) , \( \overrightarrow{u}_2=\begin{bmatrix}x_2\\ y_2\end{bmatrix}\) and \( \overrightarrow{u}_3=\begin{bmatrix}x_3\\ y_3\end{bmatrix}\) .
Commutativity The addition of vectors is commutative because it is so for the addition of real numbers.
Indeed:
\( \overrightarrow{u}_1+\overrightarrow{u}_2 =\begin{bmatrix}x_1+x_2\\ y_1+y_2\end{bmatrix} =\begin{bmatrix}x_2+x_1\\ y_2+y_1\end{bmatrix} =\overrightarrow{u}_2+\overrightarrow{u}_1\) (1)
Associativity The addition of vectors is associative because it is so for the addition of real numbers.
Indeed:
\( (\overrightarrow{u}_1+\overrightarrow{u}_2)+\overrightarrow{u}_3 =\begin{bmatrix}(x_1+x_2)+x_3\\ (y_1+y_2)+y_3\end{bmatrix} =\begin{bmatrix}x_1+(x_2+x_3)\\ y_1+(y_2+y_3)\end{bmatrix} =\overrightarrow{u}_1+(\overrightarrow{u}_2+\overrightarrow{u}_3)\) (2)
Neutral Element The null vector \( \overrightarrow{0}=\begin{bmatrix}0\\ 0\end{bmatrix}\) is neutral for the addition of vectors because \( 0\) is neutral for the addition of real numbers.
Indeed:
\( \overrightarrow{u}_1+\overrightarrow{0} =\begin{bmatrix}x_1+0\\ y_1+0\end{bmatrix} =\begin{bmatrix}x_1\\ y_1\end{bmatrix} =\overrightarrow{u}_1\) (3)
and
\( \overrightarrow{0}+\overrightarrow{u}_1 =\begin{bmatrix}0+x_1\\ 0+y_1\end{bmatrix} =\begin{bmatrix}x_1\\ y_1\end{bmatrix} =\overrightarrow{u}_1\) (4)
Opposite of a Vector The opposite \( -\overrightarrow{u_{1}}=\begin{bmatrix}-x_{1}\\ {-y_{1}}\end{bmatrix}\) of \( \overrightarrow{u_{1}}\) is its reciprocal for the addition of vectors because the opposite of a real number is its reciprocal for the addition of real numbers.
Indeed:
\( \overrightarrow{u}_1+(-\overrightarrow{u}_1) =\begin{bmatrix}x_1+(-x_1)\\ y_1+(-y_1)\end{bmatrix} =\begin{bmatrix}0\\ 0\end{bmatrix} =\overrightarrow{0}\) (5)
and
\( (-\overrightarrow{u}_1)+\overrightarrow{u}_1 =\begin{bmatrix}(-x_1)+x_1\\ (-y_1)+y_1\end{bmatrix} =\begin{bmatrix}0\\ 0\end{bmatrix} =\overrightarrow{0}\) (6)
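The four properties of theorem 10 can also be checked numerically on a few sample vectors (an illustration, not a proof; the helper names are ours):

```python
from itertools import product

def vadd(u, v):
    return (u[0] + v[0], u[1] + v[1])

def vneg(u):
    return (-u[0], -u[1])

ZERO = (0, 0)
sample = [(1, 2), (-3, 0), (5, -7)]

for u1, u2, u3 in product(sample, repeat=3):
    assert vadd(u1, u2) == vadd(u2, u1)                      # commutativity
    assert vadd(vadd(u1, u2), u3) == vadd(u1, vadd(u2, u3))  # associativity
for u1 in sample:
    assert vadd(u1, ZERO) == u1 == vadd(ZERO, u1)            # neutral element
    assert vadd(u1, vneg(u1)) == ZERO == vadd(vneg(u1), u1)  # opposite
```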
Theorem 11
Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) and its opposite \( -\overrightarrow{u}=\begin{bmatrix}-x\\ {-y}\end{bmatrix}\) .
Then the following assertions hold:
\( -\overrightarrow{u}\) is aligned with \( \overrightarrow{u}\) ,
\( -\overrightarrow{u}\) is in the direction opposite to the direction of \( \overrightarrow{u}\) ,
and \( \left\|-\overrightarrow{u}\right\|=\left\|\overrightarrow{u}\right\|\) .
Proof
Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) and its opposite \( -\overrightarrow{u}=\begin{bmatrix}-x\\ {-y}\end{bmatrix}\) .
Then \( -\overrightarrow{u}=\begin{bmatrix}-x\\ -y\end{bmatrix} =\begin{bmatrix}(-1)x\\ (-1)y\end{bmatrix}\) , so that the abscissa and ordinate of \( -\overrightarrow{u}\) are proportional to the abscissa and ordinate of \( \overrightarrow{u}\) .
Consequently, \( -\overrightarrow{u}\) is aligned with \( \overrightarrow{u}\) .
Moreover, the abscissa and ordinate of \( -\overrightarrow{u}\) have signs opposite to those of the abscissa and ordinate of \( \overrightarrow{u}\) respectively.
Consequently, \( -\overrightarrow{u}\) is in the direction opposite to the direction of \( \overrightarrow{u}\) .
Finally, \( \left\|-\overrightarrow{u}\right\| =\sqrt{(-x)^{2}+(-y)^{2}}=\sqrt{x^{2}+y^{2}} =\left\|\overrightarrow{u}\right\|\) .
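The norm computation at the end of the proof can be illustrated numerically; Python's `math.hypot` computes \( \sqrt{x^{2}+y^{2}}\) :

```python
import math

def norm(u):
    """Euclidean length of a 2-element column vector."""
    return math.hypot(u[0], u[1])

u = (3, -4)
minus_u = (-u[0], -u[1])
assert norm(u) == 5.0
assert norm(minus_u) == norm(u)   # the opposite vector has the same length
```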
The set \( \mathbb{P}\) of column vectors with two real elements defines the canonical vector plane.
Addition and subtraction are defined in \( \mathbb{P}\) as the element-by-element operations.
Then, because of the different properties of the addition of vectors, \( (\mathbb{P},+)\) is a commutative group.
Two vectors \( \overrightarrow{u}\) and \( \overrightarrow{v}\) in \( \mathbb{P}\) may be represented as “arrows” in the plane, with the end of \( \overrightarrow{u}\) coinciding with the beginning of \( \overrightarrow{v}\) .
We may then build a parallelogram by drawing a copy of \( \overrightarrow{v}\) whose beginning coincides with the beginning of \( \overrightarrow{u}\) , and a copy of \( \overrightarrow{u}\) whose beginning coincides with the end of that copy of \( \overrightarrow{v}\) .
Then the diagonal of the parallelogram starting from the beginnings of the two vectors represents the sum \( \overrightarrow{u}+\overrightarrow{v}=\overrightarrow{v}+\overrightarrow{u}\) of the two vectors.
Proof (of the last assertion)
Assume \( (x,y,z,t)\in\mathbb{R}^4\) are real numbers, and consider the two column vectors with 2 elements, \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) and \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) .
Consider the vector \( \overrightarrow{w}=\begin{bmatrix}r\\ s\end{bmatrix}\) built as in figure 12.
Let’s prove geometrically that \( r=x+z\) , considering different cases of signs of \( x\) and \( z\) .
The proof that \( s=y+t\) is made similarly, with the \( y\) axis instead of the \( x\) axis.
The different cases are the following:
\( x\) or \( z\) equal to \( 0\) .
\( x\) and \( z\) positive.
\( x\) and \( z\) of different signs.
\( x\) and \( z\) negative.
Let’s now prove each case in turn.
Assume that \( x\) or \( z\) is equal to \( 0\) .
As the construction of figure 12 is commutative, we may assume that \( x=0\) .
Assume that \( z=0\) .
If, starting from the origin, we draw end-to-beginning the vertical vectors \( \overrightarrow{u}=\begin{bmatrix}0\\ y\end{bmatrix}\) and \( \overrightarrow{v}=\begin{bmatrix}0\\ t\end{bmatrix}\) , then the resulting vector \( \overrightarrow{w}=\begin{bmatrix}r\\ s\end{bmatrix}\) is vertical as well.
Consequently, \( r=0\) , which is equal to \( x+z\) .
Assume that \( z>0\) .
Starting from the origin, we draw end-to-beginning the vertical vector
\( \overrightarrow{u}=\begin{bmatrix}0\\ y\end{bmatrix}\) and the vector \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) , with \( z>0\) .
Then the distances to the origin of the projections on the \( x\) axis of the ends of the vector \( \overrightarrow{v}\) and the resulting vector \( \overrightarrow{w}\) are both equal to \( z\) .
Moreover, the projection on the \( x\) axis of the end of the vector \( \overrightarrow{w}\) is to the right of the origin, so that \( r=|r|=z\) .
Consequently, \( r=z\) , which is equal to \( x+z\) because \( x=0\) .
Assume that \( z<0\) .
Starting from the origin, we draw end-to-beginning the vertical vector
\( \overrightarrow{u}=\begin{bmatrix}0\\ y\end{bmatrix}\) and the vector \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) , with \( z<0\) .
Then the distances to the origin of the projections on the \( x\) axis of the ends of the vector \( \overrightarrow{v}\) and the resulting vector \( \overrightarrow{w}\) are both equal to \( |z|\) , so that \( |r|=|z|=-z\) .
Moreover, the projection on the \( x\) axis of the end of the vector \( \overrightarrow{w}\) is to the left of the origin, so that \( r=-|r|=-(-z)=z\) .
Consequently, \( r=z\) , which is equal to \( x+z\) because \( x=0\) .
Assume that \( x\) and \( z\) are both positive.
Starting from the origin, we draw end-to-beginning the vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with \( x>0\) and the vector \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) , with \( z>0\) .
Then the distances between the projections on the \( x\) axis of the beginnings and the ends of the different vectors are the following.
For the vector \( \overrightarrow{u}\) , it is \( x\) .
For the vector \( \overrightarrow{v}\) , it is \( z\) .
And for the vector \( \overrightarrow{w}\) , it is \( x+z\) , so that \( |r|=x+z\) .
Moreover, the projection of the end of the vector \( \overrightarrow{w}\) on the \( x\) axis is to the right of the origin, so that \( r=|r|=x+z\) .
Assume that \( x\) and \( z\) are of different signs.
As the construction of figure 12 is commutative, we may assume that \( x>0\) and thus \( z<0\) .
Assume that \( x>|z|\) .
Starting from the origin, we draw end-to-beginning the vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with \( x>0\) and the vector \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) , with \( z<0\) and \( x>|z|\) .
Then the distances between the projections on the \( x\) axis of the beginnings and the ends of the different vectors are the following.
For the vector \( \overrightarrow{u}\) , it is \( x\) .
For the vector \( \overrightarrow{v}\) , it is \( |z|=-z\) .
And for the vector \( \overrightarrow{w}\) , it is \( x-|z|=x+z\) , so that \( |r|=x+z\) .
Moreover, the projection on the \( x\) axis of the end of the vector \( \overrightarrow{w}\) is to the right of the origin, so that \( r=|r|=x+z\) .
Assume that \( x=|z|\) , so that \( z=-x\) .
Starting from the origin, we draw end-to-beginning the vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with \( x>0\) and the vector \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) , with \( z<0\) and \( z=-x\) .
Then the distances between the projections on the \( x\) axis of the beginnings and the ends of the vectors \( \overrightarrow{u}\) and \( \overrightarrow{v}\) are both equal to \( x=|z|=-z\) .
Moreover, the vector \( \overrightarrow{w}\) is along the \( y\) axis, so that \( r=0=x+z\) .
Consequently, \( r=x+z\) .
Assume that \( x<|z|\) .
Starting from the origin, we draw end-to-beginning the vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with \( x>0\) and the vector \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) , with \( z<0\) and \( x<|z|\) .
Then the distances between the projections on the \( x\) axis of the beginnings and the ends of the different vectors are the following.
For the vector \( \overrightarrow{u}\) , it is \( x\) .
For the vector \( \overrightarrow{v}\) , it is \( |z|=-z\) .
And for the vector \( \overrightarrow{w}\) , it is \( |z|-x=-(x+z)\) , so that \( |r|=-(x+z)\) .
Moreover, the projection on the \( x\) axis of the end of the vector \( \overrightarrow{w}\) is to the left of the origin, so that \( r=-|r|=x+z\) .
Assume that \( x\) and \( z\) are both negative.
Starting from the origin, we draw end-to-beginning the vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with \( x<0\) and the vector \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) , with \( z<0\) .
Then the distances between the projections on the \( x\) axis of the beginnings and the ends of the different vectors are the following.
For the vector \( \overrightarrow{u}\) , it is \( |x|=-x\) .
For the vector \( \overrightarrow{v}\) , it is \( |z|=-z\) .
And for the vector \( \overrightarrow{w}\) , it is \( |x|+|z|=-(x+z)\) , so that \( |r|=-(x+z)\) .
Moreover, the projection on the \( x\) axis of the end of the vector \( \overrightarrow{w}\) is to the left of the origin, so that \( r=-|r|=x+z\) .
Theorem 12
Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements.
Then the following assertion holds:
\( \overrightarrow{u}-\overrightarrow{v}=\overrightarrow{u}+(-\overrightarrow{v})\) (7)
Proof
Subtracting a vector amounts to adding its opposite, because subtracting a real number amounts to adding its opposite.
Indeed, if we assume \( (x,y,z,t)\in\mathbb{R}^4\) are real numbers, and if we consider the two column vectors with 2 elements, \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) and \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) , then:
\( \overrightarrow{u}-\overrightarrow{v} =\begin{bmatrix}x-z\\ y-t\end{bmatrix} =\begin{bmatrix}x+(-z)\\ y+(-t)\end{bmatrix} =\overrightarrow{u}+(-\overrightarrow{v})\) (8)
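The identity of theorem 12 can be checked element by element in code (a minimal sketch in plain Python; the helper names are ours):

```python
def vadd(u, v):
    return (u[0] + v[0], u[1] + v[1])

def vsub(u, v):
    return (u[0] - v[0], u[1] - v[1])

def vneg(u):
    return (-u[0], -u[1])

u, v = (5, -2), (3, 7)
# subtracting v is the same as adding its opposite -v
assert vsub(u, v) == vadd(u, vneg(v)) == (2, -9)
```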
Theorem 13
Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements.
Then the following assertions hold:
\( (\overrightarrow{u}+\overrightarrow{v})-\overrightarrow{v}=\overrightarrow{u}\)
\( (\overrightarrow{u}-\overrightarrow{v})+\overrightarrow{v}=\overrightarrow{u}\)
Corollary 1
Assume \( (\overrightarrow{u},\overrightarrow{v},\overrightarrow{w})\in\mathbb{P}^3\) are column vectors with two real elements.
Then the following equivalences hold:
\( \overrightarrow{w}=\overrightarrow{u}+\overrightarrow{v} \Leftrightarrow \overrightarrow{u}=\overrightarrow{w}-\overrightarrow{v}\)
\( \overrightarrow{w}=\overrightarrow{u}-\overrightarrow{v} \Leftrightarrow \overrightarrow{u}=\overrightarrow{w}+\overrightarrow{v}\)
\( \overrightarrow{w}=\overrightarrow{v}-\overrightarrow{u} \Leftrightarrow \overrightarrow{u}=\overrightarrow{v}-\overrightarrow{w}\)
Proof (of theorem 13)
Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements.
Then, because of the following facts:
subtracting a vector is adding its opposite,
the addition of vectors is associative,
the opposite of a vector is its reciprocal for the addition of vectors,
and the null vector is neutral for the addition of vectors,
the following calculations may be performed:
\( (\overrightarrow{u}+\overrightarrow{v})-\overrightarrow{v} =(\overrightarrow{u}+\overrightarrow{v})+(-\overrightarrow{v}) =\overrightarrow{u}+(\overrightarrow{v}+(-\overrightarrow{v})) =\overrightarrow{u}+\overrightarrow{0} =\overrightarrow{u}\)
\( (\overrightarrow{u}-\overrightarrow{v})+\overrightarrow{v} =(\overrightarrow{u}+(-\overrightarrow{v}))+\overrightarrow{v} =\overrightarrow{u}+((-\overrightarrow{v})+\overrightarrow{v}) =\overrightarrow{u}+\overrightarrow{0} =\overrightarrow{u}\)
Proof (of corollary 1)
Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements.
If we subtract \( \overrightarrow{v}\) from both sides of the equality \( \overrightarrow{w}=\overrightarrow{u}+\overrightarrow{v}\) , we see that it is equivalent to: \( \overrightarrow{w}-\overrightarrow{v}=(\overrightarrow{u}+\overrightarrow{v})-\overrightarrow{v}=\overrightarrow{u}\) , because of theorem 13.
If we add \( \overrightarrow{v}\) to both sides of the equality \( \overrightarrow{w}=\overrightarrow{u}-\overrightarrow{v}\) , we see that it is equivalent to: \( \overrightarrow{w}+\overrightarrow{v}=(\overrightarrow{u}-\overrightarrow{v})+\overrightarrow{v}=\overrightarrow{u}\) , because of theorem 13.
If we add \( \overrightarrow{u}\) to both sides of the equality \( \overrightarrow{w}=\overrightarrow{v}-\overrightarrow{u}\) , we see that it is equivalent to: \( \overrightarrow{w}+\overrightarrow{u}=(\overrightarrow{v}-\overrightarrow{u})+\overrightarrow{u}=\overrightarrow{v}\) , because of theorem 13.
Because the addition of vectors is commutative, the last equality is equivalent to: \( \overrightarrow{v}=\overrightarrow{u}+\overrightarrow{w}\) .
And if we exchange the roles of \( \overrightarrow{v}\) and \( \overrightarrow{w}\) in the first item, we see that the last equality is equivalent to: \( \overrightarrow{u}=\overrightarrow{v}-\overrightarrow{w}\) .
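The cancellation laws of theorem 13 and the solving rules of corollary 1 can be illustrated on a concrete pair of vectors (plain Python sketch; the helper names are ours):

```python
def vadd(u, v):
    return (u[0] + v[0], u[1] + v[1])

def vsub(u, v):
    return (u[0] - v[0], u[1] - v[1])

u, v = (4, -1), (2, 3)

# theorem 13: adding and then subtracting the same vector cancels out
assert vsub(vadd(u, v), v) == u
assert vadd(vsub(u, v), v) == u

# corollary 1: w = u + v can be solved for u as u = w - v
w = vadd(u, v)
assert vsub(w, v) == u
```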
Definition 2
Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.
Assume \( \lambda\in\mathbb{R}\) is a real number.
Then we define the multiplication and the division of the vector \( \overrightarrow{u}\) by the scalar \( \lambda\) the following way:
\( \lambda\overrightarrow{u}=\begin{bmatrix}\lambda x\\ \lambda y\end{bmatrix}\) is the element-by-element product of \( \overrightarrow{u}\) by \( \lambda\) ,
and, provided \( \lambda\neq 0\) , \( \frac{\overrightarrow{u}}{\lambda}=\begin{bmatrix}\frac{x}{\lambda}\\ \frac{y}{\lambda}\end{bmatrix}\) is the element-by-element quotient of \( \overrightarrow{u}\) by \( \lambda\) .
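Definition 2 can be mirrored in code in the same style as the sum and difference (a minimal sketch; the helper names `smul` and `sdiv` are ours, and the guard against a zero scalar reflects the "provided \( \lambda\neq 0\) " condition):

```python
def smul(lam, u):
    """Element-by-element product of a 2-element column vector by a scalar."""
    return (lam * u[0], lam * u[1])

def sdiv(u, lam):
    """Element-by-element quotient of a vector by a nonzero scalar."""
    if lam == 0:
        raise ZeroDivisionError("cannot divide a vector by the scalar 0")
    return (u[0] / lam, u[1] / lam)

u = (6, -4)
assert smul(2, u) == (12, -8)
assert sdiv(u, 2) == (3.0, -2.0)
```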
Theorem 14
Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) .
Then the following assertions hold:
If we multiply the vector \( \overrightarrow{u}\) by the scalar \( 1\) , we obtain the vector \( \overrightarrow{u}\) : \( 1.\overrightarrow{u}=\overrightarrow{u}\) .
If we divide the vector \( \overrightarrow{u}\) by the scalar \( 1\) , we obtain the vector \( \overrightarrow{u}\) : \( \frac{\overrightarrow{u}}{1}=\overrightarrow{u}\) .
Proof
The theorem will be proved without using the fact that the vector is not the null vector.
Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.
Then we may perform the following calculations:
\( 1.\overrightarrow{u}=\begin{bmatrix}1\times x\\ 1\times y\end{bmatrix} =\begin{bmatrix}x\\ y\end{bmatrix}=\overrightarrow{u}\) ,
and \( \frac{\overrightarrow{u}}{1}=\begin{bmatrix}\frac{x}{1}\\ \frac{y}{1}\end{bmatrix} =\begin{bmatrix}x\\ y\end{bmatrix}=\overrightarrow{u}\) .
Theorem 15
Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) .
Then the following assertions hold:
If we multiply the vector \( \overrightarrow{u}\) by the scalar \( -1\) , we obtain the opposite \( -\overrightarrow{u}\) of the vector \( \overrightarrow{u}\) : \( (-1).\overrightarrow{u}=-\overrightarrow{u}\) .
If we divide the vector \( \overrightarrow{u}\) by the scalar \( -1\) , we obtain the opposite \( -\overrightarrow{u}\) of the vector \( \overrightarrow{u}\) : \( \frac{\overrightarrow{u}}{-1}=-\overrightarrow{u}\) .
Proof
The theorem will be proved without using the fact that the vector is not the null vector.
Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.
Then we may perform the following calculations:
\( (-1).\overrightarrow{u}=\begin{bmatrix}(-1)\times x\\ (-1)\times y\end{bmatrix} =\begin{bmatrix}-x\\ {-y}\end{bmatrix}=-\overrightarrow{u}\) ,
and \( \frac{\overrightarrow{u}}{-1}=\begin{bmatrix}\frac{x}{-1}\\ \frac{y}{-1}\end{bmatrix} =\begin{bmatrix}-x\\ {-y}\end{bmatrix}=-\overrightarrow{u}\) .
Theorem 16
Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) , and \( \lambda\in\mathbb{R}_+^*\) is a real number such that \( \lambda>0\) .
Then, if we multiply the vector \( \overrightarrow{u}\) by the scalar \( \lambda\) , we obtain a vector
\( \overrightarrow{v}=\lambda\overrightarrow{u}\) such that:
\( \overrightarrow{v}\) is aligned with \( \overrightarrow{u}\) ,
and \( \overrightarrow{v}\) is in the same direction as \( \overrightarrow{u}\) .
Proof
Assume \( (x,y)\in\mathbb{R}^2\) are real numbers that are not both zero, and consider the nonzero column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.
Assume \( \lambda\in\mathbb{R}_+^*\) is a real number such that \( \lambda>0\) , and consider the column vector \( \overrightarrow{v}=\lambda\overrightarrow{u}\) .
Then \( \overrightarrow{v}=\begin{bmatrix}\lambda x\\ \lambda y\end{bmatrix}\) , so that the abscissa and ordinate of \( \overrightarrow{v}\) are proportional to the abscissa and ordinate of \( \overrightarrow{u}\) .
Consequently, \( \overrightarrow{v}\) is aligned with \( \overrightarrow{u}\) .
Moreover, because \( \lambda>0\) , the abscissa and ordinate of \( \overrightarrow{v}\) have the same signs as the abscissa and ordinate of \( \overrightarrow{u}\) respectively.
Consequently, \( \overrightarrow{v}\) is in the same direction as \( \overrightarrow{u}\) .
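Alignment and direction can also be tested numerically: two vectors are aligned exactly when the determinant \( x_{u}y_{v}-y_{u}x_{v}\) vanishes, and two aligned vectors point the same way when their dot product is positive. This criterion is standard but not introduced in the text, so the sketch below is only an illustration of theorem 16:

```python
def smul(lam, u):
    return (lam * u[0], lam * u[1])

def det(u, v):
    """Determinant of the 2x2 matrix with columns u and v; zero iff aligned."""
    return u[0] * v[1] - u[1] * v[0]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

u = (3, -2)
v = smul(2.5, u)          # multiply by a positive scalar
assert det(u, v) == 0     # v is aligned with u
assert dot(u, v) > 0      # v is in the same direction as u
```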
Theorem 17
Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) , and \( \lambda\in\mathbb{R}_+^*\) is a real number such that \( \lambda>0\) .
Then, if we divide the vector \( \overrightarrow{u}\) by the scalar \( \lambda\) , we obtain a vector \( \overrightarrow{v}=\frac{\overrightarrow{u}}{\lambda}\) such that:
\( \overrightarrow{v}\) is aligned with \( \overrightarrow{u}\) ,
\( \overrightarrow{v}\) is in the same direction as \( \overrightarrow{u}\) ,
and \( \overrightarrow{v}=\frac{1}{\lambda}\overrightarrow{u}\) .
Proof
Assume \( (x,y)\in\mathbb{R}^2\) are real numbers that are not both zero, and consider the nonzero column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.
Assume \( \lambda\in\mathbb{R}_+^*\) is a real number such that \( \lambda>0\) , and consider the column vector \( \overrightarrow{v}=\frac{\overrightarrow{u}}{\lambda}\) .
Then \( \overrightarrow{v} =\begin{bmatrix}\frac{x}{\lambda}\\ \frac{y}{\lambda}\end{bmatrix} =\begin{bmatrix}\frac{1}{\lambda}x\\ \frac{1}{\lambda}y\end{bmatrix} =\frac{1}{\lambda}\overrightarrow{u}\) , which proves the last item of the theorem.
Moreover, as \( \frac{1}{\lambda}>0\) , the first two items derive from theorem 16.
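The key identity of theorem 17, division equals multiplication by the inverse, can be exercised on sample values (an illustrative sketch; `scale` and `divide` are my own helper names):

```python
def scale(l, u):
    return (l * u[0], l * u[1])

def divide(u, l):
    # Componentwise division by a nonzero scalar, as in theorem 17.
    return (u[0] / l, u[1] / l)

u = (6.0, -4.0)
lam = 2.0
# u / lam equals (1/lam) * u.
assert divide(u, lam) == scale(1 / lam, u)
print(divide(u, lam))  # (3.0, -2.0)
```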
Theorem 18
Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) , and \( \lambda\in\mathbb{R}_-^*\) is a real number such that \( \lambda<0\) .
Then, if we multiply the vector \( \overrightarrow{u}\) by the scalar \( \lambda\) , we obtain a vector
\( \overrightarrow{v}=\lambda\overrightarrow{u}\) such that:
\( \overrightarrow{v}\) is aligned with \( \overrightarrow{u}\) ,
\( \overrightarrow{v}\) is in the direction opposite to the direction of \( \overrightarrow{u}\) ,
and \( \overrightarrow{v}=-\left|\lambda\right|\overrightarrow{u}\) is the opposite of the product of \( \overrightarrow{u}\) by the absolute value of \( \lambda\) .
Proof
Assume \( (x,y)\in\mathbb{R}^2\) are real numbers that are not both zero, and consider the nonzero column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.
Assume \( \lambda\in\mathbb{R}_-^*\) is a real number such that \( \lambda<0\) , and consider the column vector \( \overrightarrow{v}=\lambda\overrightarrow{u}\) .
Then \( \overrightarrow{v}=\begin{bmatrix}\lambda x\\ \lambda y\end{bmatrix}\) , so that the abscissa and ordinate of \( \overrightarrow{v}\) are proportional to the abscissa and ordinate of \( \overrightarrow{u}\) .
Consequently, \( \overrightarrow{v}\) is aligned with \( \overrightarrow{u}\) .
Moreover, because \( \lambda<0\) , the abscissa and ordinate of \( \overrightarrow{v}\) have signs opposite to those of the abscissa and ordinate of \( \overrightarrow{u}\) respectively.
Consequently, \( \overrightarrow{v}\) is in the direction opposite to the direction of \( \overrightarrow{u}\) .
Finally, as \( \lambda<0\) , \( \lambda=-\left|\lambda\right|\) is the opposite of the absolute value of \( \lambda\) .
Consequently, the following calculations may be performed:
\( \overrightarrow{v} =\begin{bmatrix}(-\left|\lambda\right|) x\\ (-\left|\lambda\right|) y\end{bmatrix} =\begin{bmatrix}-\left|\lambda\right| x\\ -\left|\lambda\right| y\end{bmatrix} =-\left|\lambda\right|\begin{bmatrix} x\\ y\end{bmatrix} =-\left|\lambda\right|\overrightarrow{u}\)
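The three conclusions for a negative factor can be checked on a concrete example (a sketch with my own helper name `scale` and sample values):

```python
def scale(l, u):
    return (l * u[0], l * u[1])

u = (3.0, -2.0)
lam = -1.5
v = scale(lam, u)

# Still aligned with u (vanishing cross-determinant) ...
assert u[0] * v[1] - u[1] * v[0] == 0
# ... but each coordinate changed sign relative to u (opposite direction),
assert all(a * b < 0 for a, b in zip(u, v))
# ... and v equals the opposite of |lam| * u.
assert v == scale(-abs(lam), u)
print(v)  # (-4.5, 3.0)
```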
Theorem 19
Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) , and \( \lambda\in\mathbb{R}_-^*\) is a real number such that \( \lambda<0\) .
Then, if we divide the vector \( \overrightarrow{u}\) by the scalar \( \lambda\) , we obtain a vector \( \overrightarrow{v}=\frac{\overrightarrow{u}}{\lambda}\) such that:
\( \overrightarrow{v}\) is aligned with \( \overrightarrow{u}\) ,
\( \overrightarrow{v}\) is in the direction opposite to the direction of \( \overrightarrow{u}\) ,
and \( \overrightarrow{v}=-\frac{1}{\left|\lambda\right|}\overrightarrow{u}\) is the opposite of the product of \( \overrightarrow{u}\) by the inverse of the absolute value of \( \lambda\) .
Proof
Assume \( (x,y)\in\mathbb{R}^2\) are real numbers that are not both zero, and consider the nonzero column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.
Assume \( \lambda\in\mathbb{R}_-^*\) is a real number such that \( \lambda<0\) , and consider the column vector \( \overrightarrow{v}=\frac{\overrightarrow{u}}{\lambda}\) .
Then \( \overrightarrow{v} =\begin{bmatrix}\frac{x}{\lambda}\\ \frac{y}{\lambda}\end{bmatrix} =\begin{bmatrix}\frac{1}{\lambda}x\\ \frac{1}{\lambda}y\end{bmatrix} =\frac{1}{\lambda}\overrightarrow{u}\) .
Consequently, as \( \frac{1}{\lambda}<0\) , the first two items derive from theorem 18.
Finally, \( \lambda=-\left|\lambda\right|\) is the opposite of the absolute value of \( \lambda\) , and its inverse \( \frac{1}{\lambda}\) is the opposite \( -\frac{1}{\left|\lambda\right|}\) of the inverse of the absolute value of \( \lambda\) .
Consequently, the following calculations may be performed:
\( \overrightarrow{v} =\begin{bmatrix}-\left(\frac{1}{\left|\lambda\right|}\right)x\\ {-}\left(\frac{1}{\left|\lambda\right|}\right)y\end{bmatrix} =\begin{bmatrix}-\frac{1}{\left|\lambda\right|}x\\ {-}\frac{1}{\left|\lambda\right|}y\end{bmatrix} =-\frac{1}{\left|\lambda\right|}\overrightarrow{u}\)
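Division by a negative scalar can be exercised the same way (an illustrative sketch; `scale` and `divide` are my own names):

```python
def scale(l, u):
    return (l * u[0], l * u[1])

def divide(u, l):
    return (u[0] / l, u[1] / l)

u = (4.0, -6.0)
lam = -2.0
v = divide(u, lam)
# When lam < 0, u/lam equals the opposite of (1/|lam|) * u.
assert v == scale(-1 / abs(lam), u)
print(v)  # (-2.0, 3.0)
```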
Theorem 20
Assume \( \overrightarrow{u}\in\mathbb{P}\) is a column vector with two real elements.
Then the following assertions hold:
If we multiply the vector \( \overrightarrow{u}\) by the scalar \( 0\) , we obtain the null vector \( \overrightarrow{0}\) : \( (0).\overrightarrow{u}=\overrightarrow{0}\) .
The vector \( \overrightarrow{u}\) cannot be divided by the scalar \( 0\) .
Proof
Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.
Then we may perform the following calculations:
\( 0.\overrightarrow{u}=\begin{bmatrix}0\times x\\ 0\times y\end{bmatrix} =\begin{bmatrix}0\\ 0\end{bmatrix}=\overrightarrow{0}\)
The vector \( \overrightarrow{u}\) cannot be divided by the scalar \( 0\) because its coordinates cannot be divided by \( 0\) .
Theorem 21
Assume \( \lambda\in\mathbb{R}\) is a real number.
Then the following assertions hold:
If we multiply the null vector \( \overrightarrow{0}\) by the scalar \( \lambda\) , we obtain the null vector \( \overrightarrow{0}\) : \( \lambda.\overrightarrow{0}=\overrightarrow{0}\) .
If \( \lambda\ne 0\), and if we divide the null vector \( \overrightarrow{0}\) by the non zero scalar \( \lambda\) , we obtain the null vector \( \overrightarrow{0}\) : \( \frac{\overrightarrow{0}}{\lambda}=\overrightarrow{0}\) .
Proof
Assume \( \lambda\in\mathbb{R}\) is a real number.
Then we may perform the following calculations:
\( \lambda.\overrightarrow{0}=\begin{bmatrix}\lambda\times 0\\ \lambda\times 0\end{bmatrix} =\begin{bmatrix}0\\ 0\end{bmatrix}=\overrightarrow{0}\) ,
Assume \( \lambda\ne 0\).
Then \( \frac{\overrightarrow{0}}{\lambda}=\begin{bmatrix}\frac{0}{\lambda}\\ \frac{0}{\lambda}\end{bmatrix} =\begin{bmatrix}0\\ 0\end{bmatrix}=\overrightarrow{0}\) .
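Theorem 21 can likewise be checked on a few sample factors (an illustrative sketch; helper names are my own):

```python
def scale(l, u):
    return (l * u[0], l * u[1])

def divide(u, l):
    return (u[0] / l, u[1] / l)

zero = (0.0, 0.0)
# lam * 0vec = 0vec for every lam, including lam = 0.
for lam in (3.0, -1.5, 0.0):
    assert scale(lam, zero) == zero
# 0vec / lam = 0vec for any lam != 0.
assert divide(zero, 4.0) == zero
print("theorem 21 checked")
```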
Theorem 22
Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.
Consider the canonical base of the vector plane \( \mathbb{P}\) : \( \overrightarrow{i}=\begin{bmatrix}1\\ 0\end{bmatrix}\) and \( \overrightarrow{j}=\begin{bmatrix}0\\ 1\end{bmatrix}\) .
Then the coordinates of \( \overrightarrow{u}\) in the canonical base are:
its abscissa \( x\) ,
and its ordinate \( y\) .
Moreover, the following identity holds:
\( \overrightarrow{u}=x\overrightarrow{i}+y\overrightarrow{j}\) .
Proof
Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.
Consider the canonical base of the vector plane \( \mathbb{P}\) :
\( \overrightarrow{i}=\begin{bmatrix}1\\ 0\end{bmatrix}\) and
\( \overrightarrow{j}=\begin{bmatrix}0\\ 1\end{bmatrix}\) .
Then, as we have seen in paragraph 3, the abscissa of the vector \( \overrightarrow{u}\) in the canonical base is its first element \( x\) and its ordinate is its second element \( y\) .
Moreover, the following calculations may be performed:
\( x\overrightarrow{i}+y\overrightarrow{j} =x\begin{bmatrix}1\\ 0\end{bmatrix}+y\begin{bmatrix}0\\ 1\end{bmatrix} =\begin{bmatrix}x\times 1+y\times 0\\ x\times 0+y\times 1\end{bmatrix} =\begin{bmatrix}x\\ y\end{bmatrix} =\overrightarrow{u}\)
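The decomposition \( \overrightarrow{u}=x\overrightarrow{i}+y\overrightarrow{j}\) can be replayed numerically (a sketch; `scale` and `add` are my own helper names):

```python
def scale(l, u):
    return (l * u[0], l * u[1])

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

# Canonical base of the vector plane.
i, j = (1.0, 0.0), (0.0, 1.0)
x, y = 3.0, -2.0
u = (x, y)
# u = x*i + y*j, as in theorem 22.
assert add(scale(x, i), scale(y, j)) == u
print(add(scale(x, i), scale(y, j)))  # (3.0, -2.0)
```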
Theorem 23
Assume \( \overrightarrow{u}\in\mathbb{P}\) is a column vector with two real elements, and \( \lambda\in\mathbb{R}^*\) is a real number such that \( \lambda\neq 0\) .
Then the following assertions hold:
\( \lambda\frac{\overrightarrow{u}}{\lambda}=\overrightarrow{u}\) ,
and \( \frac{\lambda\overrightarrow{u}}{\lambda}=\overrightarrow{u}\) .
Lemma 1
Assume \( \overrightarrow{u}\in\mathbb{P}\) is a column vector with two real elements, and \( \lambda\in\mathbb{R}^*\) is a real number such that \( \lambda\neq 0\) .
Then the following assertion holds:
\( \frac{\overrightarrow{u}}{\lambda}=\frac{1}{\lambda}\overrightarrow{u}\) .
Lemma 2
Assume \( \overrightarrow{u}\in\mathbb{P}\) is a column vector with two real elements, and \( (\alpha,\beta)\in\mathbb{R}^{2}\) are real numbers.
Then the following associativity property holds:
\( \alpha(\beta\overrightarrow{u})=(\alpha\beta)\overrightarrow{u}\)
Corollary 2
Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements, and \( \lambda\in\mathbb{R}^*\) is a real number such that \( \lambda\neq 0\) .
Then the following equivalences hold:
\( \overrightarrow{v}=\lambda\overrightarrow{u} \Leftrightarrow \overrightarrow{u}=\frac{\overrightarrow{v}}{\lambda}\) ,
and \( \overrightarrow{v}=\frac{\overrightarrow{u}}{\lambda} \Leftrightarrow \overrightarrow{u}=\lambda\overrightarrow{v}\)
Proof (of the lemma 1)
Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector
\( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.
Assume \( \lambda\in\mathbb{R}^*\) is a real number such that \( \lambda\neq 0\) .
Then we may perform the following calculations:
\( \frac{\overrightarrow{u}}{\lambda} =\begin{bmatrix}\frac{x}{\lambda}\\ \frac{y}{\lambda}\end{bmatrix} =\begin{bmatrix}\frac{1}{\lambda}x\\ \frac{1}{\lambda}y\end{bmatrix} =\frac{1}{\lambda}\begin{bmatrix}x\\ y\end{bmatrix} =\frac{1}{\lambda}\overrightarrow{u}\)
Proof (of the lemma 2)
The associativity property is a consequence of the associativity of the multiplication of real numbers.
Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector
\( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.
Assume \( (\alpha,\beta)\in\mathbb{R}^{2}\) are real numbers.
Then we may perform the following calculations:
\( \alpha(\beta\overrightarrow{u}) =\alpha\begin{bmatrix}\beta x\\ \beta y\end{bmatrix} =\begin{bmatrix}\alpha(\beta x)\\ \alpha(\beta y)\end{bmatrix} =\begin{bmatrix}(\alpha\beta) x\\ (\alpha\beta) y\end{bmatrix} =(\alpha\beta)\begin{bmatrix} x\\ y\end{bmatrix} =(\alpha\beta)\overrightarrow{u}\)
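Lemma 2's associativity can be confirmed on sample values (an illustrative sketch; `scale` is my own helper name):

```python
def scale(l, u):
    return (l * u[0], l * u[1])

a, b = 2.0, -3.0
u = (4.0, 5.0)
# Associativity (lemma 2): alpha * (beta * u) == (alpha * beta) * u.
assert scale(a, scale(b, u)) == scale(a * b, u)
print(scale(a * b, u))  # (-24.0, -30.0)
```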
Proof (of the theorem 23)
Assume \( \overrightarrow{u}\in\mathbb{P}\) is a column vector with two real elements,
and \( \lambda\in\mathbb{R}^*\) is a real number such that \( \lambda\neq 0\) .
Then we may apply the lemmas 1 and 2 in the following calculations:
\( \lambda\frac{\overrightarrow{u}}{\lambda} =\lambda\left( \frac{1}{\lambda}\overrightarrow{u} \right) =\left( \lambda\frac{1}{\lambda} \right)\overrightarrow{u} =1.\overrightarrow{u} =\overrightarrow{u}\) ,
and \( \frac{\lambda\overrightarrow{u}}{\lambda} =\frac{1}{\lambda} (\lambda\overrightarrow{u}) =\left( \frac{1}{\lambda} \lambda \right)\overrightarrow{u} =1.\overrightarrow{u} =\overrightarrow{u}\) .
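Both identities of theorem 23 can be exercised numerically (a sketch with my own helper names; the factor is a power of two so the floating-point check is exact):

```python
def scale(l, u):
    return (l * u[0], l * u[1])

def divide(u, l):
    return (u[0] / l, u[1] / l)

u = (3.0, -2.0)
lam = 4.0
assert scale(lam, divide(u, lam)) == u   # lam * (u/lam) == u
assert divide(scale(lam, u), lam) == u   # (lam*u)/lam == u
print("theorem 23 checked")
```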
Proof (of the corollary 2)
Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements,
and \( \lambda\in\mathbb{R}^*\) is a real number such that \( \lambda\neq 0\) .
Then, if we divide by \( \lambda\) the two sides of the equality \( \overrightarrow{v}=\lambda\overrightarrow{u}\) ,
we see that it is equivalent to:
\( \frac{\overrightarrow{v}}{\lambda}=\frac{\lambda\overrightarrow{u}}{\lambda}=\overrightarrow{u}\) .
And if we multiply by \( \lambda\) the two sides of the equality \( \overrightarrow{v}=\frac{\overrightarrow{u}}{\lambda}\) , we see that it is equivalent to: \( \lambda\overrightarrow{v}=\lambda\frac{\overrightarrow{u}}{\lambda}=\overrightarrow{u}\) .
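The two equivalences of corollary 2 can be checked on sample values (an illustrative sketch; helper names are my own, and the factor is a power of two so the checks are exact):

```python
def scale(l, u):
    return (l * u[0], l * u[1])

def divide(u, l):
    return (u[0] / l, u[1] / l)

u = (3.0, -2.0)
lam = 2.0
# v = lam*u  <=>  u = v/lam.
v = scale(lam, u)
assert divide(v, lam) == u
# w = u/lam  <=>  u = lam*w.
w = divide(u, lam)
assert scale(lam, w) == u
print("corollary 2 checked")
```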
We shall now mix the addition of vectors and the multiplication of vectors by scalars to build a new kind of algebraic structure, the structure of vector space.
Theorem 24
Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements, and \( (\alpha,\beta)\in\mathbb{R}^2\) are real numbers.
Then the following assertions hold:
First distributivity law: \( \alpha(\overrightarrow{u}+\overrightarrow{v})=\alpha\overrightarrow{u}+\alpha\overrightarrow{v}\)
Second distributivity law: \( (\alpha+\beta)\overrightarrow{u}=\alpha\overrightarrow{u}+\beta\overrightarrow{u}\)
Associativity law: \( \alpha(\beta\overrightarrow{u})=(\alpha\beta)\overrightarrow{u}\)
Proof
Assume \( (x,y,z,t)\in\mathbb{R}^4\) are real numbers, and consider the two column vectors with 2 elements, \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) and \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) .
Assume \( (\alpha,\beta)\in\mathbb{R}^2\) are real numbers.
First distributivity law
The first distributivity law is a consequence of the
distributivity of the multiplication on the addition in
\( \mathbb{R}\) .
Indeed, we may perform the following calculations:
\( \alpha(\overrightarrow{u}+\overrightarrow{v})
=\alpha\left(\begin{bmatrix}x\\ y\end{bmatrix}+\begin{bmatrix}z\\ t\end{bmatrix}\right)
=\alpha\begin{bmatrix}x+z\\ y+t\end{bmatrix}
=\begin{bmatrix}\alpha(x+z)\\ \alpha(y+t)\end{bmatrix}
=\begin{bmatrix}\alpha x+\alpha z\\ \alpha y+\alpha t\end{bmatrix}\)
\( =\begin{bmatrix}\alpha x\\ \alpha y\end{bmatrix}+\begin{bmatrix}\alpha z\\ \alpha t\end{bmatrix} =\alpha\begin{bmatrix} x\\ y\end{bmatrix}+\alpha\begin{bmatrix}z\\ t\end{bmatrix} =\alpha\overrightarrow{u}+\alpha\overrightarrow{v}\)
Second distributivity law
The second distributivity law is a consequence of the
distributivity of the multiplication on the addition in
\( \mathbb{R}\) as well.
Indeed, we may perform the following calculations:
\( (\alpha+\beta)\overrightarrow{u}
=(\alpha+\beta)\begin{bmatrix}x\\
y\end{bmatrix}
=\begin{bmatrix}(\alpha+\beta)x\\
(\alpha+\beta)y\end{bmatrix}
=\begin{bmatrix}\alpha x+\beta x\\
\alpha y+\beta y\end{bmatrix}
=\begin{bmatrix}\alpha x\\
\alpha y\end{bmatrix}
+\begin{bmatrix}\beta x\\
\beta y\end{bmatrix}\)
\( =\alpha\begin{bmatrix} x\\ y\end{bmatrix} +\beta \begin{bmatrix} x\\ y\end{bmatrix} =\alpha\overrightarrow{u}+\beta\overrightarrow{u}\)
The associativity law is already stated in the lemma 2.
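The two distributivity laws of theorem 24 can be confirmed on concrete vectors (an illustrative sketch; `scale` and `add` are my own helper names):

```python
def scale(l, u):
    return (l * u[0], l * u[1])

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

u, v = (1.0, 2.0), (3.0, -4.0)
a, b = 2.0, -3.0
# First distributivity law: alpha*(u+v) == alpha*u + alpha*v.
assert scale(a, add(u, v)) == add(scale(a, u), scale(a, v))
# Second distributivity law: (alpha+beta)*u == alpha*u + beta*u.
assert scale(a + b, u) == add(scale(a, u), scale(b, u))
print("theorem 24 checked")
```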
Because of the following properties:
\( (\mathbb{P},+)\) is a commutative group,
the two distributivity laws hold in \( \mathbb{P}\) ,
and the associativity law holds in \( \mathbb{P}\) ,
\( (\mathbb{P},+,\cdot)\) has a structure of vector space.
And because of the following facts:
\( (\mathbb{P},+,\cdot)\) has a structure of vector space,
and the canonical base of \( \mathbb{P}\) is made of \( 2\) vectors,
the vector space is said to be of dimension \( 2\) , or to be a vector plane, which justifies the fact that we call it “the vector plane \( \mathbb{P}\) ”.
Theorem 25
Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements, and \( (\alpha,\beta)\in\mathbb{R}^2\) are real numbers.
Then the following assertions hold:
Signs law: \( (-\alpha)\overrightarrow{u}=\alpha(-\overrightarrow{u})=-\alpha\overrightarrow{u}\)
First distributivity law: \( \alpha(\overrightarrow{u}-\overrightarrow{v})=\alpha\overrightarrow{u}-\alpha\overrightarrow{v}\)
Second distributivity law: \( (\alpha-\beta)\overrightarrow{u}=\alpha\overrightarrow{u}-\beta\overrightarrow{u}\)
Proof
Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements, and \( (\alpha,\beta)\in\mathbb{R}^2\) are real numbers.
Signs law
It is a consequence of the associativity law and of the following facts:
\( -\alpha=(-1)\times\alpha=\alpha\times(-1)\) ,
\( -\overrightarrow{u}=(-1)\overrightarrow{u}\) ,
and \( -\alpha\overrightarrow{u}=(-1)(\alpha\overrightarrow{u})\) .
Indeed, we may perform the following calculations:
\( (-\alpha)\overrightarrow{u}
=(\alpha\times(-1))\overrightarrow{u}
=\alpha((-1)\overrightarrow{u})
=\alpha(-\overrightarrow{u})\) ,
and
\( (-\alpha)\overrightarrow{u} =((-1)\times\alpha)\overrightarrow{u} =(-1)(\alpha\overrightarrow{u}) =-\alpha\overrightarrow{u}\) .
First distributivity law
The first distributivity law is a consequence of the signs law and of the fact that subtracting a vector is adding its opposite.
Indeed, we may perform the following calculations:
\( \alpha(\overrightarrow{u}-\overrightarrow{v}) =\alpha(\overrightarrow{u}+(-\overrightarrow{v})) =\alpha\overrightarrow{u}+\alpha(-\overrightarrow{v}) =\alpha\overrightarrow{u}+(-\alpha\overrightarrow{v}) =\alpha\overrightarrow{u}-\alpha\overrightarrow{v}\)
Second distributivity law
The second distributivity law is a consequence of the signs law and of the fact that subtracting a real number is adding its opposite.
Indeed, we may perform the following calculations:
\( (\alpha-\beta)\overrightarrow{u} =(\alpha+(-\beta))\overrightarrow{u} =\alpha\overrightarrow{u}+(-\beta)\overrightarrow{u} =\alpha\overrightarrow{u}+(-\beta\overrightarrow{u}) =\alpha\overrightarrow{u}-\beta\overrightarrow{u}\)
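The signs law and the two subtraction distributivity laws of theorem 25 can be verified on sample data (an illustrative sketch; `scale`, `sub`, and `neg` are my own helper names):

```python
def scale(l, u):
    return (l * u[0], l * u[1])

def sub(u, v):
    return (u[0] - v[0], u[1] - v[1])

def neg(u):
    return (-u[0], -u[1])

u, v = (1.0, 2.0), (3.0, -4.0)
a, b = 2.0, 5.0
# Signs law: (-a)*u == a*(-u) == -(a*u).
assert scale(-a, u) == scale(a, neg(u)) == neg(scale(a, u))
# Distributivity over vector and over scalar subtraction.
assert scale(a, sub(u, v)) == sub(scale(a, u), scale(a, v))
assert scale(a - b, u) == sub(scale(a, u), scale(b, u))
print("theorem 25 checked")
```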
Theorem 26
Assume \( \lambda\in\mathbb{R}\) is a real number, and consider the homothety of factor \( \lambda\) in \( \mathbb{P}\) :
\( h_{\lambda}:\overrightarrow{u}\in\mathbb{P}\mapsto\lambda\overrightarrow{u}\in\mathbb{P}\)  (9)
Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\)
are column vectors with two real elements, and
\( \alpha\in\mathbb{R}\) is a real number.
Then the following assertions hold:
\( h_{\lambda}(\overrightarrow{u}+\overrightarrow{v}) =h_{\lambda}(\overrightarrow{u})+h_{\lambda}(\overrightarrow{v})\) ,
and \( h_{\lambda}(\alpha \overrightarrow{u})=\alpha h_{\lambda}(\overrightarrow{u})\) .
Because of these two properties, we say that:
The homotheties are linear mappings in \( \mathbb{P}\) .
Proof (of theorem 26)
Assume \( \lambda\in\mathbb{R}\) is a real number, and consider the homothety of factor \( \lambda\) in \( \mathbb{P}\) :
\( h_{\lambda}:\overrightarrow{u}\in\mathbb{P}\mapsto\lambda\overrightarrow{u}\in\mathbb{P}\)  (10)
Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\)
are column vectors with two real elements, and
\( \alpha\in\mathbb{R}\) is a real number.
Then the following calculations may be performed:
Thanks to the first distributivity law:
\( h_{\lambda}(\overrightarrow{u}+\overrightarrow{v}) =\lambda(\overrightarrow{u}+\overrightarrow{v}) =\lambda\overrightarrow{u}+\lambda\overrightarrow{v} =h_{\lambda}(\overrightarrow{u})+h_{\lambda}(\overrightarrow{v})\) .
And thanks to the associativity law and the commutativity of the multiplication of real numbers:
\( h_{\lambda}(\alpha \overrightarrow{u}) =(\lambda\alpha) \overrightarrow{u} =(\alpha\lambda) \overrightarrow{u} =\alpha(\lambda \overrightarrow{u}) =\alpha h_{\lambda}(\overrightarrow{u})\) .
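The linearity of a homothety can be checked by modelling \( h_{\lambda}\) as a Python closure (an illustrative sketch; the helper names `h`, `add`, and `scale` are my own):

```python
def h(lam):
    # Homothety of factor lam in the plane: u -> lam * u.
    return lambda u: (lam * u[0], lam * u[1])

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(a, u):
    return (a * u[0], a * u[1])

h2 = h(2.0)
u, v = (1.0, 2.0), (3.0, -4.0)
a = -3.0
# Linearity: h(u+v) == h(u)+h(v) and h(a*u) == a*h(u).
assert h2(add(u, v)) == add(h2(u), h2(v))
assert h2(scale(a, u)) == scale(a, h2(u))
print("theorem 26 checked")
```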
Theorem 27
Assume \( (\lambda,\mu)\in\mathbb{R}^2\) are real numbers, and consider the
homotheties of factors \( \lambda\) and \( \mu\) in \( \mathbb{P}\) :
\( h_{\lambda}:\overrightarrow{u}\in\mathbb{P}\mapsto\lambda\overrightarrow{u}\in\mathbb{P}\)  (11)
and
\( h_{\mu}:\overrightarrow{u}\in\mathbb{P}\mapsto\mu\overrightarrow{u}\in\mathbb{P}\)  (12)
Assume \( \overrightarrow{u}\in\mathbb{P}\) is a column vector with two real elements.
Then the following assertions hold:
\( h_{\lambda+\mu}(\overrightarrow{u})=h_{\lambda} (\overrightarrow{u})+h_{\mu}(\overrightarrow{u})\) ,
and \( h_{\lambda\mu}(\overrightarrow{u})=h_{\lambda}(h_{\mu}(\overrightarrow{u}))\) .
We say that \( h_{\lambda+\mu}=h_{\lambda} +h_{\mu}\) and \( h_{\lambda\mu}=h_{\lambda}\circ h_{\mu}\) .
And we may deduce from the theorem 27 that:
The set of the homotheties in \( \mathbb{P}\) is, with the addition \( +\) of applications and the composition \( \circ\) of applications, a commutative field isomorphic to the commutative field \( (\mathbb{R},+,\times)\) .
Proof (of theorem 27)
Assume \( (\lambda,\mu)\in\mathbb{R}^2\) are real numbers, and consider the homotheties of factors \( \lambda\) and \( \mu\) in \( \mathbb{P}\) :
\( h_{\lambda}:\overrightarrow{u}\in\mathbb{P}\mapsto\lambda\overrightarrow{u}\in\mathbb{P}\)  (13)
and
\( h_{\mu}:\overrightarrow{u}\in\mathbb{P}\mapsto\mu\overrightarrow{u}\in\mathbb{P}\)  (14)
Assume \( \overrightarrow{u}\in\mathbb{P}\) is a column vector with two real elements.
Then the following calculations may be performed:
Thanks to the second distributivity law:
\( h_{\lambda+\mu}(\overrightarrow{u}) =(\lambda+\mu)\overrightarrow{u} =\lambda\overrightarrow{u}+\mu\overrightarrow{u} =h_{\lambda} (\overrightarrow{u})+h_{\mu}(\overrightarrow{u})\) .
And thanks to the associativity law:
\( h_{\lambda\mu}(\overrightarrow{u}) =(\lambda\mu)\overrightarrow{u} =\lambda(\mu\overrightarrow{u}) =h_{\lambda}(h_{\mu}(\overrightarrow{u}))\) .
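Both identities of theorem 27, homothety addition and composition, can be replayed numerically (an illustrative sketch; `h` and `add` are my own helper names):

```python
def h(lam):
    # Homothety of factor lam: u -> lam * u.
    return lambda u: (lam * u[0], lam * u[1])

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

lam, mu = 2.0, -3.0
u = (1.0, 4.0)
# h_{lam+mu}(u) == h_lam(u) + h_mu(u).
assert h(lam + mu)(u) == add(h(lam)(u), h(mu)(u))
# h_{lam*mu}(u) == (h_lam o h_mu)(u).
assert h(lam * mu)(u) == h(lam)(h(mu)(u))
print("theorem 27 checked")
```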
We have built the canonical vector plane \( \mathbb{P}\) , with (column)
vectors that we can not only draw, but also add and subtract together,
and multiply and divide by scalars, with useful properties that
confer to \( (\mathbb{P},+,\cdot)\) a structure of vector space.
And we discovered our first linear mappings in \( \mathbb{P}\) , the homotheties.