
First published on Saturday, Jul 6, 2024 and last modified on Thursday, Apr 10, 2025

The Canonical Vector Plane \( \mathbb{P}\)

Fabienne Chaplais, Mathedu SAS

1 Introduction

In this paper, we shall introduce a way to draw column vectors with two real elements, which will prove very useful for manipulating these vectors more synthetically, with the notation \( \overrightarrow{u}\) rather than the lengthy matrix notation.

With that notation, we shall develop many useful properties of the element-by-element addition, subtraction, multiplication by a scalar, and division by a nonzero scalar.

2 Let’s Draw some Vectors

2.1 The canonical base in the plane

Figure 1. The Canonical Base in the plane

In the plane, we define an origin and a canonical base as:

  • The origin \( O\) is the null vector \( \overrightarrow{0}=\begin{bmatrix}0\\ 0\end{bmatrix}\) .

  • The \( x\) axis is directed by the vector \( \overrightarrow{i}=\begin{bmatrix}1\\ 0\end{bmatrix}\) (abscissa \( 1\) , ordinate \( 0\) ).

  • The \( y\) axis is directed by the vector \( \overrightarrow{j}=\begin{bmatrix}0\\ 1\end{bmatrix}\) (abscissa \( 0\) , ordinate \( 1\) ).

The canonical base \( (\overrightarrow{i},\overrightarrow{j})\) is made of two column vectors, represented graphically in figure 1.
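As a small computational companion to this definition, the origin and the canonical base can be encoded directly; a minimal sketch, assuming (our choice, not part of the paper) that a column vector is represented as a Python tuple `(abscissa, ordinate)`:

```python
# A column vector with two real elements, encoded as an (abscissa, ordinate) tuple.
O = (0.0, 0.0)  # the origin, i.e. the null vector
i = (1.0, 0.0)  # directs the x axis
j = (0.0, 1.0)  # directs the y axis

def decompose(a, b):
    """Return the vector a*i + b*j, computed element by element."""
    return (a * i[0] + b * j[0], a * i[1] + b * j[1])
```

For instance, `decompose(2.0, -1.0)` rebuilds the vector of abscissa \( 2\) and ordinate \( -1\) from the base vectors.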

2.2 Position of Vectors in the Base

Figure 2. Position of some vectors in the Base

The abscissae and ordinates of the vectors drawn in figure 2 are listed in table 1 below.

Note that the abscissa of a column vector is its first element, and its ordinate is its second element.

Table 1. Abscissae and ordinates of some vectors in the canonical base

Vector | Abscissa | Ordinate | Vector | Abscissa | Ordinate
\( \overrightarrow{0}=\begin{bmatrix}0\\ 0\end{bmatrix}\) | \( 0\) | \( 0\) |  |  |
\( \overrightarrow{i}=\begin{bmatrix}1\\ 0\end{bmatrix}\) | \( 1\) | \( 0\) | \( \overrightarrow{j}=\begin{bmatrix}0\\ 1\end{bmatrix}\) | \( 0\) | \( 1\)
\( \overrightarrow{v}_1=\begin{bmatrix}1\\ 1\end{bmatrix}\) | \( 1\) | \( 1\) | \( \overrightarrow{v}_2=\begin{bmatrix}-1\\ 1\end{bmatrix}\) | \( -1\) | \( 1\)
\( \overrightarrow{v}_3=\begin{bmatrix}-1\\ 0\end{bmatrix}\) | \( -1\) | \( 0\) | \( \overrightarrow{v}_4=\begin{bmatrix}-1\\ {-1}\end{bmatrix}\) | \( -1\) | \( -1\)
\( \overrightarrow{v}_5=\begin{bmatrix}0\\ {-1}\end{bmatrix}\) | \( 0\) | \( -1\) | \( \overrightarrow{v}_6=\begin{bmatrix}1\\ {-1}\end{bmatrix}\) | \( 1\) | \( -1\)
\( \overrightarrow{u}_1=\begin{bmatrix}2\\ 0\end{bmatrix}\) | \( 2\) | \( 0\) | \( \overrightarrow{u}_2=\begin{bmatrix}2\\ 1\end{bmatrix}\) | \( 2\) | \( 1\)
\( \overrightarrow{u}_3=\begin{bmatrix}2\\ 2\end{bmatrix}\) | \( 2\) | \( 2\) | \( \overrightarrow{u}_4=\begin{bmatrix}1\\ 2\end{bmatrix}\) | \( 1\) | \( 2\)
\( \overrightarrow{u}_5=\begin{bmatrix}0\\ 2\end{bmatrix}\) | \( 0\) | \( 2\) | \( \overrightarrow{u}_6=\begin{bmatrix}-1\\ 2\end{bmatrix}\) | \( -1\) | \( 2\)
\( \overrightarrow{u}_7=\begin{bmatrix}-2\\ 2\end{bmatrix}\) | \( -2\) | \( 2\) | \( \overrightarrow{u}_8=\begin{bmatrix}-2\\ 1\end{bmatrix}\) | \( -2\) | \( 1\)
\( \overrightarrow{u}_9=\begin{bmatrix}-2\\ 0\end{bmatrix}\) | \( -2\) | \( 0\) | \( \overrightarrow{u}_{10}=\begin{bmatrix}-2\\ {-1}\end{bmatrix}\) | \( -2\) | \( -1\)
\( \overrightarrow{u}_{11}=\begin{bmatrix}-2\\ {-2}\end{bmatrix}\) | \( -2\) | \( -2\) | \( \overrightarrow{u}_{12}=\begin{bmatrix}-1\\ {-2}\end{bmatrix}\) | \( -1\) | \( -2\)
\( \overrightarrow{u}_{13}=\begin{bmatrix}0\\ {-2}\end{bmatrix}\) | \( 0\) | \( -2\) | \( \overrightarrow{u}_{14}=\begin{bmatrix}1\\ {-2}\end{bmatrix}\) | \( 1\) | \( -2\)
\( \overrightarrow{u}_{15}=\begin{bmatrix}2\\ {-2}\end{bmatrix}\) | \( 2\) | \( -2\) | \( \overrightarrow{u}_{16}=\begin{bmatrix}2\\ {-1}\end{bmatrix}\) | \( 2\) | \( -1\)
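The note above (first element = abscissa, second element = ordinate) is easy to state as code; a sketch with the same hypothetical tuple encoding:

```python
def abscissa(v):
    # The abscissa of a column vector is its first element.
    return v[0]

def ordinate(v):
    # The ordinate of a column vector is its second element.
    return v[1]

# Two sample vectors from table 1.
v2 = (-1, 1)   # the vector v_2
u16 = (2, -1)  # the vector u_16
```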

3 The Generic Vectors in the Plane

We shall give the geometrical meaning of the coordinates, abscissa and ordinate, of a vector in different configurations with respect to the axes.

3.1 Vector in the first quadrant of the plane

Figure 3. Example of vector in the first quadrant of the plane

Theorem 1

For a vector \( \overrightarrow{u}=\begin{bmatrix}a\\ b\end{bmatrix}\) , where \( (a,b)\in(\mathbb{R}_+^*)^2\) are real numbers such that \( a>0\) and \( b>0\) , its abscissa \( x_{u}\) and its ordinate \( y_{u}\) are the following:

  • \( x_{u}=a\) is the distance between the origin and the projection on the \( x\) axis along the \( y\) axis of the end of the vector \( \overrightarrow{u}\) .

  • \( y_{u}=b\) is the distance between the origin and the projection on the \( y\) axis along the \( x\) axis of the end of the vector \( \overrightarrow{u}\) .

Proof

This follows from the definition of the coordinates of a vector, combined with the fact that both its abscissa and its ordinate are positive.

3.2 Vector in the second quadrant of the plane

Figure 4. Example of vector in the second quadrant of the plane

Theorem 2

For a vector \( \overrightarrow{u}=\begin{bmatrix}a\\ b\end{bmatrix}\) , where \( (a,b)\in\mathbb{R}_-^*\times\mathbb{R}_+^*\) are real numbers such that \( a<0\) and \( b>0\) , its abscissa \( x_{u}\) and its ordinate \( y_{u}\) are the following:

  • \( x_{u}=a\) is the opposite of the distance between the origin and the projection on the \( x\) axis along the \( y\) axis of the end of the vector \( \overrightarrow{u}\) .

  • \( y_{u}=b\) is the distance between the origin and the projection on the \( y\) axis along the \( x\) axis of the end of the vector \( \overrightarrow{u}\) .

Proof

This follows from the definition of the coordinates of a vector, combined with the fact that its abscissa is negative and its ordinate is positive.

3.3 Vector in the third quadrant of the plane

Figure 5. Example of vector in the third quadrant of the plane

Theorem 3

For a vector \( \overrightarrow{u}=\begin{bmatrix}a\\ b\end{bmatrix}\) , where \( (a,b)\in(\mathbb{R}_-^*)^2\) are real numbers such that \( a<0\) and \( b<0\) , its abscissa \( x_{u}\) and its ordinate \( y_{u}\) are the following:

  • \( x_{u}=a\) is the opposite of the distance between the origin and the projection on the \( x\) axis along the \( y\) axis of the end of the vector \( \overrightarrow{u}\) .

  • \( y_{u}=b\) is the opposite of the distance between the origin and the projection on the \( y\) axis along the \( x\) axis of the end of the vector \( \overrightarrow{u}\) .

Proof

This follows from the definition of the coordinates of a vector, combined with the fact that both its abscissa and its ordinate are negative.

3.4 Vector in the fourth quadrant of the plane

Figure 6. Example of vector in the fourth quadrant of the plane

Theorem 4

For a vector \( \overrightarrow{u}=\begin{bmatrix}a\\ b\end{bmatrix}\) , where \( (a,b)\in\mathbb{R}_+^*\times\mathbb{R}_-^*\) are real numbers such that \( a>0\) and \( b<0\) , its abscissa \( x_{u}\) and its ordinate \( y_{u}\) are the following:

  • \( x_{u}=a\) is the distance between the origin and the projection on the \( x\) axis along the \( y\) axis of the end of the vector \( \overrightarrow{u}\) .

  • \( y_{u}=b\) is the opposite of the distance between the origin and the projection on the \( y\) axis along the \( x\) axis of the end of the vector \( \overrightarrow{u}\) .

Proof

This follows from the definition of the coordinates of a vector, combined with the fact that its abscissa is positive and its ordinate is negative.

3.5 Vector positively along the \( x\) axis

Figure 7. Example of vector positively along the \( x\) axis

Theorem 5

For a vector \( \overrightarrow{u}=\begin{bmatrix}a\\ 0\end{bmatrix}\) , where \( a\in\mathbb{R}_+^*\) is a real number such that \( a>0\) , its abscissa \( x_{u}\) and its ordinate \( y_{u}\) are the following:

  • \( x_{u}=a\) is the distance between the origin and the end of the vector \( \overrightarrow{u}\) , that is the length of the vector \( \overrightarrow{u}\) .

  • \( y_{u}\) is equal to \( 0\) .

Proof

This follows from the definition of the coordinates of a vector aligned with the \( x\) axis, combined with the fact that its abscissa is positive.

3.6 Vector positively along the \( y\) axis

Figure 8. Example of vector positively along the \( y\) axis

Theorem 6

For a vector \( \overrightarrow{u}=\begin{bmatrix}0\\ b\end{bmatrix}\) , where \( b\in\mathbb{R}_+^*\) is a real number such that \( b>0\) , its abscissa \( x_{u}\) and its ordinate \( y_{u}\) are the following:

  • \( x_{u}\) is equal to \( 0\) .

  • \( y_{u}=b\) is the distance between the origin and the end of the vector \( \overrightarrow{u}\) , that is, the length of the vector \( \overrightarrow{u}\) .

Proof

This follows from the definition of the coordinates of a vector aligned with the \( y\) axis, combined with the fact that its ordinate is positive.

3.7 Vector negatively along the \( x\) axis

Figure 9. Example of vector negatively along the \( x\) axis

Theorem 7

For a vector \( \overrightarrow{u}=\begin{bmatrix}a\\ 0\end{bmatrix}\) , where \( a\in\mathbb{R}_-^*\) is a real number such that \( a<0\) , its abscissa \( x_{u}\) and its ordinate \( y_{u}\) are the following:

  • \( x_{u}=a\) is the opposite of the distance between the origin and the end of the vector \( \overrightarrow{u}\) , that is the opposite of the length of the vector \( \overrightarrow{u}\) .

  • \( y_{u}\) is equal to \( 0\) .

Proof

This follows from the definition of the coordinates of a vector aligned with the \( x\) axis, combined with the fact that its abscissa is negative.

3.8 Vector negatively along the \( y\) axis

Figure 10. Example of vector negatively along the \( y\) axis

Theorem 8

For a vector \( \overrightarrow{u}=\begin{bmatrix}0\\ b\end{bmatrix}\) , where \( b\in\mathbb{R}_-^*\) is a real number such that \( b<0\) , its abscissa \( x_{u}\) and its ordinate \( y_{u}\) are the following:

  • \( x_{u}\) is equal to \( 0\) .

  • \( y_{u}=b\) is the opposite of the distance between the origin and the end of the vector \( \overrightarrow{u}\) , that is the opposite of the length of the vector \( \overrightarrow{u}\) .

Proof

This follows from the definition of the coordinates of a vector aligned with the \( y\) axis, combined with the fact that its ordinate is negative.

3.9 The null vector

Theorem 9

For the null vector \( \overrightarrow{u}=\overrightarrow{0}=\begin{bmatrix}0\\ 0\end{bmatrix}\) , its abscissa \( x_{u}\) and its ordinate \( y_{u}\) are the following:

  • \( x_{u}\) is equal to \( 0\) .

  • \( y_{u}\) is equal to \( 0\) as well.

Proof

This follows from the definition of the coordinates of the null vector.

4 Add and Subtract Two Vectors

4.1 The addition and subtraction element by element

Definition 1

Assume \( (x,y,z,t)\in\mathbb{R}^4\) are real numbers, and consider the two column vectors with 2 elements, \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) and \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) .

Then we define the sum and the difference of \( \overrightarrow{u}\) and \( \overrightarrow{v}\) the following way:

  • \( \overrightarrow{u}+\overrightarrow{v}=\begin{bmatrix}x+z\\ y+t\end{bmatrix}\) is the sum element by element of \( \overrightarrow{u}\) and \( \overrightarrow{v}\) ,

  • \( \overrightarrow{u}-\overrightarrow{v}=\begin{bmatrix}x-z\\ y-t\end{bmatrix}\) is the difference element by element of \( \overrightarrow{u}\) and \( \overrightarrow{v}\) .
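Definition 1 translates directly into code; a sketch (the tuple encoding and the helper names `vadd`, `vsub` are ours):

```python
def vadd(u, v):
    # Sum element by element.
    return (u[0] + v[0], u[1] + v[1])

def vsub(u, v):
    # Difference element by element.
    return (u[0] - v[0], u[1] - v[1])
```

For example, `vadd((1, 2), (3, -5))` gives `(4, -3)`.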

4.2 Properties of the addition of vectors

Theorem 10

Assume \( (x_1,y_1,x_2,y_2,x_3,y_3)\in\mathbb{R}^6\) are real numbers, and consider the three column vectors with 2 elements, \( \overrightarrow{u}_1=\begin{bmatrix}x_1\\ y_1\end{bmatrix}\) , \( \overrightarrow{u}_2=\begin{bmatrix}x_2\\ y_2\end{bmatrix}\) and \( \overrightarrow{u}_3=\begin{bmatrix}x_3\\ y_3\end{bmatrix}\) .

Then the following assertions hold:

  • \( \overrightarrow{u}_1+\overrightarrow{u}_2=\overrightarrow{u}_2+\overrightarrow{u}_1\) : the addition of vectors is commutative.

  • \( (\overrightarrow{u}_1+\overrightarrow{u}_2)+\overrightarrow{u}_3=\overrightarrow{u}_1+(\overrightarrow{u}_2+\overrightarrow{u}_3)\) : the addition of vectors is associative.

  • \( \overrightarrow{u_{1}}+\overrightarrow{0}=\overrightarrow{0}+\overrightarrow{u_{1}} =\overrightarrow{u_{1}}\) : the null vector \( \overrightarrow{0}=\begin{bmatrix}0\\ 0\end{bmatrix}\) is neutral for the addition of vectors.

  • The opposite \( -\overrightarrow{u_{1}}=\begin{bmatrix}-x_{1}\\ {-y_{1}}\end{bmatrix}\) of \( \overrightarrow{u_{1}}\) is its reciprocal for the addition of vectors: \( \overrightarrow{u_{1}}+(-\overrightarrow{u_{1}}) =(-\overrightarrow{u_{1}})+\overrightarrow{u_{1}}=\overrightarrow{0}\)

Proof

Assume \( (x_1,y_1,x_2,y_2,x_3,y_3)\in\mathbb{R}^6\) are real numbers, and consider the three column vectors with 2 elements, \( \overrightarrow{u}_1=\begin{bmatrix}x_1\\ y_1\end{bmatrix}\) , \( \overrightarrow{u}_2=\begin{bmatrix}x_2\\ y_2\end{bmatrix}\) and \( \overrightarrow{u}_3=\begin{bmatrix}x_3\\ y_3\end{bmatrix}\) .

Commutativity. The addition of vectors is commutative because the addition of real numbers is.

Indeed:

\[ \begin{equation}\overrightarrow{u}_1+\overrightarrow{u}_2=\begin{bmatrix}x_1+x_{2}\\ y_1+y_{2}\end{bmatrix}=\begin{bmatrix}x_2+x_{1}\\ y_2+y_{1}\end{bmatrix}=\overrightarrow{u}_2+\overrightarrow{u}_1\end{equation} \]

(1)

Associativity. The addition of vectors is associative because the addition of real numbers is.

Indeed:

\[ \begin{eqnarray}(\overrightarrow{u}_1+\overrightarrow{u}_2)+\overrightarrow{u}_3 & = & \begin{bmatrix}x_1+x_{2}\\ y_1+y_{2}\end{bmatrix}+\begin{bmatrix}x_3\\ y_3\end{bmatrix} \\ & = & \begin{bmatrix}(x_1+x_{2})+x_{3}\\ (y_1+y_{2})+y_{3}\end{bmatrix}\\ & = & \begin{bmatrix}x_1+(x_{2}+x_{3})\\ y_1+(y_{2}+y_{3})\end{bmatrix}\\ & = & \begin{bmatrix}x_1\\ y_1\end{bmatrix}+\begin{bmatrix}x_{2}+x_3\\ y_{2}+y_3\end{bmatrix}\\ & = & \overrightarrow{u}_1+(\overrightarrow{u}_2+\overrightarrow{u}_3)\end{eqnarray} \]

(2)

Neutral element. The null vector \( \overrightarrow{0}=\begin{bmatrix}0\\ 0\end{bmatrix}\) is neutral for the addition of vectors because \( 0\) is neutral for the addition of real numbers.

Indeed:

\[ \begin{equation}\overrightarrow{u}_1+\overrightarrow{0}=\begin{bmatrix}x_1+0\\ y_1+0\end{bmatrix}=\begin{bmatrix}x_{1}\\ y_{1}\end{bmatrix}=\overrightarrow{u}_1\end{equation} \]

(3)

and

\[ \begin{equation}\overrightarrow{0}+\overrightarrow{u}_1=\begin{bmatrix}0+x_1\\ 0+y_1\end{bmatrix}=\begin{bmatrix}x_{1}\\ y_{1}\end{bmatrix}=\overrightarrow{u}_1\end{equation} \]

(4)

Opposite of a vector. The opposite \( -\overrightarrow{u_{1}}=\begin{bmatrix}-x_{1}\\ {-y_{1}}\end{bmatrix}\) of \( \overrightarrow{u_{1}}\) is its reciprocal for the addition of vectors because the opposite of a real number is its reciprocal for the addition of real numbers.

Indeed:

\[ \begin{equation}\overrightarrow{u}_1+(-\overrightarrow{u}_{1})=\begin{bmatrix}x_1\\ y_1\end{bmatrix}+\begin{bmatrix}-x_1\\ -y_1\end{bmatrix}=\begin{bmatrix}x_1+(-x_{1})\\ y_1+(-y_1)\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix}=\overrightarrow{0}\end{equation} \]

(5)

and

\[ \begin{equation}(-\overrightarrow{u}_{1})+\overrightarrow{u}_1=\begin{bmatrix}-x_1\\ -y_1\end{bmatrix}+\begin{bmatrix}x_1\\ y_1\end{bmatrix}=\begin{bmatrix}(-x_{1})+x_1\\ (-y_1)+y_1\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix}=\overrightarrow{0}\end{equation} \]

(6)
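The four assertions of theorem 10 can be checked numerically on sample vectors; a sketch (helper names are ours, and a numeric check is of course an illustration, not a proof):

```python
def vadd(u, v):
    return (u[0] + v[0], u[1] + v[1])

def vneg(u):
    # The opposite of a vector.
    return (-u[0], -u[1])

ZERO = (0, 0)
u1, u2, u3 = (1, 2), (-3, 5), (4, -1)

assert vadd(u1, u2) == vadd(u2, u1)                      # commutativity
assert vadd(vadd(u1, u2), u3) == vadd(u1, vadd(u2, u3))  # associativity
assert vadd(u1, ZERO) == u1 == vadd(ZERO, u1)            # neutral element
assert vadd(u1, vneg(u1)) == ZERO == vadd(vneg(u1), u1)  # opposite as reciprocal
```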

4.3 The Geometric View of the Opposite of a Vector

Figure 11. The geometric view of the opposite of a vector

Theorem 11

Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) and its opposite \( -\overrightarrow{u}=\begin{bmatrix}-x\\ {-y}\end{bmatrix}\) .

Then the following assertions hold:

  • \( -\overrightarrow{u}\) is aligned with \( \overrightarrow{u}\) ,

  • \( -\overrightarrow{u}\) is in the direction opposite to the direction of \( \overrightarrow{u}\) ,

  • and \( \left\|-\overrightarrow{u}\right\|=\left\|\overrightarrow{u}\right\|\) .

Proof

Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) and its opposite \( -\overrightarrow{u}=\begin{bmatrix}-x\\ {-y}\end{bmatrix}\) .

Then \( -\overrightarrow{u}=\begin{bmatrix}-x\\ -y\end{bmatrix} =\begin{bmatrix}(-1)x\\ (-1)y\end{bmatrix}\) , so that the abscissa and ordinate of \( -\overrightarrow{u}\) are proportional to the abscissa and ordinate of \( \overrightarrow{u}\) .

Consequently, \( -\overrightarrow{u}\) is aligned with \( \overrightarrow{u}\) .

Moreover, the abscissa and ordinate of \( -\overrightarrow{u}\) have signs opposite to those of the abscissa and ordinate of \( \overrightarrow{u}\) , respectively.

Consequently, \( -\overrightarrow{u}\) is in the direction opposite to the direction of \( \overrightarrow{u}\) .

Finally, \( \left\|-\overrightarrow{u}\right\| =\sqrt{(-x)^{2}+(-y)^{2}}=\sqrt{x^{2}+y^{2}} =\left\|\overrightarrow{u}\right\|\) .
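Theorem 11 can be illustrated numerically, with the norm \( \sqrt{x^{2}+y^{2}}\) computed via `math.hypot`; a sketch under the same tuple encoding:

```python
import math

def norm(v):
    # Euclidean length sqrt(x**2 + y**2) of a vector.
    return math.hypot(v[0], v[1])

u = (3.0, -4.0)
minus_u = (-u[0], -u[1])

# -u = (-1) * u, so it is aligned with u, points the opposite way,
# and has the same length.
assert norm(minus_u) == norm(u) == 5.0
```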

4.4 The canonical vector plane \( \mathbb{P}\)

The set \( \mathbb{P}\) of column vectors with two real elements defines the canonical vector plane.

The addition and subtraction are defined in \( \mathbb{P}\) as the element-by-element operations.

Then, because of the above properties of the addition of vectors, \( (\mathbb{P},+)\) is a commutative group.

4.5 The Geometrical View of the Addition of Vectors

Figure 12. The geometric view of the addition of vectors

Two vectors \( \overrightarrow{u}\) and \( \overrightarrow{v}\) in \( \mathbb{P}\) may be represented as “arrows” in the plane, with the end of \( \overrightarrow{u}\) coinciding with the beginning of \( \overrightarrow{v}\) .

We may then build a parallelogram with the beginning of a new version of \( \overrightarrow{v}\) coinciding with the beginning of \( \overrightarrow{u}\) , and the end of \( \overrightarrow{v}\) coinciding with the beginning of a new version of \( \overrightarrow{u}\) .

Then the diagonal of the parallelogram starting from the beginnings of the two vectors represents the sum \( \overrightarrow{u}+\overrightarrow{v}=\overrightarrow{v}+\overrightarrow{u}\) of the two vectors.

Proof (of the last assertion)

Assume \( (x,y,z,t)\in\mathbb{R}^4\) are real numbers, and consider the two column vectors with 2 elements, \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) and \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) .

Consider the vector \( \overrightarrow{w}=\begin{bmatrix}r\\ s\end{bmatrix}\) constructed as in figure 12.

Let’s prove geometrically that \( r=x+z\) , considering different cases of signs of \( x\) and \( z\) .

The proof that \( s=y+t\) proceeds similarly, with the \( y\) axis instead of the \( x\) axis.

The different cases are the following:

  1. \( x\) or \( z\) equal to \( 0\) .

  2. \( x\) and \( z\) positive.

  3. \( x\) and \( z\) of different signs.

  4. \( x\) and \( z\) negative.

Let's carry out the proof.

  1. Assume that \( x\) or \( z\) is equal to \( 0\) .

    As the construction of figure 12 is symmetric in \( \overrightarrow{u}\) and \( \overrightarrow{v}\) , we may assume that \( x=0\) .

    1. Assume that \( z=0\) .

      Figure 13. Add vectors with both abscissae equal to \( 0\)

      If, starting from the origin, we draw end-to-beginning the vertical vectors \( \overrightarrow{u}=\begin{bmatrix}0\\ y\end{bmatrix}\) and \( \overrightarrow{v}=\begin{bmatrix}0\\ t\end{bmatrix}\) , then the resulting vector \( \overrightarrow{w}=\begin{bmatrix}r\\ s\end{bmatrix}\) is vertical as well.

      Consequently, \( r=0\) , which is equal to \( x+z\) .

    2. Assume that \( z>0\) .

      Figure 14. Add vectors with \( x=0\) and \( z>0\)

      Starting from the origin, we draw end-to-beginning the vertical vector \( \overrightarrow{u}=\begin{bmatrix}0\\ y\end{bmatrix}\) and the vector \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) , with \( z>0\) .

      Then the distances to the origin of the projections on the \( x\) axis of the ends of the vector \( \overrightarrow{v}\) and the resulting vector \( \overrightarrow{w}\) are both equal to \( z\) .

      Moreover, the projection on the \( x\) axis of the end of the vector \( \overrightarrow{w}\) is to the right of the origin, so that \( r=|r|=z\) .

      Consequently, \( r=z\) , which is equal to \( x+z\) because \( x=0\) .

    3. Assume that \( z<0\) .

      Figure 15. Add vectors with \( x=0\) and \( z<0\)

      Starting from the origin, we draw end-to-beginning the vertical vector \( \overrightarrow{u}=\begin{bmatrix}0\\ y\end{bmatrix}\) and the vector \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) , with \( z<0\) .

      Then the distances to the origin of the projections on the \( x\) axis of the ends of the vector \( \overrightarrow{v}\) and the resulting vector \( \overrightarrow{w}\) are both equal to \( |z|\) , so that \( |r|=|z|=-z\) .

      Moreover, the projection on the \( x\) axis of the end of the vector \( \overrightarrow{w}\) is to the left of the origin, so that \( r=-|r|=-(-z)=z\) .

      Consequently, \( r=z\) , which is equal to \( x+z\) because \( x=0\) .

  2. Assume that \( x\) and \( z\) are both positive.

    Figure 16. Add vectors with \( x>0\) and \( z>0\)

    Starting from the origin, we draw end-to-beginning the vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with \( x>0\) and the vector \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) , with \( z>0\) .

    Then the distances between the projections on the \( x\) axis of the beginnings and the ends of the different vectors are the following.

    • For the vector \( \overrightarrow{u}\) , it is \( x\) .

    • For the vector \( \overrightarrow{v}\) , it is \( z\) .

    • And for the vector \( \overrightarrow{w}\) , it is \( x+z\) , so that \( |r|=x+z\) .

    • Moreover, the projection of the end of the vector \( \overrightarrow{w}\) on the \( x\) axis is to the right of the origin, so that \( r=|r|=x+z\) .

  3. Assume that \( x\) and \( z\) are of different signs.

    As the construction of figure 12 is symmetric in \( \overrightarrow{u}\) and \( \overrightarrow{v}\) , we may assume that \( x>0\) and thus \( z<0\) .

    1. Assume that \( x>|z|\) .

      Figure 17. Add vectors with \( x>0\) , \( z<0\) and \( x>|z|\)

      Starting from the origin, we draw end-to-beginning the vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with \( x>0\) and the vector \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) , with \( z<0\) and \( x>|z|\) .

      Then the distances between the projections on the \( x\) axis of the beginnings and the ends of the different vectors are the following.

      • For the vector \( \overrightarrow{u}\) , it is \( x\) .

      • For the vector \( \overrightarrow{v}\) , it is \( |z|=-z\) .

      • And for the vector \( \overrightarrow{w}\) , it is \( x-|z|=x+z\) , so that \( |r|=x+z\) .

      • Moreover, the projection on the \( x\) axis of the end of the vector \( \overrightarrow{w}\) is to the right of the origin, so that \( r=|r|=x+z\) .

    2. Assume that \( x=|z|\) , so that \( z=-x\) .

      Figure 18. Add vectors with \( x>0\) , \( z<0\) and \( z=-x\)

      Starting from the origin, we draw end-to-beginning the vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with \( x>0\) and the vector \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) , with \( z<0\) and \( z=-x\) .

      Then the distance between the projections on the \( x\) axis of the beginnings and the ends of the vectors \( \overrightarrow{u}\) and \( \overrightarrow{v}\) is \( x=|z|=-z\) .

      Moreover, the vector \( \overrightarrow{w}\) is along the \( y\) axis, so that \( r=0=x+z\) .

      Consequently, \( r=x+z\) .

    3. Assume that \( x<|z|\) .

      Figure 19. Add vectors with \( x>0\) , \( z<0\) and \( x<|z|\)

      Starting from the origin, we draw end-to-beginning the vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with \( x>0\) and the vector \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) , with \( z<0\) and \( x<|z|\) .

      Then the distances between the projections on the \( x\) axis of the beginnings and the ends of the different vectors are the following.

      • For the vector \( \overrightarrow{u}\) , it is \( x\) .

      • For the vector \( \overrightarrow{v}\) , it is \( |z|=-z\) .

      • And for the vector \( \overrightarrow{w}\) , it is \( |z|-x=-(x+z)\) , so that \( |r|=-(x+z)\) .

      • Moreover, the projection on the \( x\) axis of the end of the vector \( \overrightarrow{w}\) is to the left of the origin, so that \( r=-|r|=x+z\) .

  4. Assume that \( x\) and \( z\) are both negative.

    Figure 20. Add vectors with \( x<0\) and \( z<0\)

    Starting from the origin, we draw end-to-beginning the vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with \( x<0\) and the vector \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) , with \( z<0\) .

    Then the distances between the projections on the \( x\) axis of the beginnings and the ends of the different vectors are the following.

    • For the vector \( \overrightarrow{u}\) , it is \( |x|=-x\) .

    • For the vector \( \overrightarrow{v}\) , it is \( |z|=-z\) .

    • And for the vector \( \overrightarrow{w}\) , it is \( |x|+|z|=-(x+z)\) , so that \( |r|=-(x+z)\) .

    • Moreover, the projection on the \( x\) axis of the end of the vector \( \overrightarrow{w}\) is to the left of the origin, so that \( r=-|r|=x+z\) .

4.6 Subtracting a vector is adding its opposite

Theorem 12

Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements.

Then the following assertion holds:

\[ \begin{equation}\overrightarrow{u}-\overrightarrow{v}=\overrightarrow{u}+(-\overrightarrow{v})\end{equation} \]

(7)

Proof

Subtracting a vector is adding its opposite because subtracting a real number is adding its opposite.

Indeed, if we assume \( (x,y,z,t)\in\mathbb{R}^4\) are real numbers, and if we consider the two column vectors with 2 elements, \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) and \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) , then:

\[ \begin{equation}\overrightarrow{u}-\overrightarrow{v}=\begin{bmatrix}x-z\\ y-t\end{bmatrix}=\begin{bmatrix}x+(-z)\\ y+(-t)\end{bmatrix}=\overrightarrow{u}+(-\overrightarrow{v})\end{equation} \]

(8)
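Theorem 12 is itself an element-by-element computation; a quick numeric illustration (helpers ours):

```python
def vadd(u, v):
    return (u[0] + v[0], u[1] + v[1])

def vsub(u, v):
    return (u[0] - v[0], u[1] - v[1])

def vneg(v):
    return (-v[0], -v[1])

u, v = (2, 7), (5, -3)
# Subtracting v is the same as adding its opposite.
assert vsub(u, v) == vadd(u, vneg(v)) == (-3, 10)
```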

4.7 The mutual reciprocity of addition and subtraction

Theorem 13

Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements.

Then the following assertions hold:

  • \( (\overrightarrow{u}+\overrightarrow{v})-\overrightarrow{v}=\overrightarrow{u}\)

  • \( (\overrightarrow{u}-\overrightarrow{v})+\overrightarrow{v}=\overrightarrow{u}\)

Corollary 1

Assume \( (\overrightarrow{u},\overrightarrow{v},\overrightarrow{w})\in\mathbb{P}^3\) are column vectors with two real elements.

Then the following equivalences hold:

  1. \( \overrightarrow{w}=\overrightarrow{u}+\overrightarrow{v} \Leftrightarrow \overrightarrow{u}=\overrightarrow{w}-\overrightarrow{v}\)

  2. \( \overrightarrow{w}=\overrightarrow{u}-\overrightarrow{v} \Leftrightarrow \overrightarrow{u}=\overrightarrow{w}+\overrightarrow{v}\)

  3. \( \overrightarrow{w}=\overrightarrow{v}-\overrightarrow{u} \Leftrightarrow \overrightarrow{u}=\overrightarrow{v}-\overrightarrow{w}\)

Proof (of theorem 13)

Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements.

Then, because of the following facts:

  1. subtracting a vector is adding its opposite,

  2. the addition of vectors is associative,

  3. the opposite of a vector is its reciprocal for the addition of vectors,

  4. and the null vector is neutral for the addition of vectors,

the following calculations may be performed:

  • \( (\overrightarrow{u}+\overrightarrow{v})-\overrightarrow{v} =(\overrightarrow{u}+\overrightarrow{v})+(-\overrightarrow{v}) =\overrightarrow{u}+(\overrightarrow{v}+(-\overrightarrow{v})) =\overrightarrow{u}+\overrightarrow{0} =\overrightarrow{u}\)

  • \( (\overrightarrow{u}-\overrightarrow{v})+\overrightarrow{v} =(\overrightarrow{u}+(-\overrightarrow{v}))+\overrightarrow{v} =\overrightarrow{u}+((-\overrightarrow{v})+\overrightarrow{v}) =\overrightarrow{u}+\overrightarrow{0} =\overrightarrow{u}\)
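The two cancellation identities just proved can be checked on sample vectors; a sketch (helpers ours):

```python
def vadd(u, v):
    return (u[0] + v[0], u[1] + v[1])

def vsub(u, v):
    return (u[0] - v[0], u[1] - v[1])

u, v = (4, -2), (1, 6)
assert vsub(vadd(u, v), v) == u  # (u + v) - v = u
assert vadd(vsub(u, v), v) == u  # (u - v) + v = u
```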

Proof (of corollary 1)

Assume \( (\overrightarrow{u},\overrightarrow{v},\overrightarrow{w})\in\mathbb{P}^3\) are column vectors with two real elements.

  1. If we subtract \( \overrightarrow{v}\) to both members of the equality \( \overrightarrow{w}=\overrightarrow{u}+\overrightarrow{v}\) , we see that it is equivalent to: \( \overrightarrow{w}-\overrightarrow{v}=(\overrightarrow{u}+\overrightarrow{v})-\overrightarrow{v}=\overrightarrow{u}\) , because of theorem 13.

  2. If we add \( \overrightarrow{v}\) to both members of the equality \( \overrightarrow{w}=\overrightarrow{u}-\overrightarrow{v}\) , we see that it is equivalent to: \( \overrightarrow{w}+\overrightarrow{v}=(\overrightarrow{u}-\overrightarrow{v})+\overrightarrow{v}=\overrightarrow{u}\) , because of theorem 13.

  3. If we add \( \overrightarrow{u}\) to both sides of the equality \( \overrightarrow{w}=\overrightarrow{v}-\overrightarrow{u}\) , we see that it is equivalent to: \( \overrightarrow{w}+\overrightarrow{u}=(\overrightarrow{v}-\overrightarrow{u})+\overrightarrow{u}=\overrightarrow{v}\) , because of theorem 13.

    Because the addition of vectors is commutative, the last equality is equivalent to: \( \overrightarrow{v}=\overrightarrow{u}+\overrightarrow{w}\) .

    And if we exchange the roles of \( \overrightarrow{v}\) and \( \overrightarrow{w}\) in item (1), we see that the last equality is equivalent to: \( \overrightarrow{u}=\overrightarrow{v}-\overrightarrow{w}\) .
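As a purely numerical illustration of corollary 1 (not part of the original text), the three equivalences can be checked on sample vectors with a short Python sketch; the helper names `vadd` and `vsub` are ours:

```python
# Componentwise helpers for column vectors with 2 elements,
# represented here as plain Python lists (an illustrative choice).
def vadd(u, v):
    return [u[0] + v[0], u[1] + v[1]]

def vsub(u, v):
    return [u[0] - v[0], u[1] - v[1]]

u = [3, -1]
v = [2, 5]

# Item 1: w = u + v is equivalent to u = w - v.
w = vadd(u, v)
assert vsub(w, v) == u

# Item 2: w = u - v is equivalent to u = w + v.
w = vsub(u, v)
assert vadd(w, v) == u

# Item 3: w = v - u is equivalent to u = v - w.
w = vsub(v, u)
assert vsub(v, w) == u
```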

5 Multiply or Divide a Vector by a Scalar

5.1 The multiplication and division of a vector by a scalar

Definition 2

Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.

Assume \( \lambda\in\mathbb{R}\) is a real number.

Then we multiply and divide the vector \( \overrightarrow{u}\) by the scalar \( \lambda\) as follows:

  • \( \lambda\overrightarrow{u}=\begin{bmatrix}\lambda x\\ \lambda y\end{bmatrix}\) is the product element by element of \( \overrightarrow{u}\) by \( \lambda\) ,

  • and, provided \( \lambda\neq 0\) , \( \frac{\overrightarrow{u}}{\lambda}=\begin{bmatrix}\frac{x}{\lambda}\\ \frac{y}{\lambda}\end{bmatrix}\) is the quotient element by element of \( \overrightarrow{u}\) by \( \lambda\) .
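Definition 2 translates directly into componentwise code. Here is a minimal Python sketch (the helper names `smul` and `sdiv` are ours, not from the text):

```python
def smul(lam, u):
    """Product of the vector u by the scalar lam, element by element."""
    return [lam * u[0], lam * u[1]]

def sdiv(u, lam):
    """Quotient of u by the scalar lam, element by element (lam must be non zero)."""
    if lam == 0:
        raise ZeroDivisionError("a vector cannot be divided by the scalar 0")
    return [u[0] / lam, u[1] / lam]

u = [4.0, -6.0]
assert smul(3, u) == [12.0, -18.0]
assert sdiv(u, 2) == [2.0, -3.0]
```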

5.2 Geometric properties of vectors multiplied and divided by scalars

5.2.1 Multiply or divide a non zero vector by the scalar \( 1\)

Figure 21. Multiply or divide a non zero vector by the scalar \( 1\)

Theorem 14

Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) .

Then the following assertions hold:

  • If we multiply the vector \( \overrightarrow{u}\) by the scalar \( 1\) , we obtain the vector \( \overrightarrow{u}\) : \( 1.\overrightarrow{u}=\overrightarrow{u}\) .

  • If we divide the vector \( \overrightarrow{u}\) by the scalar \( 1\) , we obtain the vector \( \overrightarrow{u}\) : \( \frac{\overrightarrow{u}}{1}=\overrightarrow{u}\) .

Proof

This theorem will be proved without using the fact that the vector is not the null vector.

Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.

Then we may perform the following calculations:

  • \( 1.\overrightarrow{u}=\begin{bmatrix}1\times x\\ 1\times y\end{bmatrix} =\begin{bmatrix}x\\ y\end{bmatrix}=\overrightarrow{u}\) ,

  • and \( \frac{\overrightarrow{u}}{1}=\begin{bmatrix}\frac{x}{1}\\ \frac{y}{1}\end{bmatrix} =\begin{bmatrix}x\\ y\end{bmatrix}=\overrightarrow{u}\) .

5.2.2 Multiply or divide a non zero vector by the scalar \( -1\)

Figure 22. Multiply or divide a non zero vector by the scalar \( -1\)

Theorem 15

Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) .

Then the following assertions hold:

  • If we multiply the vector \( \overrightarrow{u}\) by the scalar \( -1\) , we obtain the opposite \( -\overrightarrow{u}\) of the vector \( \overrightarrow{u}\) : \( (-1).\overrightarrow{u}=-\overrightarrow{u}\) .

  • If we divide the vector \( \overrightarrow{u}\) by the scalar \( -1\) , we obtain the opposite \( -\overrightarrow{u}\) of the vector \( \overrightarrow{u}\) : \( \frac{\overrightarrow{u}}{-1}=-\overrightarrow{u}\) .

Proof

This theorem will be proved without using the fact that the vector is not the null vector.

Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.

Then we may perform the following calculations:

  • \( (-1).\overrightarrow{u}=\begin{bmatrix}(-1)\times x\\ (-1)\times y\end{bmatrix} =\begin{bmatrix}-x\\ -y\end{bmatrix}=-\overrightarrow{u}\) ,

  • and \( \frac{\overrightarrow{u}}{-1}=\begin{bmatrix}\frac{x}{-1}\\ \frac{y}{-1}\end{bmatrix} =\begin{bmatrix}-x\\ {-y}\end{bmatrix}=-\overrightarrow{u}\) .

5.2.3 Multiply a non zero vector by a positive scalar

Figure 23. Multiply a non zero vector by a positive scalar

Theorem 16

Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) , and \( \lambda\in\mathbb{R}_+^*\) is a real number such that \( \lambda>0\) .

Then, if we multiply the vector \( \overrightarrow{u}\) by the scalar \( \lambda\) , we obtain a vector
\( \overrightarrow{v}=\lambda\overrightarrow{u}\) such that:

  • \( \overrightarrow{v}\) is aligned with \( \overrightarrow{u}\) ,

  • and \( \overrightarrow{v}\) is in the same direction as \( \overrightarrow{u}\) .

Proof

Assume \( (x,y)\in\mathbb{R}^2\) are real numbers that are not both zero, and consider the non zero column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.

Assume \( \lambda\in\mathbb{R}_+^*\) is a real number such that \( \lambda>0\) , and consider the column vector \( \overrightarrow{v}=\lambda\overrightarrow{u}\) .

Then \( \overrightarrow{v}=\begin{bmatrix}\lambda x\\ \lambda y\end{bmatrix}\) , so that the abscissa and ordinate of \( \overrightarrow{v}\) are proportional to the abscissa and ordinate of \( \overrightarrow{u}\) .

Consequently, \( \overrightarrow{v}\) is aligned with \( \overrightarrow{u}\) .

Moreover, because \( \lambda>0\) , the abscissa and ordinate of \( \overrightarrow{v}\) have the same signs as the abscissa and ordinate of \( \overrightarrow{u}\) respectively.

Consequently, \( \overrightarrow{v}\) is in the same direction as \( \overrightarrow{u}\) .
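The two assertions of theorem 16 can be checked numerically; the sketch below (with our own helper names) tests alignment with the determinant \( xt-yz\) , which is zero exactly when two vectors are aligned:

```python
def smul(lam, u):
    return [lam * u[0], lam * u[1]]

u = [3.0, -2.0]
v = smul(2.5, u)  # multiply by a positive scalar

# Alignment: the determinant of two aligned vectors is zero.
det = u[0] * v[1] - u[1] * v[0]
assert det == 0.0

# Same direction: corresponding elements keep their signs.
assert all(a * b > 0 for a, b in zip(u, v))
```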

5.2.4 Divide a non zero vector by a positive scalar

Figure 24. Divide a non zero vector by a positive scalar

Theorem 17

Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) , and \( \lambda\in\mathbb{R}_+^*\) is a real number such that \( \lambda>0\) .

Then, if we divide the vector \( \overrightarrow{u}\) by the scalar \( \lambda\) , we obtain a vector \( \overrightarrow{v}=\frac{\overrightarrow{u}}{\lambda}\) such that:

  • \( \overrightarrow{v}\) is aligned with \( \overrightarrow{u}\) ,

  • \( \overrightarrow{v}\) is in the same direction as \( \overrightarrow{u}\) ,

  • and \( \overrightarrow{v}=\frac{1}{\lambda}\overrightarrow{u}\) .

Proof

Assume \( (x,y)\in\mathbb{R}^2\) are real numbers that are not both zero, and consider the non zero column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.

Assume \( \lambda\in\mathbb{R}_+^*\) is a real number such that \( \lambda>0\) , and consider the column vector \( \overrightarrow{v}=\frac{\overrightarrow{u}}{\lambda}\) .

Then \( \overrightarrow{v} =\begin{bmatrix}\frac{x}{\lambda}\\ \frac{y}{\lambda}\end{bmatrix} =\begin{bmatrix}\frac{1}{\lambda}x\\ \frac{1}{\lambda}y\end{bmatrix} =\frac{1}{\lambda}\overrightarrow{u}\) , which proves the last item of the theorem.

Moreover, as \( \frac{1}{\lambda}>0\) , the first two items derive from theorem 16.

5.2.5 Multiply a non zero vector by a negative scalar

Figure 25. Multiply a non zero vector by a negative scalar

Theorem 18

Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) , and \( \lambda\in\mathbb{R}_-^*\) is a real number such that \( \lambda<0\) .

Then, if we multiply the vector \( \overrightarrow{u}\) by the scalar \( \lambda\) , we obtain a vector
\( \overrightarrow{v}=\lambda\overrightarrow{u}\) such that:

  • \( \overrightarrow{v}\) is aligned with \( \overrightarrow{u}\) ,

  • \( \overrightarrow{v}\) is in the direction opposite to the direction of \( \overrightarrow{u}\) ,

  • and \( \overrightarrow{v}\) is the opposite \( -\left|\lambda\right|\overrightarrow{u}\) of the product of \( \overrightarrow{u}\) by the absolute value of \( \lambda\) .

Proof

Assume \( (x,y)\in\mathbb{R}^2\) are real numbers that are not both zero, and consider the non zero column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.

Assume \( \lambda\in\mathbb{R}_-^*\) is a real number such that \( \lambda<0\) , and consider the column vector \( \overrightarrow{v}=\lambda\overrightarrow{u}\) .

Then \( \overrightarrow{v}=\begin{bmatrix}\lambda x\\ \lambda y\end{bmatrix}\) , so that the abscissa and ordinate of \( \overrightarrow{v}\) are proportional to the abscissa and ordinate of \( \overrightarrow{u}\) .

Consequently, \( \overrightarrow{v}\) is aligned with \( \overrightarrow{u}\) .

Moreover, because \( \lambda<0\) , the abscissa and ordinate of \( \overrightarrow{v}\) have signs opposite to those of the abscissa and ordinate of \( \overrightarrow{u}\) respectively.

Consequently, \( \overrightarrow{v}\) is in the direction opposite to the direction of \( \overrightarrow{u}\) .

Finally, as \( \lambda<0\) , \( \lambda\) is the opposite \( -\left|\lambda\right|\) of its absolute value.

Consequently, the following calculations may be performed:

\( \overrightarrow{v} =\begin{bmatrix}(-\left|\lambda\right|) x\\ (-\left|\lambda\right|) y\end{bmatrix} =\begin{bmatrix}-\left|\lambda\right| x\\ -\left|\lambda\right| y\end{bmatrix} =-\left|\lambda\right|\begin{bmatrix} x\\ y\end{bmatrix} =-\left|\lambda\right|\overrightarrow{u}\)

5.2.6 Divide a non zero vector by a negative scalar

Figure 26. Divide a non zero vector by a negative scalar

Theorem 19

Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) , and \( \lambda\in\mathbb{R}_-^*\) is a real number such that \( \lambda<0\) .

Then, if we divide the vector \( \overrightarrow{u}\) by the scalar \( \lambda\) , we obtain a vector \( \overrightarrow{v}=\frac{\overrightarrow{u}}{\lambda}\) such that:

  • \( \overrightarrow{v}\) is aligned with \( \overrightarrow{u}\) ,

  • \( \overrightarrow{v}\) is in the direction opposite to the direction of \( \overrightarrow{u}\) ,

  • and \( \overrightarrow{v}\) is the opposite \( -\frac{1}{\left|\lambda\right|}\overrightarrow{u}\) of the product of \( \overrightarrow{u}\) by the inverse of the absolute value of \( \lambda\) .

Proof

Assume \( (x,y)\in\mathbb{R}^2\) are real numbers that are not both zero, and consider the non zero column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.

Assume \( \lambda\in\mathbb{R}_-^*\) is a real number such that \( \lambda<0\) , and consider the column vector \( \overrightarrow{v}=\frac{\overrightarrow{u}}{\lambda}\) .

Then \( \overrightarrow{v} =\begin{bmatrix}\frac{x}{\lambda}\\ \frac{y}{\lambda}\end{bmatrix} =\begin{bmatrix}\frac{1}{\lambda}x\\ \frac{1}{\lambda}y\end{bmatrix} =\frac{1}{\lambda}\overrightarrow{u}\) .

Consequently, as \( \frac{1}{\lambda}<0\) , the first two items derive from theorem 18.

Finally, \( \lambda=-\left|\lambda\right|\) is the opposite of the absolute value of \( \lambda\) , and its inverse \( \frac{1}{\lambda}\) is the opposite \( -\frac{1}{\left|\lambda\right|}\) of the inverse of the absolute value of \( \lambda\) .

Consequently, the following calculations may be performed:

\( \overrightarrow{v} =\begin{bmatrix}-\left(\frac{1}{\left|\lambda\right|}\right)x\\ {-}\left(\frac{1}{\left|\lambda\right|}\right)y\end{bmatrix} =\begin{bmatrix}-\frac{1}{\left|\lambda\right|}x\\ {-}\frac{1}{\left|\lambda\right|}y\end{bmatrix} =-\frac{1}{\left|\lambda\right|}\overrightarrow{u}\)

5.2.7 Multiply a vector by the scalar \( 0\)

Figure 27. Multiply a vector by the scalar \( 0\)

Theorem 20

Assume \( \overrightarrow{u}\in\mathbb{P}\) is a column vector with two real elements.

Then the following assertions hold:

  • If we multiply the vector \( \overrightarrow{u}\) by the scalar \( 0\) , we obtain the null vector \( \overrightarrow{0}\) : \( (0).\overrightarrow{u}=\overrightarrow{0}\) .

  • The vector \( \overrightarrow{u}\) cannot be divided by the scalar \( 0\) .

Proof

Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.

Then we may perform the following calculations:

  • \( 0.\overrightarrow{u}=\begin{bmatrix}0\times x\\ 0\times y\end{bmatrix} =\begin{bmatrix}0\\ 0\end{bmatrix}=\overrightarrow{0}\)

  • The vector \( \overrightarrow{u}\) cannot be divided by the scalar \( 0\) because its coordinates cannot be divided by \( 0\) .

5.2.8 Multiply or divide the null vector by a scalar

Theorem 21

Assume \( \lambda\in\mathbb{R}\) is a real number.

Then the following assertions hold:

  • If we multiply the null vector \( \overrightarrow{0}\) by the scalar \( \lambda\) , we obtain the null vector \( \overrightarrow{0}\) : \( \lambda.\overrightarrow{0}=\overrightarrow{0}\) .

  • If \( \lambda\ne 0\) and we divide the null vector \( \overrightarrow{0}\) by the scalar \( \lambda\) , we obtain the null vector \( \overrightarrow{0}\) : \( \frac{\overrightarrow{0}}{\lambda}=\overrightarrow{0}\) .

Proof

Assume \( \lambda\in\mathbb{R}\) is a real number.

Then we may perform the following calculations:

  • \( \lambda.\overrightarrow{0}=\begin{bmatrix}\lambda\times 0\\ \lambda\times 0\end{bmatrix} =\begin{bmatrix}0\\ 0\end{bmatrix}=\overrightarrow{0}\) ,

  • Assume \( \lambda\ne 0\).

    Then \( \frac{\overrightarrow{0}}{\lambda}=\begin{bmatrix}\frac{0}{\lambda}\\ \frac{0}{\lambda}\end{bmatrix} =\begin{bmatrix}0\\ 0\end{bmatrix}=\overrightarrow{0}\) .
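Theorems 20 and 21 amount to simple componentwise facts, which a short Python sketch (with our own `smul` helper) makes concrete:

```python
def smul(lam, u):
    return [lam * u[0], lam * u[1]]

zero = [0, 0]
u = [8, -5]

assert smul(0, u) == zero     # theorem 20: 0 times any vector is the null vector
assert smul(7, zero) == zero  # theorem 21: any scalar times the null vector is the null vector
assert [0 / 7, 0 / 7] == zero # and the null vector divided by a non zero scalar stays null
```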

5.3 A new view of the coordinates of a vector

Theorem 22

Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.

Consider the canonical base of the vector plane \( \mathbb{P}\) : \( \overrightarrow{i}=\begin{bmatrix}1\\ 0\end{bmatrix}\) and \( \overrightarrow{j}=\begin{bmatrix}0\\ 1\end{bmatrix}\) .

Then the coordinates of \( \overrightarrow{u}\) in the canonical base are:

  • its abscissa \( x\) ,

  • and its ordinate \( y\) .

Moreover, the following identity holds:

  • \( \overrightarrow{u}=x\overrightarrow{i}+y\overrightarrow{j}\) .

Proof

Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.

Consider the canonical base of the vector plane \( \mathbb{P}\) : \( \overrightarrow{i}=\begin{bmatrix}1\\ 0\end{bmatrix}\) and \( \overrightarrow{j}=\begin{bmatrix}0\\ 1\end{bmatrix}\) .

Then, as we have seen in paragraph 3, the abscissa of the vector \( \overrightarrow{u}\) in the canonical base is its first element \( x\) and its ordinate is its second element \( y\) .

Moreover, the following calculations may be performed:

\( x\overrightarrow{i}+y\overrightarrow{j} =x\begin{bmatrix}1\\ 0\end{bmatrix}+y\begin{bmatrix}0\\ 1\end{bmatrix} =\begin{bmatrix}x\times 1+y\times 0\\ x\times 0+y\times 1\end{bmatrix} =\begin{bmatrix}x\\ y\end{bmatrix} =\overrightarrow{u}\)
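The identity \( \overrightarrow{u}=x\overrightarrow{i}+y\overrightarrow{j}\) can be replayed numerically, as in this sketch (the helpers are our own):

```python
def smul(lam, u):
    return [lam * u[0], lam * u[1]]

def vadd(u, v):
    return [u[0] + v[0], u[1] + v[1]]

i = [1, 0]  # canonical base of the vector plane
j = [0, 1]

x, y = 7, -4
u = [x, y]
assert vadd(smul(x, i), smul(y, j)) == u  # u = x·i + y·j
```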

5.4 Mutual reciprocity of the multiplication and division of a vector by a non zero scalar

Theorem 23

Assume \( \overrightarrow{u}\in\mathbb{P}\) is a column vector with two real elements, and \( \lambda\in\mathbb{R}^*\) is a real number such that \( \lambda\neq 0\) .

Then the following assertions hold:

  • \( \lambda\frac{\overrightarrow{u}}{\lambda}=\overrightarrow{u}\) ,

  • and \( \frac{\lambda\overrightarrow{u}}{\lambda}=\overrightarrow{u}\) .

Lemma 1

Assume \( \overrightarrow{u}\in\mathbb{P}\) is a column vector with two real elements, and \( \lambda\in\mathbb{R}^*\) is a real number such that \( \lambda\neq 0\) .

Then the following assertion holds:

  • \( \frac{\overrightarrow{u}}{\lambda}=\frac{1}{\lambda}\overrightarrow{u}\) .

Lemma 2

Assume \( \overrightarrow{u}\in\mathbb{P}\) is a column vector with two real elements, and \( (\alpha,\beta)\in\mathbb{R}^{2}\) are real numbers.

Then the following associativity property holds:

  • \( \alpha(\beta\overrightarrow{u})=(\alpha\beta)\overrightarrow{u}\)

Corollary 2

Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements, and \( \lambda\in\mathbb{R}^*\) is a real number such that \( \lambda\neq 0\) .

Then the following equivalences hold:

  • \( \overrightarrow{v}=\lambda\overrightarrow{u} \Leftrightarrow \overrightarrow{u}=\frac{\overrightarrow{v}}{\lambda}\) ,

  • and \( \overrightarrow{v}=\frac{\overrightarrow{u}}{\lambda} \Leftrightarrow \overrightarrow{u}=\lambda\overrightarrow{v}\)

Proof (of lemma 1)

Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.

Assume \( \lambda\in\mathbb{R}^*\) is a real number such that \( \lambda\neq 0\) .

Then we may perform the following calculations:

\( \frac{\overrightarrow{u}}{\lambda} =\begin{bmatrix}\frac{x}{\lambda}\\ \frac{y}{\lambda}\end{bmatrix} =\begin{bmatrix}\frac{1}{\lambda}x\\ \frac{1}{\lambda}y\end{bmatrix} =\frac{1}{\lambda}\begin{bmatrix}x\\ y\end{bmatrix} =\frac{1}{\lambda}\overrightarrow{u}\)

Proof (of lemma 2)

The associativity property is a consequence of the associativity of the multiplication of real numbers.

Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) with 2 elements.

Assume \( (\alpha,\beta)\in\mathbb{R}^{2}\) are real numbers.

Then we may perform the following calculations:

\( \alpha(\beta\overrightarrow{u}) =\alpha\begin{bmatrix}\beta x\\ \beta y\end{bmatrix} =\begin{bmatrix}\alpha(\beta x)\\ \alpha(\beta y)\end{bmatrix} =\begin{bmatrix}(\alpha\beta) x\\ (\alpha\beta) y\end{bmatrix} =(\alpha\beta)\begin{bmatrix} x\\ y\end{bmatrix} =(\alpha\beta)\overrightarrow{u}\)

Proof (of theorem 23)

Assume \( \overrightarrow{u}\in\mathbb{P}\) is a column vector with two real elements, and \( \lambda\in\mathbb{R}^*\) is a real number such that \( \lambda\neq 0\) .

Then we may apply lemmas 1 and 2 in the following calculations:

  • \( \lambda\frac{\overrightarrow{u}}{\lambda} =\lambda\left( \frac{1}{\lambda}\overrightarrow{u} \right) =\left( \lambda\frac{1}{\lambda} \right)\overrightarrow{u} =1.\overrightarrow{u} =\overrightarrow{u}\) ,

  • and \( \frac{\lambda\overrightarrow{u}}{\lambda} =\frac{1}{\lambda} (\lambda\overrightarrow{u}) =\left( \frac{1}{\lambda} \lambda \right)\overrightarrow{u} =1.\overrightarrow{u} =\overrightarrow{u}\) .

Proof (of corollary 2)

Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements, and \( \lambda\in\mathbb{R}^*\) is a real number such that \( \lambda\neq 0\) .

Then, if we divide by \( \lambda\) the two sides of the equality \( \overrightarrow{v}=\lambda\overrightarrow{u}\) , we see that it is equivalent to: \( \frac{\overrightarrow{v}}{\lambda}=\frac{\lambda\overrightarrow{u}}{\lambda}=\overrightarrow{u}\) .

And if we multiply by \( \lambda\) the two sides of the equality \( \overrightarrow{v}=\frac{\overrightarrow{u}}{\lambda}\) , we see that it is equivalent to: \( \lambda\overrightarrow{v}=\lambda\frac{\overrightarrow{u}}{\lambda}=\overrightarrow{u}\) .
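Theorem 23 and corollary 2 can also be checked numerically; the example below uses a scalar and coordinates chosen so that the floating-point divisions are exact (the helper names are ours):

```python
def smul(lam, u):
    return [lam * u[0], lam * u[1]]

def sdiv(u, lam):
    return [u[0] / lam, u[1] / lam]

u = [6.0, -9.0]
lam = 3.0

assert smul(lam, sdiv(u, lam)) == u  # λ · (u / λ) = u
assert sdiv(smul(lam, u), lam) == u  # (λ · u) / λ = u

# Corollary 2: v = λ·u is equivalent to u = v/λ.
v = smul(lam, u)
assert sdiv(v, lam) == u
```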

6 The Structure of Vector Space of \( (\mathbb{P},+,.)\)

We shall now mix the addition of vectors and the multiplication of vectors by scalars to build a new kind of algebraic structure, the structure of a vector space.

6.1 Addition and external multiplication joint properties

Theorem 24

Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements, and \( (\alpha,\beta)\in\mathbb{R}^2\) are real numbers.

Then the following assertions hold:

  • First distributivity law: \( \alpha(\overrightarrow{u}+\overrightarrow{v})=\alpha\overrightarrow{u}+\alpha\overrightarrow{v}\)

  • Second distributivity law: \( (\alpha+\beta)\overrightarrow{u}=\alpha\overrightarrow{u}+\beta\overrightarrow{u}\)

  • Associativity law: \( \alpha(\beta\overrightarrow{u})=(\alpha\beta)\overrightarrow{u}\)

Proof

Assume \( (x,y,z,t)\in\mathbb{R}^4\) are real numbers, and consider the two column vectors with 2 elements, \( \overrightarrow{u}=\begin{bmatrix}x\\ y\end{bmatrix}\) and \( \overrightarrow{v}=\begin{bmatrix}z\\ t\end{bmatrix}\) .

Assume \( (\alpha,\beta)\in\mathbb{R}^2\) are real numbers.

First distributivity law

The first distributivity law is a consequence of the distributivity of the multiplication on the addition in \( \mathbb{R}\) .

Indeed, we may perform the following calculations:

\( \alpha(\overrightarrow{u}+\overrightarrow{v}) =\alpha\left(\begin{bmatrix}x\\ y\end{bmatrix}+\begin{bmatrix}z\\ t\end{bmatrix}\right) =\alpha\begin{bmatrix}x+z\\ y+t\end{bmatrix} =\begin{bmatrix}\alpha(x+z)\\ \alpha(y+t)\end{bmatrix} =\begin{bmatrix}\alpha x+\alpha z\\ \alpha y+\alpha t\end{bmatrix}\)

\( =\begin{bmatrix}\alpha x\\ \alpha y\end{bmatrix}+\begin{bmatrix}\alpha z\\ \alpha t\end{bmatrix} =\alpha\begin{bmatrix} x\\ y\end{bmatrix}+\alpha\begin{bmatrix}z\\ t\end{bmatrix} =\alpha\overrightarrow{u}+\alpha\overrightarrow{v}\)

Second distributivity law

The second distributivity law is a consequence of the distributivity of the multiplication on the addition in \( \mathbb{R}\) as well.

Indeed, we may perform the following calculations:

\( (\alpha+\beta)\overrightarrow{u} =(\alpha+\beta)\begin{bmatrix}x\\ y\end{bmatrix} =\begin{bmatrix}(\alpha+\beta)x\\ (\alpha+\beta)y\end{bmatrix} =\begin{bmatrix}\alpha x+\beta x\\ \alpha y+\beta y\end{bmatrix} =\begin{bmatrix}\alpha x\\ \alpha y\end{bmatrix} +\begin{bmatrix}\beta x\\ \beta y\end{bmatrix}\)

\( =\alpha\begin{bmatrix} x\\ y\end{bmatrix} +\beta \begin{bmatrix} x\\ y\end{bmatrix} =\alpha\overrightarrow{u}+\beta\overrightarrow{u}\)

Associativity law

The associativity law was already stated in lemma 2.
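The three laws of theorem 24 can be replayed on sample values with a short Python sketch (the helpers are our own):

```python
def vadd(u, v):
    return [u[0] + v[0], u[1] + v[1]]

def smul(lam, u):
    return [lam * u[0], lam * u[1]]

u, v = [2, 3], [-1, 4]
a, b = 5, -2

# First distributivity law: α(u + v) = αu + αv
assert smul(a, vadd(u, v)) == vadd(smul(a, u), smul(a, v))
# Second distributivity law: (α + β)u = αu + βu
assert smul(a + b, u) == vadd(smul(a, u), smul(b, u))
# Associativity law: α(βu) = (αβ)u
assert smul(a, smul(b, u)) == smul(a * b, u)
```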

Because of the following properties:

  • \( (\mathbb{P},+)\) is a commutative group,

  • the two distributivity laws hold in \( \mathbb{P}\) ,

  • and the associativity law holds in \( \mathbb{P}\) ,

\( (\mathbb{P},+,.)\) has the structure of a vector space.

And because of the following facts:

  • \( (\mathbb{P},+,\cdot)\) has the structure of a vector space,

  • and the canonical base of \( \mathbb{P}\) is made of \( 2\) vectors,

the vector space is said to be of dimension \( 2\) , or to be a vector plane, which justifies the fact that we call it “the vector plane \( \mathbb{P}\) ”.

6.2 Subtraction and external multiplication joint properties

Theorem 25

Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements, and \( (\alpha,\beta)\in\mathbb{R}^2\) are real numbers.

Then the following assertions hold:

  • Signs law: \( (-\alpha)\overrightarrow{u}=\alpha(-\overrightarrow{u})=-\alpha\overrightarrow{u}\)

  • First distributivity law: \( \alpha(\overrightarrow{u}-\overrightarrow{v})=\alpha\overrightarrow{u}-\alpha\overrightarrow{v}\)

  • Second distributivity law: \( (\alpha-\beta)\overrightarrow{u}=\alpha\overrightarrow{u}-\beta\overrightarrow{u}\)

Proof

Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements, and \( (\alpha,\beta)\in\mathbb{R}^2\) are real numbers.

Signs law

The signs law is a consequence of the associativity law and of the following facts:

  • \( -\alpha=(-1)\times\alpha=\alpha\times(-1)\) ,

  • \( -\overrightarrow{u}=(-1)\overrightarrow{u}\) ,

  • and \( -\alpha\overrightarrow{u}=(-1)(\alpha\overrightarrow{u})\) .

Indeed, we may perform the following calculations:

\( (-\alpha)\overrightarrow{u} =(\alpha\times(-1))\overrightarrow{u} =\alpha((-1)\overrightarrow{u}) =\alpha(-\overrightarrow{u})\) ,

and

\( (-\alpha)\overrightarrow{u} =((-1)\times\alpha)\overrightarrow{u} =(-1)(\alpha\overrightarrow{u}) =-\alpha\overrightarrow{u}\) .

First distributivity law

The first distributivity law is a consequence of the signs law and of the fact that subtracting a vector is adding its opposite.

Indeed, we may perform the following calculations:

\( \alpha(\overrightarrow{u}-\overrightarrow{v}) =\alpha(\overrightarrow{u}+(-\overrightarrow{v})) =\alpha\overrightarrow{u}+\alpha(-\overrightarrow{v}) =\alpha\overrightarrow{u}+(-\alpha\overrightarrow{v}) =\alpha\overrightarrow{u}-\alpha\overrightarrow{v}\)

Second distributivity law

The second distributivity law is a consequence of the signs law and of the fact that subtracting a real number is adding its opposite.

Indeed, we may perform the following calculations:

\( (\alpha-\beta)\overrightarrow{u} =(\alpha+(-\beta))\overrightarrow{u} =\alpha\overrightarrow{u}+(-\beta)\overrightarrow{u} =\alpha\overrightarrow{u}+(-\beta\overrightarrow{u}) =\alpha\overrightarrow{u}-\beta\overrightarrow{u}\)

6.3 Consequences for the homotheties in \( \mathbb{P}\)

6.3.1 The homotheties are linear mappings in \( \mathbb{P}\)

Theorem 26

Assume \( \lambda\in\mathbb{R}\) is a real number, and consider the homothety of factor \( \lambda\) in \( \mathbb{P}\) :

\[ \begin{equation}\begin{matrix}h_{\lambda}:&\mathbb{P}&\rightarrow&\mathbb{P}\\ \\ &\overrightarrow{u}&\mapsto&h_{\lambda}(\overrightarrow{u})=\lambda\overrightarrow{u}\end{matrix}\end{equation} \]

(9)

Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements, and \( \alpha\in\mathbb{R}\) is a real number.

Then the following assertions hold:

  • \( h_{\lambda}(\overrightarrow{u}+\overrightarrow{v}) =h_{\lambda}(\overrightarrow{u})+h_{\lambda}(\overrightarrow{v})\) ,

  • and \( h_{\lambda}(\alpha \overrightarrow{u})=\alpha h_{\lambda}(\overrightarrow{u})\) .

Because of these two properties, we say that:

The homotheties are linear mappings in \( \mathbb{P}\) .

Proof (of theorem 26)

Assume \( \lambda\in\mathbb{R}\) is a real number, and consider the homothety of factor \( \lambda\) in \( \mathbb{P}\) :

\[ \begin{equation}\begin{matrix}h_{\lambda}:&\mathbb{P}&\rightarrow&\mathbb{P}\\ \\ &\overrightarrow{u}&\mapsto&h_{\lambda}(\overrightarrow{u})=\lambda\overrightarrow{u}\end{matrix}\end{equation} \]

(10)

Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements, and \( \alpha\in\mathbb{R}\) is a real number.

Then the following calculations may be performed:

  • Thanks to the first distributivity law:

    \( h_{\lambda}(\overrightarrow{u}+\overrightarrow{v}) =\lambda(\overrightarrow{u}+\overrightarrow{v}) =\lambda\overrightarrow{u}+\lambda\overrightarrow{v} =h_{\lambda}(\overrightarrow{u})+h_{\lambda}(\overrightarrow{v})\) .

  • And thanks to the associativity law and the commutativity of the multiplication of real numbers:

    \( h_{\lambda}(\alpha \overrightarrow{u}) =(\lambda\alpha) \overrightarrow{u} =(\alpha\lambda) \overrightarrow{u} =\alpha(\lambda \overrightarrow{u}) =\alpha h_{\lambda}(\overrightarrow{u})\) .
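Theorem 26 says each homothety is a linear mapping; the sketch below models \( h_{\lambda}\) as a Python closure (our own modelling choice) and checks both linearity properties:

```python
def smul(lam, u):
    return [lam * u[0], lam * u[1]]

def vadd(u, v):
    return [u[0] + v[0], u[1] + v[1]]

def homothety(lam):
    """The homothety of factor lam in the plane, modelled as a closure."""
    return lambda u: smul(lam, u)

h2 = homothety(2)
u, v, alpha = [1, -3], [4, 2], 5

assert h2(vadd(u, v)) == vadd(h2(u), h2(v))      # h(u + v) = h(u) + h(v)
assert h2(smul(alpha, u)) == smul(alpha, h2(u))  # h(α·u) = α·h(u)
```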

6.3.2 Sum and composition of homotheties in \( \mathbb{P}\)

Theorem 27

Assume \( (\lambda,\mu)\in\mathbb{R}^2\) are real numbers, and consider the
homotheties of factors \( \lambda\) and \( \mu\) in \( \mathbb{P}\) :

\[ \begin{equation}\begin{matrix}h_{\lambda}:&\mathbb{P}&\rightarrow&\mathbb{P}\\ \\ &\overrightarrow{u}&\mapsto&h_{\lambda}(\overrightarrow{u})=\lambda\overrightarrow{u}\end{matrix}\end{equation} \]

(11)

and

\[ \begin{equation}\begin{matrix}h_{\mu}:&\mathbb{P}&\rightarrow&\mathbb{P}\\ \\ &\overrightarrow{u}&\mapsto&h_{\mu}(\overrightarrow{u})=\mu\overrightarrow{u}\end{matrix}\end{equation} \]

(12)

Assume \( \overrightarrow{u}\in\mathbb{P}\) is a column vector with two real elements.

Then the following assertions hold:

  • \( h_{\lambda+\mu}(\overrightarrow{u})=h_{\lambda} (\overrightarrow{u})+h_{\mu}(\overrightarrow{u})\) ,

  • and \( h_{\lambda\mu}(\overrightarrow{u})=h_{\lambda}(h_{\mu}(\overrightarrow{u}))\) .

We say that \( h_{\lambda+\mu}=h_{\lambda} +h_{\mu}\) and \( h_{\lambda\mu}=h_{\lambda}\circ h_{\mu}\) .

And we may deduce from theorem 27 that:

The set of the homotheties in \( \mathbb{P}\) is, with the addition \( +\) of mappings and the composition \( \circ\) of mappings, a commutative field isomorphic to the commutative field \( (\mathbb{R},+,\times)\) .

Proof (of theorem 27)

Assume \( (\lambda,\mu)\in\mathbb{R}^2\) are real numbers, and consider the homotheties of factors \( \lambda\) and \( \mu\) in \( \mathbb{P}\) :

\[ \begin{equation}\begin{matrix}h_{\lambda}:&\mathbb{P}&\rightarrow&\mathbb{P}\\ \\ &\overrightarrow{u}&\mapsto&h_{\lambda}(\overrightarrow{u})=\lambda\overrightarrow{u}\end{matrix}\end{equation} \]

(13)

and

\[ \begin{equation}\begin{matrix}h_{\mu}:&\mathbb{P}&\rightarrow&\mathbb{P}\\ \\ &\overrightarrow{u}&\mapsto&h_{\mu}(\overrightarrow{u})=\mu\overrightarrow{u}\end{matrix}\end{equation} \]

(14)

Assume \( \overrightarrow{u}\in\mathbb{P}\) is a column vector with two real elements.

Then the following calculations may be performed:

  • Thanks to the second distributivity law:

    \( h_{\lambda+\mu}(\overrightarrow{u}) =(\lambda+\mu)\overrightarrow{u} =\lambda\overrightarrow{u}+\mu\overrightarrow{u} =h_{\lambda} (\overrightarrow{u})+h_{\mu}(\overrightarrow{u})\) .

  • And thanks to the associativity law:

    \( h_{\lambda\mu}(\overrightarrow{u}) =(\lambda\mu)\overrightarrow{u} =\lambda(\mu\overrightarrow{u}) =h_{\lambda}(h_{\mu}(\overrightarrow{u}))\) .
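The two identities of theorem 27 (sum and composition of homotheties) can be replayed with the same closure modelling (the names are our own):

```python
def smul(lam, u):
    return [lam * u[0], lam * u[1]]

def vadd(u, v):
    return [u[0] + v[0], u[1] + v[1]]

def homothety(lam):
    return lambda u: smul(lam, u)

lam, mu = 3, -2
u = [5, 7]

# h_{λ+μ}(u) = h_λ(u) + h_μ(u)
assert homothety(lam + mu)(u) == vadd(homothety(lam)(u), homothety(mu)(u))
# h_{λμ}(u) = h_λ(h_μ(u))
assert homothety(lam * mu)(u) == homothety(lam)(homothety(mu)(u))
```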

7 Conclusion

We have built the canonical vector plane \( \mathbb{P}\) , with (column) vectors that we can not only draw, but also add and subtract together, and multiply and divide by scalars, with useful properties that give \( (\mathbb{P},+,\cdot)\) the structure of a vector space.

And we discovered our first linear mappings in \( \mathbb{P}\) , the homotheties.
