
First published on Sunday, Jun 30, 2024 and last modified on Thursday, Apr 10, 2025

The Dot Product and the Norm in the Euclidean Plane

Fabienne Chaplais, Mathedu SAS

1 Introduction

In this document, we introduce two interrelated notions for vectors in the plane \( \mathbb{P}\) : the dot product of two vectors and the norm of a vector.

2 The Dot Product of Two Vectors

2.1 Definition of the Dot Product

Definition 1

Assume \( (x_1,y_1,x_2,y_2)\in\mathbb{R}^4\) are real numbers, and consider the two column vectors with 2 elements: \( \overrightarrow{u}_1=\begin{bmatrix}x_1\\y_1\end{bmatrix}\) and \( \overrightarrow{u}_2=\begin{bmatrix}x_2\\y_2\end{bmatrix}\) .

Then the dot product of \( \overrightarrow{u}_1\) and \( \overrightarrow{u}_2\) is defined the following way:

\[ \overrightarrow{u}_1\cdot\overrightarrow{u}_2=x_1x_2+y_1y_2 \tag{1} \]
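As an illustration, definition (1) translates directly into code. The sketch below is ours, not part of the document; the helper name `dot` is our own choice:

```python
def dot(u, v):
    """Dot product of two plane vectors given as (x, y) pairs."""
    return u[0] * v[0] + u[1] * v[1]

# For example, u1 = (1, 2) and u2 = (3, 4) give 1*3 + 2*4 = 11
print(dot((1, 2), (3, 4)))  # 11
```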

2.2 Remarkable Dot Products

Theorem 1

Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\y\end{bmatrix}\) with 2 elements.

Consider the canonical base of the vector plane \( \mathbb{P}\) : \( \overrightarrow{i}=\begin{bmatrix}1\\0\end{bmatrix}\) and \( \overrightarrow{j}=\begin{bmatrix}0\\1\end{bmatrix}\) .

Then the following assertions hold:

  1. \( \overrightarrow{u}\cdot\overrightarrow{i}=x\) , the abscissa of \( \overrightarrow{u}\) ,

  2. \( \overrightarrow{u}\cdot\overrightarrow{j}=y\) , the ordinate of \( \overrightarrow{u}\) ,

  3. \( \overrightarrow{i}\cdot\overrightarrow{j}=0\) , which means that \( \overrightarrow{i}\) and \( \overrightarrow{j}\) are orthogonal (see next section).

Proof

Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\y\end{bmatrix}\) with 2 elements.

Consider the canonical base of the vector plane \( \mathbb{P}\) : \( \overrightarrow{i}=\begin{bmatrix}1\\0\end{bmatrix}\) and \( \overrightarrow{j}=\begin{bmatrix}0\\1\end{bmatrix}\) .

Then the following calculations may be done:

  1. \( \overrightarrow{u}\cdot\overrightarrow{i} =x\times 1+y\times 0 =x\)

  2. \( \overrightarrow{u}\cdot\overrightarrow{j} =x\times 0+y\times 1 =y\)

  3. \( \overrightarrow{i}\cdot\overrightarrow{j} =1\times 0+0\times 1 =0\)
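The three computations above can be verified with a short script (our own sketch; `dot` is a hypothetical helper, and the sample vector is arbitrary):

```python
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

i, j = (1, 0), (0, 1)   # the canonical base
u = (3.5, -2.0)         # an arbitrary sample vector

assert dot(u, i) == u[0]  # the abscissa of u
assert dot(u, j) == u[1]  # the ordinate of u
assert dot(i, j) == 0     # i and j are orthogonal
```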

2.3 A symmetric bilinear form

Theorem 2

Assume \( (\overrightarrow{u},\overrightarrow{v},\overrightarrow{w})\in\mathbb{P}^3\) are column vectors with two real elements, and \( \alpha\in\mathbb{R}\) is a real number.

Then the following assertions hold, stating that the dot product is a symmetric bilinear form:

  1. \( \overrightarrow{u}\cdot\overrightarrow{v}=\overrightarrow{v}\cdot\overrightarrow{u}\) ,

  2. \( (\overrightarrow{u}+\overrightarrow{v})\cdot\overrightarrow{w} =\overrightarrow{u}\cdot\overrightarrow{w}+\overrightarrow{v}\cdot\overrightarrow{w}\) ,

  3. \( \overrightarrow{w}\cdot(\overrightarrow{u}+\overrightarrow{v}) =\overrightarrow{w}\cdot\overrightarrow{u}+\overrightarrow{w}\cdot\overrightarrow{v}\) ,

  4. \( (\alpha\overrightarrow{u})\cdot\overrightarrow{v}=\alpha(\overrightarrow{u}\cdot\overrightarrow{v})\) ,

  5. and \( \overrightarrow{u}\cdot(\alpha\overrightarrow{v})=\alpha(\overrightarrow{u}\cdot\overrightarrow{v})\) .

Proof

Assume \( (x_1,y_1,x_2,y_2,x_{3},y_{3})\in\mathbb{R}^6\) are real numbers, and consider the three column vectors with 2 elements: \( \overrightarrow{u}=\begin{bmatrix}x_1\\y_1\end{bmatrix}\) , \( \overrightarrow{v}=\begin{bmatrix}x_2\\y_2\end{bmatrix}\) and \( \overrightarrow{w}=\begin{bmatrix}x_3\\y_3\end{bmatrix}\) .

Assume \( \alpha\in\mathbb{R}\) is a real number.

Then the following calculations may be done:

  1. Because the multiplication in \( \mathbb{R}\) is commutative, we have:

    \( \overrightarrow{u}\cdot\overrightarrow{v} =x_{1}x_{2}+y_{1}y_{2} =x_{2}x_{1}+y_{2}y_{1} =\overrightarrow{v}\cdot\overrightarrow{u}\)

  2. Because, in \( \mathbb{R}\) , multiplication is distributive over addition, we have:

    \( (\overrightarrow{u}+\overrightarrow{v})\cdot\overrightarrow{w} =(x_{1}+x_{2})x_{3}+ (y_{1}+y_{2})y_{3} =x_{1}x_{3}+x_{2}x_{3}+ y_{1}y_{3}+y_{2}y_{3} =\overrightarrow{u}\cdot\overrightarrow{w}+\overrightarrow{v}\cdot\overrightarrow{w}\)

  3. Because, in \( \mathbb{R}\) , multiplication is distributive over addition, we have:

    \( \overrightarrow{w}\cdot(\overrightarrow{u}+\overrightarrow{v}) =x_{3}(x_{1}+x_{2})+y_{3}(y_{1}+y_{2}) =x_{3}x_{1}+x_{3}x_{2}+y_{3}y_{1}+y_{3}y_{2} =\overrightarrow{w}\cdot\overrightarrow{u}+\overrightarrow{w}\cdot\overrightarrow{v}\)

  4. Because the multiplication in \( \mathbb{R}\) is associative, we have:

    \( (\alpha\overrightarrow{u})\cdot\overrightarrow{v} =(\alpha x_{1})x_{2}+ (\alpha y_{1})y_{2} =\alpha (x_{1}x_{2})+ \alpha (y_{1}y_{2}) =\alpha(x_{1}x_{2}+ y_{1}y_{2}) =\alpha(\overrightarrow{u}\cdot\overrightarrow{v})\)

  5. Because the multiplication in \( \mathbb{R}\) is commutative and associative, we have:

    \( \overrightarrow{u}\cdot(\alpha\overrightarrow{v}) =x_{1}(\alpha x_{2})+y_{1}(\alpha y_{2}) =\alpha (x_{1}x_{2})+\alpha (y_{1}y_{2}) =\alpha(x_{1}x_{2}+y_{1}y_{2}) =\alpha(\overrightarrow{u}\cdot\overrightarrow{v})\)
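The five properties of Theorem 2 can be sanity-checked numerically on sample vectors of our choosing (the helpers `dot`, `add` and `scale` are our own names, not from the document):

```python
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(a, u):
    return (a * u[0], a * u[1])

u, v, w, a = (1, 2), (3, -1), (-2, 5), 4  # arbitrary samples

assert dot(u, v) == dot(v, u)                      # symmetry
assert dot(add(u, v), w) == dot(u, w) + dot(v, w)  # linearity on the left
assert dot(w, add(u, v)) == dot(w, u) + dot(w, v)  # linearity on the right
assert dot(scale(a, u), v) == a * dot(u, v)
assert dot(u, scale(a, v)) == a * dot(u, v)
```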

2.4 The Euclidean Plane Structure of \( \mathbb{P}\)

Definition 2

Because of the following facts:

  • \( (\mathbb{P},+,.)\) has the structure of a vector space of dimension \( 2\) ,
  • and the dot product is a symmetric bilinear form,

then we say that:

The vector plane \( \mathbb{P}\) is a euclidean plane.

This justifies the fact that we have already called it so.

3 The Orthogonality of Two Vectors

The orthogonality of vectors is a major application of the dot product.

3.1 Definition of the orthogonality of two vectors

Definition 3

Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements.

Then \( \overrightarrow{u}\) and \( \overrightarrow{v}\) are said to be orthogonal if and only if their dot product \( \overrightarrow{u}\cdot\overrightarrow{v}\) is equal to \( 0\) . This is denoted \( \overrightarrow{u}\bot\overrightarrow{v}\) .

If two vectors are orthogonal, then their graphical representations are perpendicular: they make a right angle.

3.2 The orthogonality of the canonical base

Figure 1. The orthogonality of the canonical base

Theorem 3

Consider the canonical base of the euclidean plane \( \mathbb{P}\) : \( \overrightarrow{i}=\begin{bmatrix}1\\0\end{bmatrix}\) and \( \overrightarrow{j}=\begin{bmatrix}0\\1\end{bmatrix}\) .

Then \( \overrightarrow{i}\bot\overrightarrow{j}\) .

Proof

This is because \( \overrightarrow{i}\cdot\overrightarrow{j}=0\) .

That is why they are drawn perpendicular to each other, making a right angle, in the figures.

3.3 Other examples of orthogonal vectors

First example. Here is another example of orthogonal vectors.

Figure 2. First example of orthogonal vectors

Theorem 4

Consider the two following vectors in the euclidean plane \( \mathbb{P}\) :
\( \overrightarrow{u}_{1}=\begin{bmatrix}1\\1\end{bmatrix}\) and \( \overrightarrow{v}_{1}=\begin{bmatrix}-1\\1\end{bmatrix}\) .

Then \( \overrightarrow{u}_{1}\bot\overrightarrow{v}_{1}\) .

Proof

Consider the two following vectors in the euclidean plane \( \mathbb{P}\) :
\( \overrightarrow{u}_{1}=\begin{bmatrix}1\\1\end{bmatrix}\) and \( \overrightarrow{v}_{1}=\begin{bmatrix}-1\\1\end{bmatrix}\) .

Then \( \overrightarrow{u}_{1}\cdot\overrightarrow{v}_{1} = 1\times(-1)+1\times 1=(-1)+1=0\) .

Second example. Here is another example of orthogonal vectors.

Figure 3. Second example of orthogonal vectors

Theorem 5

Consider the two following vectors in the euclidean plane \( \mathbb{P}\) :
\( \overrightarrow{u}_{2}=\begin{bmatrix}1\\2\end{bmatrix}\) and \( \overrightarrow{v}_{2}=\begin{bmatrix}2\\{-1}\end{bmatrix}\) .

Then \( \overrightarrow{u}_{2}\bot\overrightarrow{v}_{2}\) .

Proof

Consider the two following vectors in the euclidean plane \( \mathbb{P}\) :
\( \overrightarrow{u}_{2}=\begin{bmatrix}1\\2\end{bmatrix}\) and \( \overrightarrow{v}_{2}=\begin{bmatrix}2\\{-1}\end{bmatrix}\) .

Then \( \overrightarrow{u}_{2}\cdot\overrightarrow{v}_{2} = 1\times 2+2\times(-1)=2+(-2)=0\) .

Third example. Here is another example of orthogonal vectors.

Figure 4. Third example of orthogonal vectors

Theorem 6

Consider the two following vectors in the euclidean plane \( \mathbb{P}\) :
\( \overrightarrow{u}_{3}=\begin{bmatrix}1\\{-2}\end{bmatrix}\) and \( \overrightarrow{v}_{3}=\begin{bmatrix}-2\\{-1}\end{bmatrix}\) .

Then \( \overrightarrow{u}_{3}\bot\overrightarrow{v}_{3}\) .

Proof

Consider the two following vectors in the euclidean plane \( \mathbb{P}\) :
\( \overrightarrow{u}_{3}=\begin{bmatrix}1\\{-2}\end{bmatrix}\) and \( \overrightarrow{v}_{3}=\begin{bmatrix}-2\\{-1}\end{bmatrix}\) .

Then \( \overrightarrow{u}_{3}\cdot\overrightarrow{v}_{3} = 1\times (-2)+(-2)\times(-1)=(-2)+2=0\) .
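The three example pairs of Theorems 4, 5 and 6 can be checked in one pass (a sketch of ours; `dot` is our own helper):

```python
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

pairs = [((1, 1), (-1, 1)),     # Theorem 4
         ((1, 2), (2, -1)),     # Theorem 5
         ((1, -2), (-2, -1))]   # Theorem 6

for u, v in pairs:
    assert dot(u, v) == 0  # each pair is orthogonal
```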

4 The Norm of a Vector

We define the norm of a vector as a function of its coordinates, but it is a direct function of the dot product of the vector with itself.

As a consequence, the euclidean plane \( \mathbb{P}\) has a dot product and a norm, in addition to the addition of vectors and the external product of a vector by a scalar.

4.1 Definition of the Norm of a Vector

Definition 4

Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector with 2 elements: \( \overrightarrow{u}=\begin{bmatrix}x\\y\end{bmatrix}\) .

Then the norm of the vector \( \overrightarrow{u}\) is defined the following way:

\[ \left\| \overrightarrow{u} \right\|=\sqrt{x^2+y^2} \tag{2} \]
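Definition (2) also translates directly into code (our own sketch; `norm` is our name for the function):

```python
import math

def norm(u):
    """Euclidean norm of a plane vector given as an (x, y) pair."""
    return math.sqrt(u[0] ** 2 + u[1] ** 2)

print(norm((3, 4)))  # 5.0, since sqrt(9 + 16) = 5
```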

4.2 Norms of Remarkable Vectors

Theorem 7

Consider the null vector and the canonical base of the vector plane \( \mathbb{P}\) : \( \overrightarrow{0}=\begin{bmatrix}0\\0\end{bmatrix}\) , \( \overrightarrow{i}=\begin{bmatrix}1\\0\end{bmatrix}\) and \( \overrightarrow{j}=\begin{bmatrix}0\\1\end{bmatrix}\) .

Then the following assertions hold:

  1. The norm of the null vector is \( \left\|\overrightarrow{0} \right\|=0\) .

  2. The norm of the vector \( \overrightarrow{i}\) is \( \left\|\overrightarrow{i} \right\|=1\) .

  3. The norm of the vector \( \overrightarrow{j}\) is \( \left\|\overrightarrow{j} \right\|=1\) as well.

Proof

Consider the null vector and the canonical base of the vector plane \( \mathbb{P}\) : \( \overrightarrow{0}=\begin{bmatrix}0\\0\end{bmatrix}\) , \( \overrightarrow{i}=\begin{bmatrix}1\\0\end{bmatrix}\) and \( \overrightarrow{j}=\begin{bmatrix}0\\1\end{bmatrix}\) .

Then the following calculations may be performed:

  1. \( \left\|\overrightarrow{0}\right\|=\sqrt{0^2+0^2}=\sqrt{0}=0\) .

  2. \( \left\|\overrightarrow{i}\right\|=\sqrt{1^{2}+0^{2}}=\sqrt{1}=1\) .

  3. \( \left\|\overrightarrow{j}\right\|=\sqrt{0^{2}+1^{2}}=\sqrt{1}=1\) .

Definition 5

Because of the following facts:

  • \( \overrightarrow{i}\bot\overrightarrow{j}\) ,

  • and \( \left\|\overrightarrow{i} \right\|=\left\|\overrightarrow{j} \right\|=1\) ,

then we say that:

The canonical base \( (\overrightarrow{i},\overrightarrow{j})\) is an orthonormal base of the euclidean plane \( \mathbb{P}\) .

4.3 Joint properties of the norm and the dot product

Theorem 8

Assume \( \overrightarrow{u}\in\mathbb{P}\) is a column vector with two real elements and \( \lambda\in\mathbb{R}\) is a real number.

Then the following assertions hold:

  1. The norm of a vector is based on the dot product of the vector with itself: \( \left\| \overrightarrow{u} \right\|=\sqrt{\overrightarrow{u}\cdot\overrightarrow{u}}\) .

  2. \( \left\| \overrightarrow{u} \right\|=0\) if and only if \( \overrightarrow{u}=\overrightarrow{0}\) .

  3. Hence, if \( \overrightarrow{u}\neq\overrightarrow{0}\) then \( \left\| \overrightarrow{u} \right\|\neq 0\) .

  4. \( \overrightarrow{u}\cdot\overrightarrow{u}=\left\| \overrightarrow{u} \right\|^2\) and \( (-\overrightarrow{u})\cdot\overrightarrow{u}=\overrightarrow{u}\cdot(-\overrightarrow{u}) =-\left\| \overrightarrow{u} \right\|^2\) .

  5. Multiplying a vector by a scalar \( \lambda\) multiplies its norm by the absolute value \( \left| \lambda \right| \) of \( \lambda\) : \( \left\| \lambda\overrightarrow{u} \right\| =\left| \lambda \right| \left\| \overrightarrow{u} \right\|\) .

Proof

Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector with 2 elements: \( \overrightarrow{u}=\begin{bmatrix}x\\y\end{bmatrix}\) .

Then the following calculations may be performed.

  1. \( \sqrt{\overrightarrow{u}\cdot\overrightarrow{u}}=\sqrt{x^{2}+y^{2}}=\left\| \overrightarrow{u} \right\|\)

  2. If \( \overrightarrow{u}=\overrightarrow{0}\) , then \( \left\| \overrightarrow{u} \right\|=0\) .

    Conversely, if \( \left\| \overrightarrow{u} \right\|=0\) , then \( x^{2}+y^{2}=0\) , with \( x^{2}\geq 0\) and \( y^{2}\geq 0\) , which leads to \( x=y=0\) , i.e. \( \overrightarrow{u}=\overrightarrow{0}\) .

  3. If \( \overrightarrow{u}\neq\overrightarrow{0}\) , then \( \left\| \overrightarrow{u} \right\|\) cannot be \( 0\) , because of the previous item.

  4. Squaring the identity of the first item gives \( \overrightarrow{u}\cdot\overrightarrow{u}=\left\| \overrightarrow{u} \right\|^2\) .

    Moreover, \( (-\overrightarrow{u})\cdot\overrightarrow{u} =(-x)x+(-y)y=-(x^{2}+y^{2}) =-\left\| \overrightarrow{u} \right\|^2\) .

    And \( \overrightarrow{u}\cdot(-\overrightarrow{u}) =x(-x)+y(-y)=-(x^{2}+y^{2}) =-\left\| \overrightarrow{u} \right\|^2\)

  5. \( \left\| \lambda\overrightarrow{u} \right\| =\sqrt{(\lambda x)^{2}+(\lambda y)^{2}} =\sqrt{\lambda^{2}}\sqrt{x^{2}+y^{2}} =\left| \lambda \right| \left\| \overrightarrow{u} \right\|\) .
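These joint properties can be spot-checked numerically on a sample vector and scalar (our own sketch, using `math.isclose` to absorb floating-point rounding):

```python
import math

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def norm(u):
    return math.sqrt(u[0] ** 2 + u[1] ** 2)

u, lam = (3, 4), -2.5  # arbitrary samples

assert math.isclose(norm(u), math.sqrt(dot(u, u)))   # item 1: norm from dot product
assert math.isclose(dot(u, u), norm(u) ** 2)         # item 4
neg_u = (-u[0], -u[1])
assert math.isclose(dot(neg_u, u), -norm(u) ** 2)    # item 4, opposite vector
lam_u = (lam * u[0], lam * u[1])
assert math.isclose(norm(lam_u), abs(lam) * norm(u)) # item 5: scaling
```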

4.4 Remarkable identities

Theorem 9

Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements.

Then the following identities hold:

  1. \( \left\| \overrightarrow{u}+\overrightarrow{v} \right\|^2 =\left\| \overrightarrow{u} \right\|^2 +\left\| \overrightarrow{v} \right\|^2 +2\overrightarrow{u}\cdot\overrightarrow{v}\)

  2. \( \left\| \overrightarrow{u}-\overrightarrow{v} \right\|^2 =\left\| \overrightarrow{u} \right\|^2 +\left\| \overrightarrow{v} \right\|^2 -2\overrightarrow{u}\cdot\overrightarrow{v}\)

  3. \( (\overrightarrow{u}+\overrightarrow{v})\cdot(\overrightarrow{u}-\overrightarrow{v} ) =\left\| \overrightarrow{u} \right\|^2-\left\| \overrightarrow{v} \right\|^2\)

Let us recall the following lemma, which states the analogous remarkable identities in \( \mathbb{R}\) .

Lemma 1

Assume \( (x,y)\in\mathbb{R}^{2}\) are real numbers.

Then the following identities hold:

  1. \( (x+y)^{2}=x^{2}+y^{2}+2xy\)

  2. \( (x-y)^{2}=x^{2}+y^{2}-2xy\)

  3. \( (x+y)(x-y)=x^{2}-y^{2}\)

These identities follow directly from the distributivity of multiplication over addition and subtraction in \( \mathbb{R}\) .

Proof (of theorem 9)

Assume \( (x_1,y_1,x_2,y_2)\in\mathbb{R}^4\) are real numbers, and consider the two column vectors with 2 elements: \( \overrightarrow{u}=\begin{bmatrix}x_1\\y_1\end{bmatrix}\) and \( \overrightarrow{v}=\begin{bmatrix}x_2\\y_2\end{bmatrix}\)

Then we may apply Lemma 1 in the following calculations:

  1. \( \left\| \overrightarrow{u}+\overrightarrow{v} \right\|^2=(x_{1}+x_{2})^{2}+(y_{1}+y_{2})^{2} =x_{1}^{2}+x_{2}^{2}+2x_{1}x_{2}+y_{1}^{2}+y_{2}^{2}+2y_{1}y_{2}\)

    \( =x_{1}^{2}+y_{1}^{2}+x_{2}^{2}+y_{2}^{2}+2(x_{1}x_{2}+y_{1}y_{2}) =\left\| \overrightarrow{u} \right\|^2 +\left\| \overrightarrow{v} \right\|^2 +2\overrightarrow{u}\cdot\overrightarrow{v}\)

  2. \( \left\| \overrightarrow{u}-\overrightarrow{v} \right\|^2=(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2} =x_{1}^{2}+x_{2}^{2}-2x_{1}x_{2}+y_{1}^{2}+y_{2}^{2}-2y_{1}y_{2}\)

    \( =x_{1}^{2}+y_{1}^{2}+x_{2}^{2}+y_{2}^{2}-2(x_{1}x_{2}+y_{1}y_{2}) =\left\| \overrightarrow{u} \right\|^2 +\left\| \overrightarrow{v} \right\|^2 -2\overrightarrow{u}\cdot\overrightarrow{v}\)

  3. \( (\overrightarrow{u}+\overrightarrow{v})\cdot(\overrightarrow{u}-\overrightarrow{v} ) =(x_{1}+x_{2})(x_{1}-x_{2})+(y_{1}+y_{2})(y_{1}-y_{2})\)

    \( =x_{1}^{2}-x_{2}^{2}+y_{1}^{2}-y_{2}^{2} =x_{1}^{2}+y_{1}^{2}-(x_{2}^{2}+y_{2}^{2}) =\left\| \overrightarrow{u} \right\|^2-\left\| \overrightarrow{v} \right\|^2\)
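The three identities of Theorem 9 can be verified on sample vectors (our own sketch; `dot` is our helper, and the sample values are arbitrary):

```python
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

u, v = (2, -1), (3, 5)           # arbitrary samples
s = (u[0] + v[0], u[1] + v[1])   # u + v
d = (u[0] - v[0], u[1] - v[1])   # u - v

assert dot(s, s) == dot(u, u) + dot(v, v) + 2 * dot(u, v)  # identity 1
assert dot(d, d) == dot(u, u) + dot(v, v) - 2 * dot(u, v)  # identity 2
assert dot(s, d) == dot(u, u) - dot(v, v)                  # identity 3
```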

5 The Norm of a Vector as its Length

The geometrical meaning of the norm of a vector is that it may be seen as its length.

5.1 The Pythagorean Theorem

We recall here the Pythagorean theorem for a right triangle.

Figure 5. The Pythagorean Theorem

Theorem 10

Consider a triangle \( (ABC)\) with a right angle at \( C\) , and denote:

  1. \( a=BC\) the length of the side \( (BC)\) of the right angle,

  2. \( b=AC\) the length of the side \( (AC)\) of the right angle,

  3. and \( c=AB\) the length of the hypotenuse \( (AB)\) .

Then \( c^2=a^2+b^2\) .

This is stated the following way:

In a right triangle, the square of the hypotenuse is the sum of the squares of the two other sides.
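For instance, the classic 3-4-5 triangle illustrates the statement (a minimal numeric check of ours):

```python
import math

# Classic 3-4-5 right triangle: legs a = 3 and b = 4
a, b = 3, 4
c = math.sqrt(a ** 2 + b ** 2)  # length of the hypotenuse
print(c)  # 5.0
```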

5.2 The norm of a vector is its length

5.2.1 First quadrant of the plane

Figure 6. The norm of a vector is its length: first quadrant of the plane

Assume \( (x,y)\in{\mathbb{R}_+^*}^2\) are real numbers such that \( x>0\) and \( y>0\) , and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\y\end{bmatrix}\) in the first quadrant of the plane and its norm \( \left\| \overrightarrow{u} \right\|=\sqrt{x^2+y^2}\) .

Denote \( O\) the origin of the vector, \( M\) the end of the vector, and \( P\) the orthogonal projection of \( M\) on the \( x\) axis.

Then the following assertions hold:

  • The triangle \( (OPM)\) has a right angle at \( P\) ,

  • \( x\) is the length of the segment \( (OP)\) ,

  • \( y\) is the length of the segment \( (PM)\) ,

  • and, because of the Pythagorean theorem, \( \left\| \overrightarrow{u} \right\|\) is the length of the hypotenuse \( (OM)\) , that is the length of the vector \( \overrightarrow{u}\) .

5.2.2 Second quadrant of the plane

Figure 7. The norm of a vector is its length: second quadrant of the plane

Assume \( (x,y)\in\mathbb{R}_-^*\times\mathbb{R}_+^*\) are real numbers such that \( x<0\) and \( y>0\) , and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\y\end{bmatrix}\) in the second quadrant of the plane and its norm \( \left\| \overrightarrow{u} \right\|=\sqrt{x^2+y^2}=\sqrt{(-x)^2+y^2}\) .

Denote \( O\) the origin of the vector, \( M\) the end of the vector, and \( P\) the orthogonal projection of \( M\) on the \( x\) axis.

Then the following assertions hold:

  • The triangle \( (OPM)\) has a right angle at \( P\) ,

  • \( -x\) is the length of the segment \( (OP)\) ,

  • \( y\) is the length of the segment \( (PM)\) ,

  • and, because of the Pythagorean theorem, \( \left\| \overrightarrow{u} \right\|\) is the length of the hypotenuse \( (OM)\) , that is the length of the vector \( \overrightarrow{u}\) .

5.2.3 Third quadrant of the plane

Figure 8. The norm of a vector is its length: third quadrant of the plane

Assume \( (x,y)\in{\mathbb{R}_-^*}^2\) are real numbers such that \( x<0\) and \( y<0\) , and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\y\end{bmatrix}\) in the third quadrant of the plane and its norm \( \left\| \overrightarrow{u} \right\|=\sqrt{x^2+y^2}=\sqrt{(-x)^2+(-y)^2}\) .

Denote \( O\) the origin of the vector, \( M\) the end of the vector, and \( P\) the orthogonal projection of \( M\) on the \( x\) axis.

Then the following assertions hold:

  • The triangle \( (OPM)\) has a right angle at \( P\) ,

  • \( -x\) is the length of the segment \( (OP)\) ,

  • \( -y\) is the length of the segment \( (PM)\) ,

  • and, because of the Pythagorean theorem, \( \left\| \overrightarrow{u} \right\|\) is the length of the hypotenuse \( (OM)\) , that is the length of the vector \( \overrightarrow{u}\) .

5.2.4 Fourth quadrant of the plane

Figure 9. The norm of a vector is its length: fourth quadrant of the plane

Assume \( (x,y)\in\mathbb{R}_+^*\times\mathbb{R}_-^*\) are real numbers such that \( x>0\) and \( y<0\) , and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\y\end{bmatrix}\) in the fourth quadrant of the plane and its norm \( \left\| \overrightarrow{u} \right\|=\sqrt{x^2+y^2}=\sqrt{x^2+(-y)^2}\) .

Denote \( O\) the origin of the vector, \( M\) the end of the vector, and \( P\) the orthogonal projection of \( M\) on the \( x\) axis.

Then the following assertions hold:

  • The triangle \( (OPM)\) has a right angle at \( P\) ,

  • \( x\) is the length of the segment \( (OP)\) ,

  • \( -y\) is the length of the segment \( (PM)\) ,

  • and, because of the Pythagorean theorem, \( \left\| \overrightarrow{u} \right\|\) is the length of the hypotenuse \( (OM)\) , that is the length of the vector \( \overrightarrow{u}\) .

5.2.5 Positively along the \( x\) axis

Figure 10. The norm of a vector is its length, positively along the \( x\) axis

Assume \( x\in\mathbb{R}_+^*\) is a real number such that \( x>0\) , and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\0\end{bmatrix}\) positively along the \( x\) axis and its norm \( \left\| \overrightarrow{u} \right\|=\sqrt{x^2+0^2}=x\) .

Denote \( O\) the origin of the vector, \( M\) the end of the vector.

Then the following assertion holds:

  • \( \left\| \overrightarrow{u} \right\|\) is the length of the vector \( \overrightarrow{u}\) .

5.2.6 Positively along the \( y\) axis

Figure 11. The norm of a vector is its length, positively along the \( y\) axis

Assume \( y\in\mathbb{R}_+^*\) is a real number such that \( y>0\) , and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}0\\y\end{bmatrix}\) positively along the \( y\) axis and its norm \( \left\| \overrightarrow{u} \right\|=\sqrt{0^2+y^2}=y\) .

Denote \( O\) the origin of the vector, \( M\) the end of the vector.

Then the following assertion holds:

  • \( \left\| \overrightarrow{u} \right\|\) is the length of the vector \( \overrightarrow{u}\) .

5.2.7 Negatively along the \( x\) axis

Figure 12. The norm of a vector is its length, negatively along the \( x\) axis

Assume \( x\in\mathbb{R}_-^*\) is a real number such that \( x<0\) , and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\0\end{bmatrix}\) negatively along the \( x\) axis and its norm \( \left\| \overrightarrow{u} \right\|=\sqrt{x^2+0^2}=-x\) .

Denote \( O\) the origin of the vector, \( M\) the end of the vector.

Then the following assertion holds:

  • \( \left\| \overrightarrow{u} \right\|\) is the length of the vector \( \overrightarrow{u}\) .

5.2.8 Negatively along the \( y\) axis

Figure 13. The norm of a vector is its length, negatively along the \( y\) axis

Assume \( y\in\mathbb{R}_-^*\) is a real number such that \( y<0\) , and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}0\\y\end{bmatrix}\) negatively along the \( y\) axis and its norm \( \left\| \overrightarrow{u} \right\|=\sqrt{0^2+y^2}=-y\) .

Denote \( O\) the origin of the vector, \( M\) the end of the vector.

Then the following assertion holds:

  • \( \left\| \overrightarrow{u} \right\|\) is the length of the vector \( \overrightarrow{u}\) .

5.2.9 The null vector

Consider the null vector \( \overrightarrow{0}=\begin{bmatrix}0\\0\end{bmatrix}\) and its norm \( \left\| \overrightarrow{0} \right\|=0\) .

Then the following assertion holds:

  • The norm \( \left\| \overrightarrow{0} \right\|\) of the null vector \( \overrightarrow{0}\) is its length \( 0\) .

5.2.10 Conclusion

We have seen that, in any case:

The norm of a vector is its length.

5.3 Effect of the multiplication of a non-zero vector by a scalar on its length

5.3.1 Multiplication by the scalar \( 1\)

Figure 14. Effect of the multiplication of a non-zero vector by the scalar \( 1\) on its length

Theorem 11

Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) .

Then, if we multiply the vector \( \overrightarrow{u}\) by the scalar \( 1\) :

  1. We obtain the vector \( \overrightarrow{u}\) : \( 1.\overrightarrow{u}=\overrightarrow{u}\) .

  2. The resulting vector has the same length as \( \overrightarrow{u}\) : \( \left\| 1.\overrightarrow{u} \right\|=\left\| \overrightarrow{u} \right\|\) .

Proof

Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) .

Then we already know that \( 1.\overrightarrow{u}=\overrightarrow{u}\) , from which we deduce directly that \( \left\| 1.\overrightarrow{u} \right\|=\left\| \overrightarrow{u} \right\|\) .

5.3.2 Multiplication by a scalar strictly greater than \( 1\)

Figure 15. Effect of the multiplication of a non-zero vector by a scalar strictly greater than \( 1\) on its length

Theorem 12

Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) , and \( \lambda\in(1,+\infty)\) is a real number such that \( \lambda>1\) .

Then, if we multiply the vector \( \overrightarrow{u}\) by the scalar \( \lambda\) , we obtain a vector \( \overrightarrow{v}=\lambda\overrightarrow{u}\) such that:

  1. \( \overrightarrow{v}\) is positively aligned with \( \overrightarrow{u}\) ,

  2. and \( \overrightarrow{v}\) is longer than \( \overrightarrow{u}\) : \( \left\| \lambda\overrightarrow{u} \right\|>\left\| \overrightarrow{u} \right\|\) .

Proof

Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) , and \( \lambda\in(1,+\infty)\) is a real number such that \( \lambda>1\) .

Consider the vector \( \overrightarrow{v}=\lambda\overrightarrow{u}\) .

Then we already know that \( \overrightarrow{v}\) is positively aligned with \( \overrightarrow{u}\) .

Moreover, because of item 5 of Theorem 8, we have: \( \left\| \lambda\overrightarrow{u} \right\|=\left| \lambda \right| \left\| \overrightarrow{u} \right\|\) , with \( \left| \lambda \right|=\lambda>1\) (because \( \lambda>1>0\) ).

Consequently, \( \left\| \lambda\overrightarrow{u} \right\|>\left\| \overrightarrow{u} \right\|\) , and \( \overrightarrow{v}\) is longer than \( \overrightarrow{u}\) .

5.3.3 Multiplication by a scalar strictly between \( 0\) and \( 1\)

Figure 16. Effect of the multiplication of a non-zero vector by a scalar strictly between \( 0\) and \( 1\) on its length

Theorem 13

Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) , and \( \lambda\in(0,1)\) is a real number such that \( 0<\lambda<1\) .

Then, if we multiply the vector \( \overrightarrow{u}\) by the scalar \( \lambda\) , we obtain a vector \( \overrightarrow{v}=\lambda\overrightarrow{u}\) such that:

  1. \( \overrightarrow{v}\) is positively aligned with \( \overrightarrow{u}\) ,

  2. and \( \overrightarrow{v}\) is shorter than \( \overrightarrow{u}\) : \( \left\| \lambda\overrightarrow{u} \right\|<\left\| \overrightarrow{u} \right\|\) .

Proof

Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) , and \( \lambda\in(0,1)\) is a real number such that \( 0<\lambda<1\) .

Consider the vector \( \overrightarrow{v}=\lambda\overrightarrow{u}\) .

Then we already know that \( \overrightarrow{v}\) is positively aligned with \( \overrightarrow{u}\) .

Moreover, because of item 5 of Theorem 8, we have: \( \left\| \lambda\overrightarrow{u} \right\|=\left| \lambda \right| \left\| \overrightarrow{u} \right\|\) , with \( \left| \lambda \right|=\lambda\) strictly between \( 0\) and \( 1\) (because \( \lambda>0\) ).

Consequently, \( \left\| \lambda\overrightarrow{u} \right\|<\left\| \overrightarrow{u} \right\|\) , and \( \overrightarrow{v}\) is shorter than \( \overrightarrow{u}\) .

5.3.4 Multiplication by the scalar \( -1\)

Figure 17. Effect of the multiplication of a non-zero vector by the scalar \( -1\) on its length

Theorem 14

Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) .

Then, if we multiply the vector \( \overrightarrow{u}\) by the scalar \( -1\) :

  1. We obtain the opposite \( -\overrightarrow{u}\) of the vector \( \overrightarrow{u}\) : \( (-1).\overrightarrow{u}=-\overrightarrow{u}\) .

  2. The resulting vector has the same length as \( \overrightarrow{u}\) : \( \left\| (-1).\overrightarrow{u} \right\|=\left\| -\overrightarrow{u} \right\| =\left\| \overrightarrow{u} \right\|\) .

Lemma 2

Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements.

Then \( \left\| -\overrightarrow{u} \right\|=\left\| \overrightarrow{u} \right\|\) .

That lemma is a direct consequence of the fact that, for any real number \( x\) , \( (-x)^{2}=x^{2}\) .

Proof (of Theorem 14)

Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\).

Then we already know that \( (-1).\overrightarrow{u}=-\overrightarrow{u}\), from which we deduce, by Lemma 2, that \( \left\| (-1).\overrightarrow{u} \right\|=\left\| \overrightarrow{u} \right\|\).
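This, too, can be checked numerically with a short Python sketch (the sample vector is an arbitrary illustrative choice):

```python
import math

u = (3.0, -4.0)                   # arbitrary non-zero vector
neg_u = (-1 * u[0], -1 * u[1])    # multiplication by the scalar -1

print(neg_u)               # (-3.0, 4.0): the opposite of u
print(math.hypot(*u))      # 5.0
print(math.hypot(*neg_u))  # 5.0: same length as u
```

The equality of the two norms reflects Lemma 2: squaring each component erases its sign, so \( \left\| -\overrightarrow{u} \right\|=\left\| \overrightarrow{u} \right\|\) holds exactly.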

5.3.5 Multiplication by a scalar strictly less than \( -1\)

Figure 18. Effect of the multiplication of a non-zero vector by a scalar strictly less than \( -1\) on its length

Theorem 15

Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\), and \( \lambda\in(-\infty,-1)\) is a real number such that \( \lambda<-1\).

Then, if we multiply the vector \( \overrightarrow{u}\) by the scalar \( \lambda\), we obtain a vector \( \overrightarrow{v}=\lambda\overrightarrow{u}\) such that:

  1. \( \overrightarrow{v}\) is negatively aligned with \( \overrightarrow{u}\) ,

  2. and \( \overrightarrow{v}\) is longer than \( \overrightarrow{u}\) : \( \left\| \lambda\overrightarrow{u} \right\|>\left\| \overrightarrow{u} \right\|\) .

Proof

Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\), and \( \lambda\in(-\infty,-1)\) is a real number such that \( \lambda<-1\).

Consider the vector \( \overrightarrow{v}=\lambda\overrightarrow{u}\) .

Then we already know that \( \overrightarrow{v}\) is negatively aligned with \( \overrightarrow{u}\) .

Moreover, by item (I) of Theorem 8, we have: \( \left\| \lambda\overrightarrow{u} \right\|=\left| \lambda \right| \left\| \overrightarrow{u} \right\|\), with \( \left| \lambda \right|=-\lambda>1\) (because \( \lambda<-1<0\)).

Consequently, \( \left\| \lambda\overrightarrow{u} \right\|>\left\| \overrightarrow{u} \right\|\) , and \( \overrightarrow{v}\) is longer than \( \overrightarrow{u}\) .

5.3.6 Multiplication by a scalar strictly between \( -1\) and \( 0\)

Figure 19. Effect of the multiplication of a non-zero vector by a scalar strictly between \( -1\) and \( 0\) on its length

Theorem 16

Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\), and \( \lambda\in(-1,0)\) is a real number such that \( -1<\lambda<0\).

Then, if we multiply the vector \( \overrightarrow{u}\) by the scalar \( \lambda\), we obtain a vector \( \overrightarrow{v}=\lambda\overrightarrow{u}\) such that:

  1. \( \overrightarrow{v}\) is negatively aligned with \( \overrightarrow{u}\) ,

  2. and \( \overrightarrow{v}\) is shorter than \( \overrightarrow{u}\) : \( \left\| \lambda\overrightarrow{u} \right\|<\left\| \overrightarrow{u} \right\|\) .

Proof

Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\), and \( \lambda\in(-1,0)\) is a real number such that \( -1<\lambda<0\).

Consider the vector \( \overrightarrow{v}=\lambda\overrightarrow{u}\) .

Then we already know that \( \overrightarrow{v}\) is negatively aligned with \( \overrightarrow{u}\) .

Moreover, by item (I) of Theorem 8, we have: \( \left\| \lambda\overrightarrow{u} \right\|=\left| \lambda \right| \left\| \overrightarrow{u} \right\|\), with \( \left| \lambda \right|=-\lambda\) strictly between \( 0\) and \( 1\) (because \( -1<\lambda<0\)).

Consequently, \( \left\| \lambda\overrightarrow{u} \right\|<\left\| \overrightarrow{u} \right\|\) , and \( \overrightarrow{v}\) is shorter than \( \overrightarrow{u}\) .

5.3.7 Multiplication by the scalar \( 0\)

Theorem 17

Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements.

Then, if we multiply the vector \( \overrightarrow{u}\) by the scalar \( 0\) :

  1. We obtain the null vector \( \overrightarrow{0}\) : \( 0.\overrightarrow{u}=\overrightarrow{0}\) .

  2. The resulting (null) vector has a length \( 0\) : \( \left\| 0.\overrightarrow{u} \right\|=0\) .

This theorem only restates facts we already know.
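The whole section can be summarized by a small numerical experiment (a Python sketch; the sample vector and the list of scalars are arbitrary illustrative choices) comparing \( \left\| \lambda\overrightarrow{u} \right\|\) with \( \left\| \overrightarrow{u} \right\|\), and reading the alignment of \( \lambda\overrightarrow{u}\) with \( \overrightarrow{u}\) from the sign of their dot product:

```python
import math

u = (3.0, 4.0)   # arbitrary non-zero vector, ||u|| = 5

def scale(lam, u):
    """Multiply the 2D vector u by the scalar lam."""
    return (lam * u[0], lam * u[1])

def dot(a, b):
    """Dot product of two 2D vectors."""
    return a[0] * b[0] + a[1] * b[1]

# One scalar from each regime studied in this section.
for lam in (2.0, 1.0, 0.5, 0.0, -0.5, -1.0, -2.0):
    v = scale(lam, u)
    length = math.hypot(*v)   # equals |lambda| * ||u||
    alignment = dot(u, v)     # > 0: positively aligned, < 0: negatively aligned
    print(f"lambda={lam:5.1f}  ||v||={length:4.1f}  u.v={alignment:6.1f}")
```

Running this reproduces the five cases: the length grows for \( \left| \lambda \right|>1\), shrinks for \( 0<\left| \lambda \right|<1\), is preserved for \( \lambda=\pm 1\), vanishes for \( \lambda=0\), and the alignment flips sign exactly when \( \lambda\) does.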

6 Conclusion

We have built the Euclidean plane \( \mathbb{P}\) from the following elements:

  • The dot product, which underlies the orthogonality of vectors.

  • The norm, which measures the length of vectors.