First published on Sunday, Jun 30, 2024 and last modified on Thursday, Apr 10, 2025
Mathedu SAS
In this document, we introduce two interrelated notions for vectors in the plane \( \mathbb{P}\) : the dot product of two vectors and the norm of a vector.
Definition 1
Assume \( (x_1,y_1,x_2,y_2)\in\mathbb{R}^4\) are real numbers, and consider the two column vectors with 2 elements: \( \overrightarrow{u}_1=\begin{bmatrix}x_1\\y_1\end{bmatrix}\) and \( \overrightarrow{u}_2=\begin{bmatrix}x_2\\y_2\end{bmatrix}\) .
Then the dot product of \( \overrightarrow{u}_1\) and \( \overrightarrow{u}_2\) is defined the following way:
\( \overrightarrow{u}_1\cdot\overrightarrow{u}_2=x_1x_2+y_1y_2\)  (1)
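As a quick numerical check of this definition, here is a minimal Python sketch (the helper name `dot` is ours, not from the text), computing the dot product coordinate-wise:

```python
def dot(u, v):
    """Dot product of two plane vectors given as (x, y) pairs."""
    x1, y1 = u
    x2, y2 = v
    return x1 * x2 + y1 * y2

print(dot((1, 2), (3, 4)))  # 1*3 + 2*4 → 11
```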
Theorem 1
Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\y\end{bmatrix}\) with 2 elements.
Consider the canonical basis of the vector plane \( \mathbb{P}\) : \( \overrightarrow{i}=\begin{bmatrix}1\\0\end{bmatrix}\) and \( \overrightarrow{j}=\begin{bmatrix}0\\1\end{bmatrix}\) .
Then the following assertions hold:
\( \overrightarrow{u}\cdot\overrightarrow{i}=x\) , the abscissa of \( \overrightarrow{u}\) ,
\( \overrightarrow{u}\cdot\overrightarrow{j}=y\) , the ordinate of \( \overrightarrow{u}\) ,
\( \overrightarrow{i}\cdot\overrightarrow{j}=0\) , which means that \( \overrightarrow{i}\) and \( \overrightarrow{j}\) are orthogonal (see next section).
Proof
Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\y\end{bmatrix}\) with 2 elements.
Consider the canonical basis of the vector plane \( \mathbb{P}\) : \( \overrightarrow{i}=\begin{bmatrix}1\\0\end{bmatrix}\) and \( \overrightarrow{j}=\begin{bmatrix}0\\1\end{bmatrix}\) .
Then the following calculations may be done:
\( \overrightarrow{u}\cdot\overrightarrow{i} =x\times 1+y\times 0 =x\)
\( \overrightarrow{u}\cdot\overrightarrow{j} =x\times 0+y\times 1 =y\)
\( \overrightarrow{i}\cdot\overrightarrow{j} =0\times 1+1\times 0 =0\)
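The three assertions of Theorem 1 can be verified numerically; this small Python sketch (helper names are ours) dots an arbitrary vector with the basis vectors:

```python
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

i, j = (1, 0), (0, 1)  # canonical basis
u = (7, -3)            # an arbitrary vector

assert dot(u, i) == 7   # the abscissa of u
assert dot(u, j) == -3  # the ordinate of u
assert dot(i, j) == 0   # i and j are orthogonal
```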
Theorem 2
Assume \( (\overrightarrow{u},\overrightarrow{v},\overrightarrow{w})\in\mathbb{P}^3\) are column vectors with two real elements, and \( \alpha\in\mathbb{R}\) is a real number.
Then the following assertions hold, stating that the dot product is a symmetric bilinear form:
\( \overrightarrow{u}\cdot\overrightarrow{v}=\overrightarrow{v}\cdot\overrightarrow{u}\) ,
\( (\overrightarrow{u}+\overrightarrow{v})\cdot\overrightarrow{w} =\overrightarrow{u}\cdot\overrightarrow{w}+\overrightarrow{v}\cdot\overrightarrow{w}\) ,
\( \overrightarrow{w}\cdot(\overrightarrow{u}+\overrightarrow{v}) =\overrightarrow{w}\cdot\overrightarrow{u}+\overrightarrow{w}\cdot\overrightarrow{v}\) ,
\( (\alpha\overrightarrow{u})\cdot\overrightarrow{v}=\alpha(\overrightarrow{u}\cdot\overrightarrow{v})\) ,
and \( \overrightarrow{u}\cdot(\alpha\overrightarrow{v})=\alpha(\overrightarrow{u}\cdot\overrightarrow{v})\) .
Proof
Assume \( (x_1,y_1,x_2,y_2,x_{3},y_{3})\in\mathbb{R}^6\) are real numbers, and consider the three column vectors with 2 elements: \( \overrightarrow{u}=\begin{bmatrix}x_1\\y_1\end{bmatrix}\) , \( \overrightarrow{v}=\begin{bmatrix}x_2\\y_2\end{bmatrix}\) and \( \overrightarrow{w}=\begin{bmatrix}x_3\\y_3\end{bmatrix}\) .
Assume \( \alpha\in\mathbb{R}\) is a real number.
Then the following calculations may be done:
Because the multiplication in \( \mathbb{R}\) is commutative, we have:
\( \overrightarrow{u}\cdot\overrightarrow{v} =x_{1}x_{2}+y_{1}y_{2} =x_{2}x_{1}+y_{2}y_{1} =\overrightarrow{v}\cdot\overrightarrow{u}\)
Because, in \( \mathbb{R}\) , the multiplication is distributive on the addition, we have:
\( (\overrightarrow{u}+\overrightarrow{v})\cdot\overrightarrow{w} =(x_{1}+x_{2})x_{3}+ (y_{1}+y_{2})y_{3} =x_{1}x_{3}+x_{2}x_{3}+ y_{1}y_{3}+y_{2}y_{3} =\overrightarrow{u}\cdot\overrightarrow{w}+\overrightarrow{v}\cdot\overrightarrow{w}\)
Because, in \( \mathbb{R}\) , the multiplication is distributive on the addition, we have:
\( \overrightarrow{w}\cdot(\overrightarrow{u}+\overrightarrow{v}) =x_{3}(x_{1}+x_{2})+y_{3}(y_{1}+y_{2}) =x_{3}x_{1}+x_{3}x_{2}+y_{3}y_{1}+y_{3}y_{2} =\overrightarrow{w}\cdot\overrightarrow{u}+\overrightarrow{w}\cdot\overrightarrow{v}\)
Because the multiplication in \( \mathbb{R}\) is associative, we have:
\( (\alpha\overrightarrow{u})\cdot\overrightarrow{v} =(\alpha x_{1})x_{2}+ (\alpha y_{1})y_{2} =\alpha (x_{1}x_{2})+ \alpha (y_{1}y_{2}) =\alpha(x_{1}x_{2}+ y_{1}y_{2}) =\alpha(\overrightarrow{u}\cdot\overrightarrow{v})\)
Because the multiplication in \( \mathbb{R}\) is commutative and associative, we have:
\( \overrightarrow{u}\cdot(\alpha\overrightarrow{v}) =x_{1}(\alpha x_{2})+y_{1}(\alpha y_{2}) =\alpha (x_{1}x_{2})+\alpha (y_{1}y_{2}) =\alpha(x_{1}x_{2}+y_{1}y_{2}) =\alpha(\overrightarrow{u}\cdot\overrightarrow{v})\)
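The symmetry and bilinearity properties of Theorem 2 can also be checked on concrete vectors; this Python sketch (helper names `dot`, `add`, `scale` are ours) asserts each identity:

```python
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(a, u):
    return (a * u[0], a * u[1])

u, v, w, a = (1.0, 2.0), (-3.0, 0.5), (2.0, -1.0), 4.0

assert dot(u, v) == dot(v, u)                        # symmetry
assert dot(add(u, v), w) == dot(u, w) + dot(v, w)    # additivity, left
assert dot(w, add(u, v)) == dot(w, u) + dot(w, v)    # additivity, right
assert dot(scale(a, u), v) == a * dot(u, v)          # homogeneity, left
assert dot(u, scale(a, v)) == a * dot(u, v)          # homogeneity, right
```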
Definition 2
Because of the facts stated in Theorem 2 (the dot product is a symmetric bilinear form), we say that:
The vector plane \( \mathbb{P}\) is a Euclidean plane.
That justifies the fact that we already called it so.
The orthogonality of vectors is a major application of the dot product.
Definition 3
Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements.
Then \( \overrightarrow{u}\) and \( \overrightarrow{v}\) are said to be orthogonal if and only if their dot product \( \overrightarrow{u}\cdot\overrightarrow{v}\) is equal to \( 0\) . This is denoted \( \overrightarrow{u}\bot\overrightarrow{v}\) .
When two vectors are orthogonal, their graphical representations are perpendicular: they make a right angle.
Theorem 3
Consider the canonical basis of the Euclidean plane \( \mathbb{P}\) : \( \overrightarrow{i}=\begin{bmatrix}1\\0\end{bmatrix}\) and \( \overrightarrow{j}=\begin{bmatrix}0\\1\end{bmatrix}\) .
Then \( \overrightarrow{i}\bot\overrightarrow{j}\) .
Proof
This is because \( \overrightarrow{i}\cdot\overrightarrow{j}=0\) .
That is why they are represented as perpendicular to each other, making a right angle, in the drawings.
First example. Here is a first example of orthogonal vectors.
Theorem 4
Consider the following two vectors in the Euclidean plane \( \mathbb{P}\) :
\( \overrightarrow{u}_{1}=\begin{bmatrix}1\\1\end{bmatrix}\) and
\( \overrightarrow{v}_{1}=\begin{bmatrix}-1\\1\end{bmatrix}\) .
Then \( \overrightarrow{u}_{1}\bot\overrightarrow{v}_{1}\) .
Proof
Consider the following two vectors in the Euclidean plane \( \mathbb{P}\) :
\( \overrightarrow{u}_{1}=\begin{bmatrix}1\\1\end{bmatrix}\) and
\( \overrightarrow{v}_{1}=\begin{bmatrix}-1\\1\end{bmatrix}\) .
Then \( \overrightarrow{u}_{1}\cdot\overrightarrow{v}_{1} = 1\times(-1)+1\times 1=(-1)+1=0\) .
Second example. Here is another example of orthogonal vectors.
Theorem 5
Consider the following two vectors in the Euclidean plane \( \mathbb{P}\) :
\( \overrightarrow{u}_{2}=\begin{bmatrix}1\\2\end{bmatrix}\) and
\( \overrightarrow{v}_{2}=\begin{bmatrix}2\\{-1}\end{bmatrix}\) .
Then \( \overrightarrow{u}_{2}\bot\overrightarrow{v}_{2}\) .
Proof
Consider the following two vectors in the Euclidean plane \( \mathbb{P}\) :
\( \overrightarrow{u}_{2}=\begin{bmatrix}1\\2\end{bmatrix}\) and
\( \overrightarrow{v}_{2}=\begin{bmatrix}2\\{-1}\end{bmatrix}\) .
Then \( \overrightarrow{u}_{2}\cdot\overrightarrow{v}_{2} = 1\times 2+2\times(-1)=2+(-2)=0\) .
Third example. Here is another example of orthogonal vectors.
Theorem 6
Consider the following two vectors in the Euclidean plane \( \mathbb{P}\) :
\( \overrightarrow{u}_{3}=\begin{bmatrix}1\\{-2}\end{bmatrix}\) and
\( \overrightarrow{v}_{3}=\begin{bmatrix}-2\\{-1}\end{bmatrix}\) .
Then \( \overrightarrow{u}_{3}\bot\overrightarrow{v}_{3}\) .
Proof
Consider the following two vectors in the Euclidean plane \( \mathbb{P}\) :
\( \overrightarrow{u}_{3}=\begin{bmatrix}1\\{-2}\end{bmatrix}\) and
\( \overrightarrow{v}_{3}=\begin{bmatrix}-2\\{-1}\end{bmatrix}\) .
Then \( \overrightarrow{u}_{3}\cdot\overrightarrow{v}_{3} = 1\times (-2)+(-2)\times(-1)=(-2)+2=0\) .
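The three example pairs above all satisfy Definition 3; this Python sketch (the predicate name `orthogonal` is ours) checks them in one pass:

```python
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def orthogonal(u, v):
    """u is orthogonal to v iff their dot product is 0 (Definition 3)."""
    return dot(u, v) == 0

# The pairs from Theorems 4, 5 and 6
assert orthogonal((1, 1), (-1, 1))
assert orthogonal((1, 2), (2, -1))
assert orthogonal((1, -2), (-2, -1))
# A non-orthogonal pair, for contrast
assert not orthogonal((1, 0), (1, 1))
```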
We define the norm of a vector as a function of its coordinates, but it is also a direct function of the dot product of the vector with itself.
As a consequence, the Euclidean plane \( \mathbb{P}\) has a dot product and a norm, in addition to vector addition and the external product of a vector by a scalar.
Definition 4
Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector with 2 elements: \( \overrightarrow{u}=\begin{bmatrix}x\\y\end{bmatrix}\) .
Then the norm of the vector \( \overrightarrow{u}\) is defined the following way:
\( \left\|\overrightarrow{u}\right\|=\sqrt{x^2+y^2}\)  (2)
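The norm is straightforward to compute from the coordinates; here is a minimal Python sketch (the helper name `norm` is ours):

```python
import math

def norm(u):
    """Norm of a plane vector (x, y): sqrt(x^2 + y^2)."""
    x, y = u
    return math.sqrt(x * x + y * y)

print(norm((3.0, 4.0)))  # sqrt(9 + 16) → 5.0
```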
Theorem 7
Consider the null vector and the canonical basis of the vector plane \( \mathbb{P}\) : \( \overrightarrow{0}=\begin{bmatrix}0\\0\end{bmatrix}\) , \( \overrightarrow{i}=\begin{bmatrix}1\\0\end{bmatrix}\) and \( \overrightarrow{j}=\begin{bmatrix}0\\1\end{bmatrix}\) .
Then the following assertions hold:
The norm of the null vector is \( \left\|\overrightarrow{0} \right\|=0\) .
The norm of the vector \( \overrightarrow{i}\) is \( \left\|\overrightarrow{i} \right\|=1\) .
The norm of the vector \( \overrightarrow{j}\) is \( \left\|\overrightarrow{j} \right\|=1\) as well.
Proof
Consider the null vector and the canonical basis of the vector plane \( \mathbb{P}\) : \( \overrightarrow{0}=\begin{bmatrix}0\\0\end{bmatrix}\) , \( \overrightarrow{i}=\begin{bmatrix}1\\0\end{bmatrix}\) and \( \overrightarrow{j}=\begin{bmatrix}0\\1\end{bmatrix}\) .
Then the following calculations may be performed:
\( \left\|\overrightarrow{0}\right\|=\sqrt{0^2+0^2}=\sqrt{0}=0\) .
\( \left\|\overrightarrow{i}\right\|=\sqrt{1^{2}+0^{2}}=\sqrt{1}=1\) .
\( \left\|\overrightarrow{j}\right\|=\sqrt{0^{2}+1^{2}}=\sqrt{1}=1\) .
Definition 5
Because of the following facts:
\( \overrightarrow{i}\bot\overrightarrow{j}\) ,
and \( \left\|\overrightarrow{i} \right\|=\left\|\overrightarrow{j} \right\|=1\) ,
then we say that:
The canonical basis \( (\overrightarrow{i},\overrightarrow{j})\) is an orthonormal basis of the Euclidean plane \( \mathbb{P}\) .
Theorem 8
Assume \( \overrightarrow{u}\in\mathbb{P}\) is a column vector with two real elements and \( \lambda\in\mathbb{R}\) is a real number.
Then the following assertions hold:
The norm of a vector is based on the dot product of the vector with itself: \( \left\| \overrightarrow{u} \right\|=\sqrt{\overrightarrow{u}\cdot\overrightarrow{u}}\) .
\( \left\| \overrightarrow{u} \right\|=0\) if and only if \( \overrightarrow{u}=\overrightarrow{0}\) .
So that, if \( \overrightarrow{u}\neq\overrightarrow{0}\) then \( \left\| \overrightarrow{u} \right\|\neq 0\) .
\( \overrightarrow{u}\cdot\overrightarrow{u}=\left\| \overrightarrow{u} \right\|^2\) and \( (-\overrightarrow{u})\cdot\overrightarrow{u}=\overrightarrow{u}\cdot(-\overrightarrow{u}) =-\left\| \overrightarrow{u} \right\|^2\) .
Multiplying a vector by a scalar \( \lambda\) multiplies its norm by the absolute value \( \left| \lambda \right| \) of \( \lambda\) : \( \left\| \lambda\overrightarrow{u} \right\| =\left| \lambda \right| \left\| \overrightarrow{u} \right\|\) .
Proof
Assume \( (x,y)\in\mathbb{R}^2\) are real numbers, and consider the column vector with 2 elements: \( \overrightarrow{u}=\begin{bmatrix}x\\y\end{bmatrix}\) .
Then the following calculations may be performed.
\( \sqrt{\overrightarrow{u}\cdot\overrightarrow{u}}=\sqrt{x^{2}+y^{2}}=\left\| \overrightarrow{u} \right\|\)
If \( \overrightarrow{u}=\overrightarrow{0}\) , then \( \left\| \overrightarrow{u} \right\|=0\) .
Conversely, if \( \left\| \overrightarrow{u} \right\|=0\) , then \( x^{2}+y^{2}=0\) , with \( x^{2}\geq 0\) and \( y^{2}\geq 0\) , which leads to \( x=y=0\) , or \( \overrightarrow{u}=\overrightarrow{0}\) .
If \( \overrightarrow{u}\neq\overrightarrow{0}\) , then \( \left\| \overrightarrow{u} \right\|\) cannot be \( 0\) , because of the previous item.
Because of the first item elevated to the square, \( \overrightarrow{u}\cdot\overrightarrow{u}=\left\| \overrightarrow{u} \right\|^2\) .
Moreover, \( (-\overrightarrow{u})\cdot\overrightarrow{u} =(-x)x+(-y)y=-(x^{2}+y^{2}) =-\left\| \overrightarrow{u} \right\|^2\) .
And \( \overrightarrow{u}\cdot(-\overrightarrow{u}) =x(-x)+y(-y)=-(x^{2}+y^{2}) =-\left\| \overrightarrow{u} \right\|^2\)
\( \left\| \lambda\overrightarrow{u} \right\| =\sqrt{(\lambda x)^{2}+(\lambda y)^{2}} =\sqrt{\lambda^{2}}\sqrt{x^{2}+y^{2}} =\left| \lambda \right| \left\| \overrightarrow{u} \right\|\) .
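The assertions of Theorem 8 can be checked on a concrete vector; this Python sketch (helper names are ours) uses `math.isclose` to allow for floating-point rounding:

```python
import math

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def norm(u):
    return math.sqrt(dot(u, u))  # the norm via the dot product of u with itself

u, lam = (3.0, -4.0), -2.5

# u . u = ||u||^2, and (-u) . u = -||u||^2
assert math.isclose(dot(u, u), norm(u) ** 2)
assert math.isclose(dot((-u[0], -u[1]), u), -norm(u) ** 2)
# ||lam u|| = |lam| ||u||
assert math.isclose(norm((lam * u[0], lam * u[1])), abs(lam) * norm(u))
```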
Theorem 9
Assume \( (\overrightarrow{u},\overrightarrow{v})\in\mathbb{P}^2\) are column vectors with two real elements.
Then the following identities hold:
\( \left\| \overrightarrow{u}+\overrightarrow{v} \right\|^2 =\left\| \overrightarrow{u} \right\|^2 +\left\| \overrightarrow{v} \right\|^2 +2\overrightarrow{u}\cdot\overrightarrow{v}\)
\( \left\| \overrightarrow{u}-\overrightarrow{v} \right\|^2 =\left\| \overrightarrow{u} \right\|^2 +\left\| \overrightarrow{v} \right\|^2 -2\overrightarrow{u}\cdot\overrightarrow{v}\)
\( (\overrightarrow{u}+\overrightarrow{v})\cdot(\overrightarrow{u}-\overrightarrow{v} ) =\left\| \overrightarrow{u} \right\|^2-\left\| \overrightarrow{v} \right\|^2\)
Let us recall the following lemma, which states the analogous remarkable identities in \( \mathbb{R}\) .
Lemma 1
Assume \( (x,y)\in\mathbb{R}^{2}\) are real numbers.
Then the following identities hold:
\( (x+y)^{2}=x^{2}+y^{2}+2xy\)
\( (x-y)^{2}=x^{2}+y^{2}-2xy\)
\( (x+y)(x-y)=x^{2}-y^{2}\)
These identities are directly deduced from the distributivity of the multiplication on the addition and subtraction in \( \mathbb{R}\) .
Proof (of Theorem 9)
Assume \( (x_1,y_1,x_2,y_2)\in\mathbb{R}^4\) are real numbers, and consider the two column vectors with 2 elements: \( \overrightarrow{u}=\begin{bmatrix}x_1\\y_1\end{bmatrix}\) and \( \overrightarrow{v}=\begin{bmatrix}x_2\\y_2\end{bmatrix}\)
Then we may apply the lemma 1 to the following calculations:
\( \left\| \overrightarrow{u}+\overrightarrow{v} \right\|^2=(x_{1}+x_{2})^{2}+(y_{1}+y_{2})^{2} =x_{1}^{2}+x_{2}^{2}+2x_{1}x_{2}+y_{1}^{2}+y_{2}^{2}+2y_{1}y_{2}\)
\( =x_{1}^{2}+y_{1}^{2}+x_{2}^{2}+y_{2}^{2}+2(x_{1}x_{2}+y_{1}y_{2}) =\left\| \overrightarrow{u} \right\|^2 +\left\| \overrightarrow{v} \right\|^2 +2\overrightarrow{u}\cdot\overrightarrow{v}\)
\( \left\| \overrightarrow{u}-\overrightarrow{v} \right\|^2=(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2} =x_{1}^{2}+x_{2}^{2}-2x_{1}x_{2}+y_{1}^{2}+y_{2}^{2}-2y_{1}y_{2}\)
\( =x_{1}^{2}+y_{1}^{2}+x_{2}^{2}+y_{2}^{2}-2(x_{1}x_{2}+y_{1}y_{2}) =\left\| \overrightarrow{u} \right\|^2 +\left\| \overrightarrow{v} \right\|^2 -2\overrightarrow{u}\cdot\overrightarrow{v}\)
\( (\overrightarrow{u}+\overrightarrow{v})\cdot(\overrightarrow{u}-\overrightarrow{v} ) =(x_{1}+x_{2})(x_{1}-x_{2})+(y_{1}+y_{2})(y_{1}-y_{2})\)
\( =x_{1}^{2}-x_{2}^{2}+y_{1}^{2}-y_{2}^{2} =x_{1}^{2}+y_{1}^{2}-(x_{2}^{2}+y_{2}^{2}) =\left\| \overrightarrow{u} \right\|^2-\left\| \overrightarrow{v} \right\|^2\)
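The three identities of Theorem 9 can likewise be checked numerically; this Python sketch (helper names are ours) verifies each one on a concrete pair of vectors:

```python
import math

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def norm(u):
    return math.sqrt(dot(u, u))

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def sub(u, v):
    return (u[0] - v[0], u[1] - v[1])

u, v = (1.0, 2.0), (-3.0, 0.5)

assert math.isclose(norm(add(u, v)) ** 2,
                    norm(u) ** 2 + norm(v) ** 2 + 2 * dot(u, v))
assert math.isclose(norm(sub(u, v)) ** 2,
                    norm(u) ** 2 + norm(v) ** 2 - 2 * dot(u, v))
assert math.isclose(dot(add(u, v), sub(u, v)),
                    norm(u) ** 2 - norm(v) ** 2)
```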
The geometrical meaning of the norm of a vector is that it may be seen as its length.
We recall here the Pythagorean theorem for a right triangle.
Theorem 10
Consider a triangle \( (ABC)\) with a right angle at \( C\) , and denote:
\( a=BC\) the length of the side \( (BC)\) of the right angle,
\( b=AC\) the length of the side \( (AC)\) of the right angle,
and \( c=AB\) the length of the hypotenuse \( (AB)\) .
Then \( c^2=a^2+b^2\) .
This is stated the following way:
In a right triangle, the square of the hypotenuse is the sum of the squares of the two other sides.
Assume \( (x,y)\in{\mathbb{R}_+^*}^2\) are real numbers such that \( x>0\) and \( y>0\) , and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\y\end{bmatrix}\) in the first quadrant of the plane and its norm \( \left\| \overrightarrow{u} \right\|=\sqrt{x^2+y^2}\) .
Denote \( O\) the origin of the vector, \( M\) the end of the vector, and \( P\) the orthogonal projection of \( M\) on the \( x\) axis.
Then the following assertions hold:
The triangle \( (OPM)\) has a right angle at \( P\) ,
\( x\) is the length of the segment \( (OP)\) ,
\( y\) is the length of the segment \( (PM)\) ,
and, because of the Pythagorean theorem, \( \left\| \overrightarrow{u} \right\|\) is the length of the hypotenuse \( (OM)\) , that is the length of the vector \( \overrightarrow{u}\) .
Assume \( (x,y)\in\mathbb{R}_-^*\times\mathbb{R}_+^*\) are real numbers such that \( x<0\) and \( y>0\) , and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\y\end{bmatrix}\) in the second quadrant of the plane and its norm \( \left\| \overrightarrow{u} \right\|=\sqrt{x^2+y^2}=\sqrt{(-x)^2+y^2}\) .
Denote \( O\) the origin of the vector, \( M\) the end of the vector, and \( P\) the orthogonal projection of \( M\) on the \( x\) axis.
Then the following assertions hold:
The triangle \( (OPM)\) has a right angle at \( P\) ,
\( -x\) is the length of the segment \( (OP)\) ,
\( y\) is the length of the segment \( (PM)\) ,
and, because of the Pythagorean theorem, \( \left\| \overrightarrow{u} \right\|\) is the length of the hypotenuse \( (OM)\) , that is the length of the vector \( \overrightarrow{u}\) .
Assume \( (x,y)\in{\mathbb{R}_-^*}^2\) are real numbers such that \( x<0\) and \( y<0\) , and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\y\end{bmatrix}\) in the third quadrant of the plane and its norm \( \left\| \overrightarrow{u} \right\|=\sqrt{x^2+y^2}=\sqrt{(-x)^2+(-y)^2}\) .
Denote \( O\) the origin of the vector, \( M\) the end of the vector, and \( P\) the orthogonal projection of \( M\) on the \( x\) axis.
Then the following assertions hold:
The triangle \( (OPM)\) has a right angle at \( P\) ,
\( -x\) is the length of the segment \( (OP)\) ,
\( -y\) is the length of the segment \( (PM)\) ,
and, because of the Pythagorean theorem, \( \left\| \overrightarrow{u} \right\|\) is the length of the hypotenuse \( (OM)\) , that is the length of the vector \( \overrightarrow{u}\) .
Assume \( (x,y)\in\mathbb{R}_+^*\times\mathbb{R}_-^*\) are real numbers such that \( x>0\) and \( y<0\) , and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\y\end{bmatrix}\) in the fourth quadrant of the plane and its norm \( \left\| \overrightarrow{u} \right\|=\sqrt{x^2+y^2}=\sqrt{x^2+(-y)^2}\) .
Denote \( O\) the origin of the vector, \( M\) the end of the vector, and \( P\) the orthogonal projection of \( M\) on the \( x\) axis.
Then the following assertions hold:
The triangle \( (OPM)\) has a right angle at \( P\) ,
\( x\) is the length of the segment \( (OP)\) ,
\( -y\) is the length of the segment \( (PM)\) ,
and, because of the Pythagorean theorem, \( \left\| \overrightarrow{u} \right\|\) is the length of the hypotenuse \( (OM)\) , that is the length of the vector \( \overrightarrow{u}\) .
Assume \( x\in\mathbb{R}_+^*\) is a real number such that \( x>0\) , and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\0\end{bmatrix}\) positively along the \( x\) axis and its norm \( \left\| \overrightarrow{u} \right\|=\sqrt{x^2+0^2}=x\) .
Denote \( O\) the origin of the vector, \( M\) the end of the vector.
Then the following assertion holds:
\( \left\| \overrightarrow{u} \right\|\) is the length of the vector \( \overrightarrow{u}\) .
Assume \( y\in\mathbb{R}_+^*\) is a real number such that \( y>0\) , and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}0\\y\end{bmatrix}\) positively along the \( y\) axis and its norm \( \left\| \overrightarrow{u} \right\|=\sqrt{0^2+y^2}=y\) .
Denote \( O\) the origin of the vector, \( M\) the end of the vector.
Then the following assertion holds:
\( \left\| \overrightarrow{u} \right\|\) is the length of the vector \( \overrightarrow{u}\) .
Assume \( x\in\mathbb{R}_-^*\) is a real number such that \( x<0\) , and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}x\\0\end{bmatrix}\) negatively along the \( x\) axis and its norm \( \left\| \overrightarrow{u} \right\|=\sqrt{x^2+0^2}=-x\) .
Denote \( O\) the origin of the vector, \( M\) the end of the vector.
Then the following assertion holds:
\( \left\| \overrightarrow{u} \right\|\) is the length of the vector \( \overrightarrow{u}\) .
Assume \( y\in\mathbb{R}_-^*\) is a real number such that \( y<0\) , and consider the column vector \( \overrightarrow{u}=\begin{bmatrix}0\\y\end{bmatrix}\) negatively along the \( y\) axis and its norm \( \left\| \overrightarrow{u} \right\|=\sqrt{0^2+y^2}=-y\) .
Denote \( O\) the origin of the vector, \( M\) the end of the vector.
Then the following assertion holds:
\( \left\| \overrightarrow{u} \right\|\) is the length of the vector \( \overrightarrow{u}\) .
Consider the null vector \( \overrightarrow{0}=\begin{bmatrix}0\\0\end{bmatrix}\) and its norm \( \left\| \overrightarrow{0} \right\|=0\) .
Then the following assertion holds:
The norm \( \left\| \overrightarrow{0} \right\|\) of the null vector \( \overrightarrow{0}\) is its length \( 0\) .
We have seen that, in any case:
The norm of a vector is its length.
Theorem 11
Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) .
Then, if we multiply the vector \( \overrightarrow{u}\) by the scalar \( 1\) :
We obtain the vector \( \overrightarrow{u}\) : \( 1\cdot\overrightarrow{u}=\overrightarrow{u}\) .
The resulting vector has the same length as \( \overrightarrow{u}\) : \( \left\| 1\cdot\overrightarrow{u} \right\|=\left\| \overrightarrow{u} \right\|\) .
Proof
Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) .
Then we already know that \( 1\cdot\overrightarrow{u}=\overrightarrow{u}\) , from which we deduce directly that \( \left\| 1\cdot\overrightarrow{u} \right\|=\left\| \overrightarrow{u} \right\|\) .
Theorem 12
Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) , and \( \lambda\in(1,+\infty)\) is a real number such that \( \lambda>1\) .
Then, if we multiply the vector \( \overrightarrow{u}\) by the scalar \( \lambda\) , we obtain a vector \( \overrightarrow{v}=\lambda\overrightarrow{u}\) such that:
\( \overrightarrow{v}\) is positively aligned with \( \overrightarrow{u}\) ,
and \( \overrightarrow{v}\) is longer than \( \overrightarrow{u}\) : \( \left\| \lambda\overrightarrow{u} \right\|>\left\| \overrightarrow{u} \right\|\) .
Proof
Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) , and \( \lambda\in(1,+\infty)\) is a real number such that \( \lambda>1\) .
Consider the vector \( \overrightarrow{v}=\lambda\overrightarrow{u}\) .
Then we already know that \( \overrightarrow{v}\) is positively aligned with \( \overrightarrow{u}\) .
Moreover, because of item (I) of Theorem 8, we have: \( \left\| \lambda\overrightarrow{u} \right\|=\left| \lambda \right| \left\| \overrightarrow{u} \right\|\) , with \( \left| \lambda \right|=\lambda>1\) (because \( \lambda>1>0\) ).
Consequently, \( \left\| \lambda\overrightarrow{u} \right\|>\left\| \overrightarrow{u} \right\|\) , and \( \overrightarrow{v}\) is longer than \( \overrightarrow{u}\) .
Theorem 13
Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) , and \( \lambda\in(0,1)\) is a real number such that \( 0<\lambda<1\) .
Then, if we multiply the vector \( \overrightarrow{u}\) by the scalar \( \lambda\) , we obtain a vector \( \overrightarrow{v}=\lambda\overrightarrow{u}\) such that:
\( \overrightarrow{v}\) is positively aligned with \( \overrightarrow{u}\) ,
and \( \overrightarrow{v}\) is shorter than \( \overrightarrow{u}\) : \( \left\| \lambda\overrightarrow{u} \right\|<\left\| \overrightarrow{u} \right\|\) .
Proof
Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) , and \( \lambda\in(0,1)\) is a real number such that \( 0<\lambda<1\) .
Consider the vector \( \overrightarrow{v}=\lambda\overrightarrow{u}\) .
Then we already know that \( \overrightarrow{v}\) is positively aligned with \( \overrightarrow{u}\) .
Moreover, because of item (I) of Theorem 8, we have: \( \left\| \lambda\overrightarrow{u} \right\|=\left| \lambda \right| \left\| \overrightarrow{u} \right\|\) , with \( \left| \lambda \right|=\lambda\) strictly between \( 0\) and \( 1\) (because \( 0<\lambda<1\) ).
Consequently, \( \left\| \lambda\overrightarrow{u} \right\|<\left\| \overrightarrow{u} \right\|\) , and \( \overrightarrow{v}\) is shorter than \( \overrightarrow{u}\) .
Theorem 14
Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) .
Then, if we multiply the vector \( \overrightarrow{u}\) by the scalar \( -1\) :
We obtain the opposite \( -\overrightarrow{u}\) of the vector \( \overrightarrow{u}\) : \( (-1)\cdot\overrightarrow{u}=-\overrightarrow{u}\) .
The resulting vector has the same length as \( \overrightarrow{u}\) : \( \left\| (-1)\cdot\overrightarrow{u} \right\|=\left\| -\overrightarrow{u} \right\| =\left\| \overrightarrow{u} \right\|\) .
Lemma 2
Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements.
Then \( \left\| -\overrightarrow{u} \right\|=\left\| \overrightarrow{u} \right\|\) .
That lemma is a direct consequence of the fact that, for any real number \( x\) , \( (-x)^{2}=x^{2}\) .
Proof (of Theorem 14)
Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) .
Then we already know that \( (-1)\cdot\overrightarrow{u}=-\overrightarrow{u}\) , from which we deduce, with Lemma 2, that \( \left\| (-1)\cdot\overrightarrow{u} \right\|=\left\| \overrightarrow{u} \right\|\) .
Theorem 15
Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) , and \( \lambda\in(-\infty,-1)\) is a real number such that \( \lambda<-1\) .
Then, if we multiply the vector \( \overrightarrow{u}\) by the scalar \( \lambda\) , we obtain a vector \( \overrightarrow{v}=\lambda\overrightarrow{u}\) such that:
\( \overrightarrow{v}\) is negatively aligned with \( \overrightarrow{u}\) ,
and \( \overrightarrow{v}\) is longer than \( \overrightarrow{u}\) : \( \left\| \lambda\overrightarrow{u} \right\|>\left\| \overrightarrow{u} \right\|\) .
Proof
Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) , and \( \lambda\in(-\infty,-1)\) is a real number such that \( \lambda<-1\) .
Consider the vector \( \overrightarrow{v}=\lambda\overrightarrow{u}\) .
Then we already know that \( \overrightarrow{v}\) is negatively aligned with \( \overrightarrow{u}\) .
Moreover, because of item (I) of Theorem 8, we have: \( \left\| \lambda\overrightarrow{u} \right\|=\left| \lambda \right| \left\| \overrightarrow{u} \right\|\) , with \( \left| \lambda \right|=-\lambda>1\) (because \( \lambda<-1<0\) ).
Consequently, \( \left\| \lambda\overrightarrow{u} \right\|>\left\| \overrightarrow{u} \right\|\) , and \( \overrightarrow{v}\) is longer than \( \overrightarrow{u}\) .
Theorem 16
Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) , and \( \lambda\in(-1,0)\) is a real number such that \( -1<\lambda<0\) .
Then, if we multiply the vector \( \overrightarrow{u}\) by the scalar \( \lambda\) , we obtain a vector \( \overrightarrow{v}=\lambda\overrightarrow{u}\) such that:
\( \overrightarrow{v}\) is negatively aligned with \( \overrightarrow{u}\) ,
and \( \overrightarrow{v}\) is shorter than \( \overrightarrow{u}\) : \( \left\| \lambda\overrightarrow{u} \right\|<\left\| \overrightarrow{u} \right\|\) .
Proof
Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements such that \( \overrightarrow{u}\neq\overrightarrow{0}\) , and \( \lambda\in(-1,0)\) is a real number such that \( -1<\lambda<0\) .
Consider the vector \( \overrightarrow{v}=\lambda\overrightarrow{u}\) .
Then we already know that \( \overrightarrow{v}\) is negatively aligned with \( \overrightarrow{u}\) .
Moreover, because of item (I) of Theorem 8, we have: \( \left\| \lambda\overrightarrow{u} \right\|=\left| \lambda \right| \left\| \overrightarrow{u} \right\|\) , with \( \left| \lambda \right|=-\lambda\) strictly between \( 0\) and \( 1\) (because \( -1<\lambda<0\) ).
Consequently, \( \left\| \lambda\overrightarrow{u} \right\|<\left\| \overrightarrow{u} \right\|\) , and \( \overrightarrow{v}\) is shorter than \( \overrightarrow{u}\) .
Theorem 17
Assume \( \overrightarrow{u}\in\mathbb{P}^*\) is a column vector with two real elements.
Then, if we multiply the vector \( \overrightarrow{u}\) by the scalar \( 0\) :
We obtain the null vector \( \overrightarrow{0}\) : \( 0\cdot\overrightarrow{u}=\overrightarrow{0}\) .
The resulting (null) vector has length \( 0\) : \( \left\| 0\cdot\overrightarrow{u} \right\|=0\) .
That theorem repeats elements we already know.
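The effect of the scalar on the length, described case by case in Theorems 11 through 17, can be summarized in one numerical sketch (helper names are ours):

```python
import math

def norm(u):
    return math.sqrt(u[0] ** 2 + u[1] ** 2)

def scale(lam, u):
    return (lam * u[0], lam * u[1])

u = (3.0, 4.0)  # a nonzero vector, with ||u|| = 5

assert norm(scale(2.0, u)) > norm(u)     # lam > 1: longer
assert norm(scale(0.5, u)) < norm(u)     # 0 < lam < 1: shorter
assert norm(scale(1.0, u)) == norm(u)    # lam = 1: same vector
assert norm(scale(-1.0, u)) == norm(u)   # lam = -1: opposite, same length
assert norm(scale(-3.0, u)) > norm(u)    # lam < -1: longer
assert norm(scale(0.0, u)) == 0.0        # lam = 0: the null vector
```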
We built the Euclidean plane \( \mathbb{P}\) with the following elements:
The dot product, which is at the root of the orthogonality of vectors.
The norm, which measures the length of vectors.