
## Vectors

**Vectors** are used in many areas of data science, most commonly to **represent a feature**. **Combining multiple features gives us a matrix, our dataset.**

A vector is anything that has a **direction** and a **magnitude**: a quantity having direction as well as magnitude, especially as determining the position of one point in space relative to another.

## Column Vector

All elements of the vector are written **vertically** as a column.

## Row Vector

All elements of the vector are written **horizontally** as a row.

Putting multiple Row or Column vectors together gives us a matrix.
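As a quick sketch of these ideas in code (I'm using NumPy here purely for illustration; the article itself doesn't assume any library):

```python
import numpy as np

# A row vector: 1 x 3
row = np.array([[1, 2, 3]])

# A column vector: 3 x 1
col = np.array([[1], [2], [3]])

# Stacking row vectors gives us a matrix,
# e.g. a dataset of 2 samples with 3 features each
matrix = np.vstack([np.array([1, 2, 3]), np.array([4, 5, 6])])
print(matrix.shape)  # (2, 3)
```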

## Zero Vector

A vector with **all elements equal to 0**. It can be a row or a column vector.

## Unit Vector

Any vector with a **magnitude of 1** can be described as a unit vector.

In a 2D or 3D space the unit vectors rest on the X, Y or Z axis with length equal to 1 unit. These vectors are often denoted by **î, ĵ and k̂**.

Unit vectors are the building blocks for all vectors: any vector can be broken down into these unit vectors. Let's understand this with an example.

As we can see, a vector can be broken down into, and built from, the unit vectors.
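Here is a minimal NumPy sketch of that decomposition (my own illustration, not from the original text):

```python
import numpy as np

# The 2D unit vectors
i_hat = np.array([1, 0])
j_hat = np.array([0, 1])

v = np.array([3, 4])
# v decomposes into 3 * i_hat + 4 * j_hat
rebuilt = 3 * i_hat + 4 * j_hat
print(np.array_equal(v, rebuilt))  # True
```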

The magnitude of a vector is the **length** of the vector. The magnitude of a vector v is denoted as **∥v∥**.

It is calculated by taking the square root of the sum of the squares of each element of the vector.
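That calculation, sketched in NumPy (for illustration only):

```python
import math
import numpy as np

v = np.array([3, 4])

# Square root of the sum of squared elements
magnitude = math.sqrt(np.sum(v ** 2))
print(magnitude)          # 5.0
print(np.linalg.norm(v))  # NumPy's built-in gives the same result
```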

We will learn more about it in the later sections in this article.

## Transpose

When a vector is transposed, its **rows become columns and columns become rows.**
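For example, in NumPy (a sketch of my own, not from the article):

```python
import numpy as np

row = np.array([[1, 2, 3]])  # shape (1, 3) — a row vector
col = row.T                  # transpose: shape (3, 1) — a column vector
print(col.shape)  # (3, 1)
```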

## Addition

**Vector to Vector**

We can only add two vectors of the same type and size, i.e. row to row and column to column vectors with an equal number of elements.

**Visualization:** We can visualize the addition of two vectors by following them in sequence, one vector after the other; the point where we end up gives us the output vector.

Follow vector v then follow vector w and we get our new vector v + w.

Notice how changing the sequence of vectors we follow doesn’t change the output or final vector.
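A quick NumPy sketch of both points, addition and its order-independence (illustration only):

```python
import numpy as np

v = np.array([2, 1])
w = np.array([1, 3])

print(v + w)  # [3 4]
# Order doesn't matter: v + w lands at the same point as w + v
print(np.array_equal(v + w, w + v))  # True
```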

**Vector to Zero Vector**

Adding a vector to a zero vector, or vice versa, results in the same vector.

**Scalar to Vector**

A scalar quantity cannot be added to a vector quantity because they have different dimensions.

**Note**

The sequence in which the vectors are added doesn't change the output vector in magnitude or direction.

## Subtraction

**Vector from Vector**

Just like addition, we can only subtract two vectors of the same type and size, i.e. row from row and column from column vectors with an equal number of elements.

**Visualization:** Just like in addition, we follow the vectors, but we reverse the direction of the vector we are subtracting.

Follow vector v, then follow the opposite direction of vector w, and we get our new vector v − w.
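In NumPy this looks like (my own sketch, for illustration):

```python
import numpy as np

v = np.array([2, 1])
w = np.array([1, 3])

print(v - w)  # [ 1 -2]
# Subtracting w is the same as following w in the opposite direction
print(np.array_equal(v - w, v + (-w)))  # True
```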

**Zero Vector from Vector**

Subtracting a zero vector from a vector leaves the vector unchanged; subtracting a vector from a zero vector gives a vector of the same magnitude but opposite direction.

**Scalar from Vector**

A scalar quantity cannot be subtracted from a vector quantity because they have different dimensions.

## Note

The sequence in which the vectors are subtracted changes only the direction of the output vector; the magnitude remains unchanged.

## Multiplication or Cross Product

## Vector and Vector

We can only multiply a row vector by a column vector, or a column vector by a row vector, given the vectors have the same number of elements. The output of this operation is a scalar (row times column) or a matrix (column times row).

Multiplication of the vectors below is not possible because they are both row vectors, although the number of elements is equal.

But we can multiply them if we transpose one of the vectors.

**Visualization:** The magnitude of the cross product of two vectors gives us the area of the parallelogram they span; combining it with a third vector in 3D (the scalar triple product) gives us the volume of a parallelepiped.

We cannot visualize more than 3D, but a computer can perform this operation in even higher dimensions.
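To make the different products concrete, here is a NumPy sketch (my own illustration) of row-times-column, column-times-row, and the geometric cross product:

```python
import numpy as np

v = np.array([1, 2, 3])
w = np.array([4, 5, 6])

# Row times column (after transposing one vector): a scalar
print(v @ w)  # 32

# Column times row: a 3x3 matrix (the outer product)
print(np.outer(v, w).shape)  # (3, 3)

# Geometric cross product in 3D; its magnitude is the
# area of the parallelogram spanned by v and w
print(np.cross(v, w))  # [-3  6 -3]
```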

## Scalar and Vector

Multiplication of a scalar and a vector is possible.

**Visualization**: We can visualize it as scaling, i.e. resizing, a vector by a certain value.
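A small NumPy sketch of that scaling (illustration only):

```python
import numpy as np

v = np.array([3, 4])

# Multiplying by a scalar stretches the vector but keeps its direction
print(2 * v)                  # [6 8]
print(np.linalg.norm(2 * v))  # 10.0 — double the original magnitude of 5.0
```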

## Division

Division is not allowed between vectors. However, when dividing by a scalar, we can always treat the divisor as a fraction and use multiplication instead.

## Dot or Scalar Product

The dot or scalar product is an algebraic operation that takes two equal-length sequences of numbers and returns a single number.

In the cross product, as we have seen above, the output is a matrix or a vector; however, the output of a dot product is a scalar.

The dot product can be used to calculate the projection of one vector onto another, and also the similarity between vectors.

There are 2 methods to calculate the dot product.

## Method 1

Using the angle between the vectors.

## Visualization

The dot product is one of the common methods used in many areas of data science to calculate similarities between vectors or features, e.g. cosine similarity, and more importantly is the basis of SVM (Support Vector Machines).

In the **dot product** we are focused on calculating the **product of the vectors**, while in **cosine similarity** we focus on **calculating the angle between the vectors**. The smaller the angle, the more similar the vectors are.

For cosine similarity we can calculate the dot product using the method below.
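Both methods connect in a few lines of NumPy (a sketch of my own, not from the article):

```python
import numpy as np

v = np.array([3, 0])
w = np.array([3, 4])

# Method 2: sum of element-wise products
dot = np.dot(v, w)  # 9

# Method 1 says dot = ||v|| * ||w|| * cos(theta),
# so we can recover cos(theta) — the cosine similarity
cos_theta = dot / (np.linalg.norm(v) * np.linalg.norm(w))
print(cos_theta)  # 0.6
```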

## Method 2

Using the elements of the vectors.

**Visualization**

**The length of a vector** is referred to as the vector norm or the vector's magnitude: a nonnegative number that describes the extent of the vector in space.

## Properties

## Scalar Multiplication

**Inequality**

## Types or Norms

## Euclidean Norm

## Lp Norm

It's a generalized equation where **p can take any value greater than or equal to 1** (for p < 1 the formula no longer defines a true norm).

One thing to note here is that we take the absolute value **|v|** of each element of the vector.

Using this equation we can derive L1 and L2 Norms.
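The Lp equation is short enough to write directly in NumPy (my own sketch, for illustration):

```python
import numpy as np

v = np.array([3, -4])

# Lp norm: (sum of |v_i| ** p) ** (1 / p)
def lp_norm(v, p):
    return np.sum(np.abs(v) ** p) ** (1 / p)

print(lp_norm(v, 1))             # 7.0 — the L1 / Manhattan norm
print(lp_norm(v, 2))             # 5.0 — the L2 / Euclidean norm
print(np.linalg.norm(v, ord=1))  # NumPy's built-in agrees
```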

## L1 or Manhattan Norm

The L1 norm is used in two important techniques in data science: L1 Regularization and Manhattan Distance.

## L2 or Euclidean Norm

We get the equation for the Euclidean norm by using the Lp norm with p = 2.

The L2 or Euclidean norm is used in two important techniques in data science: L2 Regularization and Euclidean Distance.

## L∞ Norm

## Example

As we can see, the output of this norm is approximately the maximum absolute value of the elements in the vector.
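We can watch that limit happen numerically (a NumPy sketch of my own):

```python
import numpy as np

v = np.array([3.0, -4.0, 2.0])

# As p grows, the Lp norm approaches the maximum absolute element
for p in (1, 2, 10, 100):
    print(p, np.sum(np.abs(v) ** p) ** (1 / p))

print(np.linalg.norm(v, ord=np.inf))  # 4.0 — the L-infinity norm
```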

Let's understand how different sets of vectors behave under a given constraint.

For a variety of norms, what is the set of vectors whose norm equals 1, i.e. ∥v∥ = 1?

## L1 or Manhattan Norm

Under the constraint above, the equation of the L1 norm can be written as,

Using this, we can calculate v2 for any value of v1 and plot it on the graph.

## L2 or Euclidean Norm

Under the constraint above, the equation of the L2 norm can be written as,

Using this, we can calculate v2 for any value of v1 and plot it on the graph.

## Plot L1 and L2

After we plot L1 and L2 for different values of v1 and v2, we get a plot like the one below.

We will see a similar plot when we learn about L1 and L2 Regularization.

## Euclidean Distance

It is the length of the line segment between two points. Here we use Euclidean geometry, like taking a flight to travel from one place to another.

## Manhattan Distance

It is the sum of the absolute differences between the measures in all dimensions of two points. Here we use taxicab geometry, like taking a cab in downtown Manhattan to travel from one place to another.
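Both distances fall straight out of the norms above; here is a NumPy sketch (illustration only):

```python
import numpy as np

a = np.array([0.0, 0.0])
b = np.array([3.0, 4.0])

euclidean = np.linalg.norm(a - b)  # straight-line "flight" distance
manhattan = np.sum(np.abs(a - b))  # grid-travel "taxicab" distance
print(euclidean, manhattan)        # 5.0 7.0
```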

I hope this article provides you with a good understanding of some important concepts of **Vectors**.

If you have any questions, or if you find anything misrepresented, please let me know.

Thanks!
