Linear Algebra and Matrix
Matrix & Determinants
Content
🌞What are Matrices?
👉Order of Matrix
👉Matrices Examples
🌞Operation on Matrices:
👉Addition of Matrices
👉Scalar Multiplication of Matrices
👉Multiplication of Matrices
👉Properties of Matrix Addition and Multiplication
👉Transpose of Matrix
👉Trace of Matrix
🌞Types of Matrices
🌞Determinant of a Matrix
🌞Inverse of a Matrix
🌞Solving Linear Equation Using Matrices
🌞Rank of a Matrix
🌞Eigen Value and Eigen Vectors of Matrices
✍️Matrices are rectangular arrays of numbers, symbols, or characters in which the elements are arranged in rows and columns. An array is a collection of items arranged at different locations.
✍️Let’s assume points are arranged in space each belonging to a specific location then an array of points is formed. This array of points is called a matrix. The items contained in a matrix are called Elements of the Matrix. Each matrix has a finite number of rows and columns and each element belongs to these rows and columns only. The number of rows and columns present in a matrix determines the order of the matrix. Let’s say a matrix has 3 rows and 2 columns then the order of the matrix is given as 3⨯2.
Matrices Definition
✍️A rectangular array of numbers, symbols, or characters is called a Matrix. Matrices are identified by their order. The order of the matrices is given in the form of a number of rows ⨯ number of columns. A matrix is represented as [P]m⨯n where P is the matrix, m is the number of rows and n is the number of columns. Matrices in maths are useful in solving numerous problems of linear equations and many more.
✍️Order of a Matrix tells about the number of rows and columns present in a matrix. Order of a matrix is represented as the number of rows times the number of columns. Let’s say if a matrix has 4 rows and 5 columns then the order of the matrix will be 4⨯5. Always remember that the first number in the order signifies the number of rows present in the matrix and the second number signifies the number of columns in the matrix.
✍️Matrices undergo various mathematical operations such as addition, subtraction, scalar multiplication, and multiplication. These operations are performed between the elements of two matrices to give an equivalent matrix that contains the elements which are obtained as a result of the operation between elements of two matrices.
✍️In addition of matrices, the elements of two matrices are added to yield a matrix that contains elements obtained as the sum of two matrices. The addition of matrices is performed between two matrices of the same order.
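The element-wise addition rule can be sketched in Python with a small 2×2 example (the values here are arbitrary, chosen only for illustration):

```python
# Element-wise addition of two matrices of the same order (here 2x2).
A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]

# C[i][j] = A[i][j] + B[i][j]
C = [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]
print(C)  # [[6, 8], [10, 12]]
```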
✍️Subtraction of Matrices is the difference between the elements of two matrices of the same order to give an equivalent matrix of the same order whose elements are equal to the difference of elements of two matrices. The subtraction of two matrices can be represented in terms of the addition of two matrices. Let’s say we have to subtract matrix B from matrix A then we can write A – B. We can also rewrite it as A + (-B). Let’s solve an example:
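A minimal sketch of A – B written as A + (–B), using arbitrary illustrative values:

```python
# Subtraction as A + (-B): negate every element of B, then add element-wise.
A = [[5, 8],
     [3, 4]]
B = [[2, 6],
     [1, 9]]

neg_B = [[-x for x in row] for row in B]
diff = [[A[i][j] + neg_B[i][j] for j in range(2)] for i in range(2)]
print(diff)  # [[3, 2], [2, -5]]
```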
✍️Scalar Multiplication of matrices refers to the multiplication of each term of a matrix with a scalar term. If a scalar let’s ‘k’ is multiplied by a matrix then the equivalent matrix will contain elements equal to the product of the scalar and the element of the original matrix. Let’s see an example:
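As a quick illustration, multiplying a matrix by the scalar k = 3 (values are arbitrary):

```python
# Scalar multiplication: every element of the matrix is multiplied by k.
k = 3
A = [[1, -2],
     [0, 5]]

kA = [[k * x for x in row] for row in A]
print(kA)  # [[3, -6], [0, 15]]
```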
✍️In the multiplication of matrices, two matrices are multiplied to yield a single equivalent matrix. The multiplication is performed in the manner that the elements of the row of the first matrix multiply with the elements of the columns of the second matrix and the product of elements are added to yield a single element of the equivalent matrix. If a matrix [A]i⨯j is multiplied with matrix [B]j⨯k then the product is given as [AB]i⨯k.
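The row-times-column rule can be sketched for a 2×3 matrix times a 3×2 matrix, which yields a 2×2 product (illustrative values):

```python
# Each entry of the product is the dot product of a row of A with a column of B.
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[7, 8],
     [9, 10],
     [11, 12]]

rows, inner, cols = len(A), len(B), len(B[0])
AB = [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
      for i in range(rows)]
print(AB)  # [[58, 64], [139, 154]]
```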
✍️Properties followed by the Addition and Multiplication of Matrices are listed below:
🌞A + B = B + A (Commutative)
🌞(A + B) + C = A + (B + C) (Associative)
🌞AB ≠ BA in general (Not Commutative)
🌞(AB) C = A (BC) (Associative)
🌞A (B+C) = AB + AC (Distributive)
✍️Transpose of a Matrix is the rearrangement of row elements into columns and column elements into rows, yielding an equivalent matrix. A matrix in which the rows of the original matrix are arranged as columns (or vice versa) is called the Transpose Matrix. The transpose of A is represented as A^T. If A = [a_ij] of order m×n, then A^T = [b_ij] of order n×m, where b_ij = a_ji.
Properties of the Transpose of a Matrix
✍️Properties of the transpose of a matrix are mentioned below:
🌞(A^T)^T = A
🌞(A + B)^T = A^T + B^T
🌞(AB)^T = B^T A^T
✍️Trace of a Matrix is the sum of the principal diagonal elements of a square matrix. Trace of a matrix is only found in the case of a square matrix because diagonal elements exist only in square matrices. Let’s see an example.
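Both the transpose and the trace can be sketched together for a 3×3 matrix (values are arbitrary):

```python
A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

# Transpose: rows become columns (b_ij = a_ji).
AT = [[A[j][i] for j in range(len(A))] for i in range(len(A[0]))]

# Trace: sum of the principal diagonal elements of a square matrix.
trace = sum(A[i][i] for i in range(len(A)))

print(AT)     # [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
print(trace)  # 1 + 5 + 9 = 15
```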
Based on the number of rows and columns present and the special characteristics shown, matrices are classified into various types.
🌞Row Matrix: A matrix that has only one row (and any number of columns) is called a Row Matrix.
🌞Column Matrix: A matrix that has only one column (and any number of rows) is called a Column Matrix.
🌞Horizontal Matrix: A Matrix in which the number of rows is less than the number of columns is called a Horizontal Matrix.
🌞Vertical Matrix: A Matrix in which the number of columns is less than the number of rows is called a Vertical Matrix.
🌞Rectangular Matrix: A Matrix in which the number of rows and columns are unequal is called a Rectangular Matrix.
🌞Square Matrix: A matrix in which the number of rows and columns are the same is called a Square Matrix.
🌞Diagonal Matrix: A square matrix in which the non-diagonal elements are zero is called a Diagonal Matrix.
🌞Zero or Null Matrix: A matrix whose elements are all zero is called a Zero Matrix. A zero matrix is also called a Null Matrix.
🌞Unit or Identity Matrix: A diagonal matrix whose all diagonal elements are 1 is called a Unit Matrix. A unit matrix is also called an Identity matrix. An identity matrix is represented by I.
🌞Symmetric Matrix: A square matrix is said to be symmetric if the transpose of the matrix is equal to the matrix itself, i.e. A^T = A.
🌞Skew-symmetric Matrix: A skew-symmetric (or antisymmetric) matrix is a square matrix whose transpose equals its negative, i.e. A^T = -A.
🌞Orthogonal Matrix: A matrix is said to be orthogonal if AA^T = A^T A = I.
🌞Idempotent Matrix: A matrix is said to be idempotent if A² = A.
🌞Involutory Matrix: A matrix is said to be involutory if A² = I.
🌞Upper Triangular Matrix: A square matrix in which all the elements below the diagonal are zero is known as an upper triangular matrix.
🌞Lower Triangular Matrix: A square matrix in which all the elements above the diagonal are zero is known as a lower triangular matrix.
🌞Strictly Triangular Matrix: A triangular matrix is referred to as a strictly triangular matrix if all the elements of the principal diagonal are zero.
🌞Strictly Lower Triangular Matrix: A lower triangular matrix is referred to as a strictly lower triangular matrix if all the elements of the principal diagonal are zero.
🌞Strictly Upper Triangular Matrix: An upper triangular matrix is referred to as a strictly upper triangular matrix if all the elements of the principal diagonal are zero.
🌞Singular Matrix: A square matrix is said to be a singular matrix if its determinant is zero i.e. |A|=0
🌞Nonsingular Matrix: A square matrix is said to be a non-singular matrix if its determinant is non-zero.
Note: Every square matrix can be uniquely expressed as the sum of a symmetric matrix and a skew-symmetric matrix: A = 1/2 (A + A^T) + 1/2 (A - A^T).
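The decomposition in the note above can be verified numerically; a minimal sketch with an arbitrary 2×2 matrix:

```python
A = [[1, 2],
     [5, 3]]
n = len(A)
AT = [[A[j][i] for j in range(n)] for i in range(n)]

# Symmetric part: 1/2 (A + A^T); skew-symmetric part: 1/2 (A - A^T).
S = [[(A[i][j] + AT[i][j]) / 2 for j in range(n)] for i in range(n)]
K = [[(A[i][j] - AT[i][j]) / 2 for j in range(n)] for i in range(n)]

# S is symmetric, K is skew-symmetric, and their sum recovers A.
assert all(S[i][j] + K[i][j] == A[i][j] for i in range(n) for j in range(n))
print(S)  # [[1.0, 3.5], [3.5, 3.0]]
print(K)  # [[0.0, -1.5], [1.5, 0.0]]
```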
✍️Determinant of a matrix is a number associated with a square matrix. The determinant can only be calculated for a square matrix and is represented by |A|. It is calculated by adding the products of the elements of any one row or column with their corresponding cofactors.
✍️Minors and Cofactors are important for calculating the adjoint and inverse of a matrix. As the name suggests, a Minor is the determinant of the smaller matrix obtained for a particular element by deleting the row and column to which that element belongs. A cofactor is (-1)^(i+j) times the minor.
✍️They are the backbones of Linear Algebra and are used to find the value of a matrix’s determinant, adjoint, and inverse. Other than that there are many use cases in computer science for minors and cofactors. In this article, we will study minors and cofactors in detail. Other than that, we will also learn about the determinants, matrix inversion, and many more.
✍️Use the following steps to find the minor of any given matrix:
🌞Step 1: Hide the ith row and jth column of the matrix A, where the element aij lies.
🌞Step 2: Now compute the determinant of the matrix after the row and column is removed using step 1.
🌞Step 3: The result of Step 2 is the minor for the element in the ith row and jth column. Repeat the process for each element of the matrix to find the minors of all the elements.
✍️The cofactor of an element a_ij of a determinant, denoted by A_ij or C_ij, is defined as follows:
C_ij = (-1)^(i+j) M_ij
Where,
👉M_ij is the minor of the element a_ij, and
👉i and j respectively represent the row and column of the element (its position).
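The minor and cofactor computations above can be sketched for a 3×3 matrix (the matrix values are illustrative):

```python
def minor(A, i, j):
    """Delete row i and column j, then take the determinant of what remains."""
    sub = [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]
    # A is assumed 3x3 here, so the remaining submatrix is 2x2.
    return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]

def cofactor(A, i, j):
    # C_ij = (-1)^(i+j) * M_ij
    return (-1) ** (i + j) * minor(A, i, j)

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(minor(A, 0, 0))     # det([[5, 6], [8, 10]]) = 50 - 48 = 2
print(cofactor(A, 0, 1))  # -(det([[4, 6], [7, 10]])) = -(40 - 42) = 2
```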
✍️Minors and Cofactors are used in the calculation of the following terms:
👉Adjoint of Matrix
👉Inverse of Matrix
✍️To calculate the adjoint of a matrix, follow these steps:
🌝Step 1: Calculate the cofactors of each element of a given matrix.
🌝Step 2: Construct the matrix from the cofactor of elements.
🌝Step 3: Calculate the Transpose of the resultant matrix in Step 2.
🌝Step 4: Resulting matrix of Step 3 is the adjoint of the given matrix.
✍️Properties of the Adjoint of a matrix (of order n) are mentioned below:
👉A·adj(A) = adj(A)·A = |A| I
👉adj(AB) = adj(B)·adj(A)
👉|adj(A)| = |A|^(n-1)
✍️The inverse of a square matrix A, written A^(-1), is the matrix that satisfies A·A^(-1) = A^(-1)·A = I. The inverse is only calculated for a square matrix whose determinant is non-zero. The formula for the inverse of a matrix is given as:
A^(-1) = adj(A)/det(A) = (1/|A|)(adj A), where |A| must not be zero, which means matrix A must be non-singular.
👉(A^(-1))^(-1) = A
👉(AB)^(-1) = B^(-1)A^(-1)
👉Only a non-singular square matrix can have an inverse.
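The adjoint/determinant formula can be sketched for a 2×2 matrix, where the adjoint has a simple closed form (the matrix values are illustrative):

```python
# Inverse of a 2x2 non-singular matrix via A^(-1) = adj(A) / det(A).
A = [[4, 7],
     [2, 6]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # 24 - 14 = 10
assert det != 0, "a singular matrix has no inverse"

# adj(A) for a 2x2 matrix: swap the diagonal entries, negate the off-diagonal ones.
adj = [[A[1][1], -A[0][1]],
       [-A[1][0], A[0][0]]]
A_inv = [[adj[i][j] / det for j in range(2)] for i in range(2)]
print(A_inv)  # [[0.6, -0.7], [-0.2, 0.4]]
```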
✍️Elementary Operations on Matrices are performed to solve linear equations and to find the inverse of a matrix. Elementary operations are performed either on rows or on columns, and there are three types of each. These operations are mentioned below:
✍️Elementary operations on rows include:
👉Interchanging two rows
👉Multiplying a row by a non-zero number
👉Adding a multiple of one row to another row
✍️Elementary operations on columns include:
👉Interchanging two columns
👉Multiplying a column by a non-zero number
👉Adding a multiple of one column to another column
✍️A matrix formed by appending the columns of one matrix to another is called an Augmented Matrix. An augmented matrix is used to perform elementary row operations, solve a linear equation, and find the inverse of a matrix. Let us understand through an example.
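A minimal sketch of forming the augmented matrix [A | B] from a coefficient matrix and a constant column (the system used is illustrative):

```python
# Form the augmented matrix [A | B] by appending the column of B to A.
A = [[2, 3],
     [1, -1]]
B = [[8],
     [-1]]

augmented = [row_a + row_b for row_a, row_b in zip(A, B)]
print(augmented)  # [[2, 3, 8], [1, -1, -1]]
```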
✍️Matrices are used to solve linear equations. To solve linear equations we need to make three matrices. The first matrix is of coefficients, the second matrix is of variables and the third matrix is of constants. Let’s understand it through an example.
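As an example of the coefficient/variable/constant setup, the 2×2 system 2x + 3y = 8, x - y = -1 can be solved with Cramer's rule (determinants of the coefficient matrix with columns replaced by the constants):

```python
# Solve [A][X] = [B] for the system 2x + 3y = 8, x - y = -1 via Cramer's rule.
A = [[2, 3],
     [1, -1]]
B = [8, -1]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # |A| = -2 - 3 = -5
det_x = B[0] * A[1][1] - A[0][1] * B[1]      # replace column 1 by B: -8 + 3 = -5
det_y = A[0][0] * B[1] - B[0] * A[1][0]      # replace column 2 by B: -2 - 8 = -10

x, y = det_x / det, det_y / det
print(x, y)  # 1.0 2.0  (check: 2*1 + 3*2 = 8 and 1 - 2 = -1)
```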
✍️Rank of Matrix is given by the maximum number of linearly independent rows or columns of a matrix. The rank of a matrix is always less than or equal to the total number of rows or columns present in a matrix. A square matrix has linearly independent rows or columns if the matrix is non-singular i.e. determinant is not equal to zero. Since a zero matrix has no linearly independent rows or columns its rank is zero.
✍️Rank of a matrix can be calculated by converting the matrix into Row-Echelon Form. In row-echelon form, we use Elementary Row Operations to make the entries below each leading entry zero. After the reduction, the number of rows that have at least one non-zero element is the rank of the matrix. The rank of matrix A is represented by ρ(A).
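The row-echelon reduction described above can be sketched in plain Python; the example matrix has a row that is twice another row, so its rank is 2:

```python
def rank(M):
    """Rank via reduction to row-echelon form using elementary row operations."""
    A = [[float(x) for x in row] for row in M]  # work on a float copy
    rows, cols = len(A), len(A[0])
    r = 0                                       # index of the current pivot row
    for c in range(cols):
        # Find a row at or below r with a non-zero entry in column c.
        pivot = next((i for i in range(r, rows) if abs(A[i][c]) > 1e-12), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]         # interchange rows
        for i in range(r + 1, rows):            # eliminate entries below the pivot
            f = A[i][c] / A[r][c]
            A[i] = [A[i][j] - f * A[r][j] for j in range(cols)]
        r += 1
    return r                                    # number of non-zero rows = rank

print(rank([[1, 2, 3], [2, 4, 6], [1, 0, 1]]))  # 2 (second row = 2 * first row)
```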
✍️Eigenvalues are the set of scalars associated with a linear system of equations in matrix form. Eigenvalues are also called characteristic roots of a matrix. The non-zero vectors whose direction is unchanged by the matrix are called Eigenvectors. An eigenvalue only changes the magnitude of its eigenvector: unlike a general vector, an eigenvector does not change direction under the associated linear transformation.
For a square matrix A of order n, another square matrix (A – λI) of the same order is formed, where I is the identity matrix and λ is an eigenvalue. The eigenvalues are the roots of the characteristic equation det(A – λI) = 0, and each eigenvalue λ satisfies Av = λv for some non-zero vector v (its eigenvector).
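For a 2×2 matrix the characteristic equation det(A – λI) = 0 expands to λ² – trace(A)·λ + det(A) = 0, so the eigenvalues follow from the quadratic formula; a minimal sketch with an arbitrary matrix:

```python
import math

# Eigenvalues of a 2x2 matrix from lambda^2 - trace(A)*lambda + det(A) = 0.
A = [[4, 1],
     [2, 3]]
tr = A[0][0] + A[1][1]                        # trace = 7
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # determinant = 10
disc = math.sqrt(tr * tr - 4 * det)           # sqrt(49 - 40) = 3

eig1, eig2 = (tr + disc) / 2, (tr - disc) / 2
print(eig1, eig2)  # 5.0 2.0

# Check Av = lambda*v for the eigenvector v = (1, 1) of eigenvalue 5.
v = (1, 1)
Av = (A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1])
assert Av == (5, 5)
```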
✍️Now, we will discuss the following operations on matrices and their properties:
👉Matrices Addition
👉Matrices Subtraction
👉Matrices Multiplication
Python Code for Addition, Subtraction and Multiplication
🌝Implementation of the above approaches:
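The approaches above can be implemented in plain Python as small helper functions (the test matrices are arbitrary 2×2 examples):

```python
def add(A, B):
    """Element-wise sum of two matrices of the same order."""
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def subtract(A, B):
    """Element-wise difference of two matrices of the same order."""
    return [[A[i][j] - B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def multiply(A, B):
    """Row-by-column product; columns of A must match rows of B."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(add(A, B))       # [[6, 8], [10, 12]]
print(subtract(A, B))  # [[-4, -4], [-4, -4]]
print(multiply(A, B))  # [[19, 22], [43, 50]]
```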
🌝What is Relation in Mathematics?
🌝Relations Examples
🌝Representation of Relations
🌝Sets and Relations
🌝Types of Relation
🌝Graphing Relations
✍️Relation in Mathematics is defined as a relationship between two sets. If we are given two sets, set A and set B, and there is a relation from A to B, then elements of set A are related to elements of set B through that relation. Here, set A is called the domain of the relation, and the set of related values in set B is called the range of the relation.
For example, if we are given two sets, Set A = {1, 2, 3, 4} and Set B = {1, 4, 9, 16}, then the set of ordered pairs {(1, 1), (2, 4), (3, 9), (4, 16)} represents the relation R: A → B defined as R = {(x, y) : y = x², x ∈ A, y ∈ B}.
✍️A relation connects two different sets of information. Suppose we are given two sets containing different values; then a rule that connects the values of the first set with the values of the second set is called a relation.
Suppose we are given a set A that contains the name of girls of a class and another set B that contains the height of girls then a relation connects set A with set B. In mathematical terms, we can say that,
“A set of ordered pairs is a relation”
✍️Examples of relations in mathematics include:
Suppose there are two sets X = {4, 36, 49, 50} and Y = {1, -2, -6, -7, 7, 6, 2}. A relation R states that
“(x, y) is in the relation R if x is a square of y” can be represented using ordered pairs,
👉R = {(4, -2), (4, 2), (36, -6), (36, 6), (49, -7), (49, 7)}
Also, the image added below shows two sets A and B, and the relation between them,
Set A = {x, y, z}
Set B = {1, 2, 3}
✍️In mathematics, or in set theory, we can represent a relation using different techniques; two important ways to represent a relation are,
👉Set Builder Notation
👉Roster Notation
Let’s study them in detail in the article below,
✍️If the relation between two sets is described using a logical formula (a defining property), then this type of representation is called Set-Builder Notation.
✍️For example, if we are given two sets, set X = {2, 4, 6} and set Y = {4, 8, 12}, then after observing closely we can see that each element of set Y is twice the corresponding element of set X, so the relation between them is,
👉R = {(a, b) : b is twice a, a ∈ X, b ∈ Y}
✍️Roster form is another way of representing a relation. In roster form, we list the ordered pairs that make up the relation.
✍️For example, if we are given two sets, set X = {2, 4, 6} and set Y = {4, 8, 12}, then the relation between set X and set Y is represented using the relation R such that,
👉R = {(2, 4), (4, 8), (6, 12)}
🌝Types of Sets in Mathematics:
👉Singleton Set
👉Empty Set
👉Finite Set
👉Infinite Set
👉Equal Set
👉Equivalent Set
👉Subset
👉Power Set
👉Universal Set
👉Disjoint Sets
🌝Sets and Relations:
✍️A well-defined collection of objects, items, or data is known as a set. The objects or data in a set are known as its elements. For example, the boys in a classroom form one set, all integers from 1 to 100 form another set, and all prime numbers form an infinite set.
And a relation is a relationship that connects the values of two sets, so sets and relations are connected to each other. For example, if we are given two sets
👉Set A = {-2, -1, 0, 1, 2}
👉Set B = {2, 3, 4, 5, 6}
Then the relation that connects the two sets, set A and set B,
R = {(-2, 2), (-1, 3), (0, 4), (1, 5), (2, 6)}
✍️Various types of relations defined in mathematics are,
👉Empty Relation
👉Reflexive Relation
👉Symmetric Relation
👉Transitive Relation
👉Equivalence Relation
👉Universal Relation
👉Identity Relation
👉Inverse Relation
✍️A relation R is called Empty if no element of the first set is related to any element of the second set, i.e. R = ∅ (the empty set of ordered pairs). For example, with A = {1, 2, 3} and B = {5, 6, 7}, the relation R = {(x, y) : x + y = 22} is an empty relation, since no pair of elements sums to 22.
✍️A relation R on a set A is called reflexive if (a, a) ∈ R holds for every element a ∈ A, i.e. if set A = {a, b} then R = {(a, a), (b, b)} is a reflexive relation.
For example, A = {2, 3} then the reflexive relation R on A is,
R = {(2, 2), (3, 3)}
✍️A relation R on a set A is called symmetric if (b, a) ∈ R holds whenever (a, b) ∈ R, i.e. if a pair (a, b) is in R, then (b, a) must also be in R.
For example, A = {2, 3} then symmetric relation R on A is,
R = {(2, 3), (3, 2)}
✍️A relation R on a set A is called transitive if (a, b) ∈ R and (b, c) ∈ R together imply (a, c) ∈ R, for all a, b, c ∈ A.
For example, set A = {1, 2, 3} then transitive relation R on A is,
R = {(1, 2), (2, 3), (1, 3)}
✍️A relation is an Equivalence Relation if it is reflexive, symmetric, and transitive. i.e. relation R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 1), (2, 3), (3, 2), (1, 3), (3, 1)} on set A = {1, 2, 3} is equivalence relation as it is reflexive, symmetric, and transitive.
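The three defining conditions can be checked mechanically; a minimal sketch that tests the relation from the paragraph above, storing a relation as a set of ordered pairs:

```python
def is_equivalence(A, R):
    """Check that relation R (a set of pairs) on set A is reflexive, symmetric, and transitive."""
    reflexive = all((a, a) in R for a in A)
    symmetric = all((b, a) in R for (a, b) in R)
    transitive = all((a, d) in R for (a, b) in R for (c, d) in R if b == c)
    return reflexive and symmetric and transitive

A = {1, 2, 3}
R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 1), (2, 3), (3, 2), (1, 3), (3, 1)}
print(is_equivalence(A, R))                 # True
print(is_equivalence(A, {(1, 1), (2, 2)}))  # False: (3, 3) is missing, so not reflexive
```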
✍️Universal relation is a relation in which every element of one set is related to every element of the other set, i.e. R = A × B. For example, if A = {4, 8, 12} and B = {1, 2, 3}, then R = {(x, y) where x > y} is a universal relation, since every element of A is greater than every element of B.
✍️Identity relation is a relation in which every element of a set is related only to itself. It is defined as I = {(x, x) : for all x ∈ X}.
For example P = {1, 2, 3} then Identity Relation(I) = {(1, 1), (2, 2), (3, 3)}
✍️The inverse of a relation R, denoted R^(-1), is the relation obtained by reversing each ordered pair of R, i.e. R^(-1) = {(y, x) : (x, y) ∈ R}.
✍️Relations can easily be represented on graphs, and representing them on a graph is an easy way of explaining them. Each ordered pair in a relation represents a coordinate that can be plotted on the Cartesian coordinate system. We can graph a relation by following the steps below,
👉Substitute random numerical values for x in the relation.
👉Find the corresponding y value of the respective x value.
👉Write the ordered pair such that, {(x, y)}
👉Plot these points and join them to find the required curve.
✍️The graph of the relation y = x² is added below,
Q1. Check whether the relation R = {(a, b), (b, c), (c, a), (a, a)} is an Equivalence relation on set A = {a, b, c}
Q2. Check whether the relation R = {(1, 1), (1, 3), (1, 1), (2, 2)} is an Equivalence relation on set A = {1, 2, 3}
Q3. Find the inverse of the relation R = {(1, 1), (2, 4), (3, 9)}
Q4. Find the inverse of the relation R = {(2, 6), (3, 7), (5, 9)}
🌝Combining Relation:
🌝Complementary Relation:
🌝Representation of Relations and its Properties:
🌝Example:
✍️Determinant of a Matrix is defined as the function that assigns a unique real number to every square matrix. The determinant is considered the scaling factor used in the transformation of a matrix. It is useful for finding the solution of a system of linear equations, the inverse of a square matrix, and more. Determinants exist only for square matrices.
🌝Content:
👉Definition of Determinant of Matrix
👉Determinant of a 1×1 Matrix
👉Determinant of 2×2 Matrix
👉Determinant of a 3×3 Matrix
👉Determinant of 4×4 Matrix
👉Determinant of Identity Matrix
👉Determinant of Symmetric Matrix
👉Determinant of Skew-Symmetric Matrix
👉Determinant of Inverse Matrix
👉Determinant of Orthogonal Matrix
👉Physical Significance of Determinant
👉Laplace Formula for Determinant
👉Properties of Determinants of Matrix
Determinant of a Matrix is defined as the sum of products of the elements of any row or column along with their corresponding co-factors. Determinant is defined only for square matrices.
The determinant is defined for any square matrix of order 2×2, 3×3, 4×4, or, in general, n×n, where n is the number of rows (which equals the number of columns, since the matrix is square). The determinant can also be defined as a function that maps every square matrix to a real number.
For the set S of all square matrices and the set R of all real numbers, the function f : S → R defined as f(x) = y, where x ∈ S and y ∈ R, gives f(x) as the determinant of the input matrix.
Let’s take any square matrix A, then the determinant of A is denoted as det A (or) |A|. Determinant is also denoted by the symbol Δ.
🌝Determinant of a 3×3 Matrix:
🌝Determinant of 4×4 Matrix:
Determining the determinant of a 4×4 matrix involves more complex methods such as expansion by minors or Gaussian elimination. These techniques require breaking down the matrix into smaller submatrices and recursively finding their determinants. While there isn’t a direct formula like Sarrus’ Rule for 3×3 matrices, the process involves systematic calculations based on the properties of determinants.
How to Find Determinant of 4×4 Matrix:
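Expansion by minors along the first row can be written recursively; a minimal sketch, tested on a 4×4 lower-triangular matrix (whose determinant is simply the product of the diagonal entries):

```python
def det(M):
    """Determinant by recursive expansion along the first row (expansion by minors)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j; cofactor sign alternates as (-1)^j.
        sub = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(sub)
    return total

A = [[2, 0, 0, 0],
     [1, 3, 0, 0],
     [4, 5, 1, 0],
     [6, 7, 8, 2]]
print(det(A))  # lower triangular, so det = 2 * 3 * 1 * 2 = 12
```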
A symmetric matrix is a square matrix that is equal to its transpose. In other words, if A is a symmetric matrix, then A = A^T. Symmetric matrices have several interesting properties, one of which is that their determinant remains unchanged under transposition.
Hence, for a symmetric matrix A, we have: det(A) = det(A^T)
This property simplifies the computation of determinants for symmetric matrices since you can work with either the original matrix or its transpose, whichever is more convenient.
A skew-symmetric (or antisymmetric) matrix is a square matrix whose transpose is equal to its negative. In other words, if A is a skew-symmetric matrix, then A^T = -A. Skew-symmetric matrices have interesting properties, one of which is that their determinants have specific values based on the order of the matrix.
For skew-symmetric matrices of odd order, the determinant is always 0. This follows from the transpose property: det(A) = det(A^T) = det(-A) = (-1)^n det(A). When the order n is odd, this gives det(A) = -det(A), which forces det(A) = 0.
For skew-symmetric matrices of even order, the determinant can be non-zero; in fact, it equals the square of a polynomial in the matrix entries (the Pfaffian), so it is never negative. Computing the exact value typically involves more complex methods such as cofactor expansion or other properties of determinants.
✍️To understand the determinant of the inverse matrix, let’s first define the inverse of a matrix:
The inverse of a square matrix A, denoted A^(-1), is a matrix such that when it is multiplied by A, the result is the identity matrix I. Mathematically, if A·A^(-1) = A^(-1)·A = I, then A^(-1) is the inverse of A.
Now, the determinant of the inverse matrix, denoted det(A^(-1)), is related to the determinant of the original matrix A. Specifically, it is given by the formula:
det(A^(-1)) = 1/det(A)
This formula illustrates an important relationship between the determinants of a matrix and its inverse. If the determinant of A is non-zero, i.e. det(A) ≠ 0, then the inverse matrix exists, and its determinant is the reciprocal of the determinant of A. Conversely, if det(A) = 0, the matrix A is said to be singular, and it does not have an inverse.
✍️Here are some key points about the determinant of the inverse matrix:
Non-Singular Matrices: For non-singular matrices (those with non-zero determinants), their inverses exist, and the determinant of the inverse is the reciprocal of the determinant of the original matrix.
Singular Matrices: Singular matrices (those with zero determinants) do not have inverses. Attempting to find the inverse of a singular matrix results in an undefined or non-existent inverse.
Geometric Interpretation: The determinant of a matrix measures how it scales space. Similarly, the determinant of the inverse matrix measures the scaling effect of the inverse transformation: if the original transformation expands space by some factor, its inverse contracts space by the reciprocal factor, and vice versa.
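The reciprocal relationship det(A^(-1)) = 1/det(A) can be checked numerically; a minimal sketch with an arbitrary non-singular 2×2 matrix:

```python
A = [[4, 7],
     [2, 6]]
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # 24 - 14 = 10

# 2x2 inverse via adj(A) / det(A).
A_inv = [[A[1][1] / det_A, -A[0][1] / det_A],
         [-A[1][0] / det_A, A[0][0] / det_A]]
det_A_inv = A_inv[0][0] * A_inv[1][1] - A_inv[0][1] * A_inv[1][0]

print(det_A_inv)  # approximately 0.1, i.e. 1 / det(A)
assert abs(det_A_inv - 1 / det_A) < 1e-12
```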
An orthogonal matrix is a square matrix whose rows and columns are orthonormal vectors, meaning that the dot product of any two distinct rows or columns equals zero, and the dot product of each row or column with itself equals one. Mathematically, if A is an orthogonal matrix, then A^T·A = I, where A^T denotes the transpose of A and I represents the identity matrix.
The determinant of an orthogonal matrix has a special property: det(A) = +1 or -1.
This property arises from the fact that the determinant represents the scaling factor of the matrix transformation. Since orthogonal transformations preserve lengths, the determinant must have absolute value 1: positive for transformations that preserve orientation, negative for those that reverse it.
A determinant of +1 implies that the transformation preserves orientation, while a determinant of -1 indicates a transformation that reverses orientation.
Consider a 2×2 matrix; each column of this matrix can be viewed as a vector in the x-y plane. The absolute value of the determinant gives the area of the parallelogram enclosed by the two column vectors. Extending this idea to 3D, the determinant of a 3×3 matrix gives the volume of the parallelepiped spanned by its three column vectors.
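The area interpretation can be sketched with two arbitrary column vectors in the plane:

```python
# Area of the parallelogram spanned by u = (3, 1) and v = (1, 2):
# it is |det| of the 2x2 matrix having u and v as its columns.
u = (3, 1)
v = (1, 2)

det = u[0] * v[1] - v[0] * u[1]  # 3*2 - 1*1 = 5
area = abs(det)
print(area)  # 5
```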
✍️Various Properties of the Determinants of the square matrix are discussed below:
☘️Reflection Property: The value of a determinant remains unchanged when its rows and columns are interchanged. That is, the determinant of a matrix and that of its transpose are the same.
☘️Switching Property: If any two rows or columns of a determinant are interchanged, then the sign of the determinant changes.
☘️Scalar Multiplication Property: If each element in a row or column of a matrix A is multiplied by a scalar k, then the determinant of the resulting matrix is k times the determinant of A. Mathematically, if B is the matrix obtained by multiplying each element of a row or column of A by k, then det(B) = k⋅det(A).
☘️Additivity Caution: The determinant is not additive over matrix addition: in general, det(A + B) ≠ det(A) + det(B). The determinant is, however, linear in each individual row (or column) separately, which is what the Sum Property below describes.
☘️Multiplicative Property: The determinant of the product of two matrices A and B is equal to the product of their determinants. Symbolically, det(AB) = det(A)⋅det(B). However, this property holds true only for square matrices.
☘️Determinant of Transpose: The determinant of a matrix A is equal to the determinant of its transpose A^T. Mathematically, det(A) = det(A^T).
☘️Repetition Property/Proportionality Property: If any two rows or any two columns of a determinant are identical, then the value of the determinant becomes zero.
☘️Sum Property: If some or all elements of a row or column can be expressed as the sum of two or more terms, then the determinant can also be expressed as the sum of two or more determinants.