First, some clarifications. We are considering linear maps from a finite-dimensional vector space to itself (also called operators, written T ∈ L(V)), as opposed to linear maps from one vector space to a different vector space.
Suppose V is a finite-dimensional vector space, T ∈ L(V), and λ ∈ F.
Some definitions to know:

Invariant subspace: U is invariant under T if Tu ∈ U for every u ∈ U, i.e., the restriction T|_{U} is an operator on U.

λ ∈ F is an eigenvalue of T if ∃ v ∈ V s.t. v ≠ 0 and Tv = λv.

The eigenspace of T corresponding to λ is E(λ, T) = null(T − λI) (the set of all eigenvectors of T corresponding to λ, together with the 0 vector).
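These definitions can be sanity-checked numerically. Below is a minimal sketch using NumPy; the 2×2 matrix T is a made-up example (not from the post) with eigenvalues 2 and 3:

```python
import numpy as np

# Hypothetical 2x2 matrix of an operator T (an assumption for illustration).
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
eigvals, eigvecs = np.linalg.eig(T)   # columns of eigvecs are eigenvectors

# Tv = λv holds for each eigenpair.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(T @ v, lam * v)

# E(3, T) = null(T − 3I): v = (1, 1) is a nonzero vector in it.
v = np.array([1.0, 1.0])
assert np.allclose((T - 3.0 * np.eye(2)) @ v, 0.0)
```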

Concepts from a previous post
Basics

A linear map is invertible if and only if it is injective and surjective.

For operators on a finite-dimensional vector space, however, injectivity and surjectivity are equivalent, so either condition alone implies invertibility. (proved using the rank–nullity theorem)

If λ is an eigenvalue of T, then by definition there is a nonzero v with Tv = λv, i.e., v ∈ ker(T − λI), so ker(T − λI) ≠ {0}. Hence T − λI is not injective (equivalently, not surjective). That is, T − λI is not invertible and det(T − λI) = 0.
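A small numerical illustration of this chain of implications (the matrix is a hypothetical example with eigenvalue λ = 2): the matrix of T − λI is singular, so its determinant vanishes and its rank drops.

```python
import numpy as np

# Hypothetical matrix with eigenvalue λ = 2 (an assumption for illustration).
A = np.array([[2.0, 5.0],
              [0.0, 7.0]])
lam = 2.0
M = A - lam * np.eye(2)                  # matrix of T − λI

assert np.isclose(np.linalg.det(M), 0)   # singular: det(T − λI) = 0
assert np.linalg.matrix_rank(M) < 2      # nontrivial kernel: not injective
```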

Eigenvectors corresponding to distinct eigenvalues are linearly independent.

Each operator on V has at most dim V distinct eigenvalues.
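The independence claim above can be checked numerically: for a hypothetical matrix with two distinct eigenvalues, stacking the eigenvectors as columns gives a full-rank matrix.

```python
import numpy as np

# Hypothetical matrix with distinct eigenvalues 2 and 3.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
_, eigvecs = np.linalg.eig(T)

# Eigenvectors for distinct eigenvalues are linearly independent,
# so the matrix whose columns they form has full rank.
assert np.linalg.matrix_rank(eigvecs) == 2
```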

If A is an n × n matrix, then the sum of the n eigenvalues of A (counted with multiplicity, over C) is the trace of A, and the product of the n eigenvalues is the determinant of A.
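A quick NumPy check of both identities on a made-up matrix (eigenvalues 5 and 2, so trace 7 and determinant 10):

```python
import numpy as np

# Made-up example matrix; its eigenvalues are 5 and 2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals = np.linalg.eigvals(A)

assert np.isclose(eigvals.sum(), np.trace(A))         # sum = trace = 7
assert np.isclose(eigvals.prod(), np.linalg.det(A))   # product = det = 10
```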

If λ is an eigenvalue of T, then λ^{2} is an eigenvalue of T^{2}.
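Checked numerically on a hypothetical matrix with eigenvalues 5 and 2:

```python
import numpy as np

# Hypothetical matrix (an assumption for illustration); eigenvalues 5 and 2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

e1 = np.sort(np.linalg.eigvals(A))       # eigenvalues of A
e2 = np.sort(np.linalg.eigvals(A @ A))   # eigenvalues of A²
assert np.allclose(e1**2, e2)            # each λ² is an eigenvalue of A²
```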

An n × n matrix A and its transpose A^{T} have the same eigenvalues.
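A one-line NumPy check on an arbitrary example matrix:

```python
import numpy as np

# Arbitrary example matrix (hypothetical).
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# A and its transpose share the same spectrum.
assert np.allclose(np.sort(np.linalg.eigvals(A)),
                   np.sort(np.linalg.eigvals(A.T)))
```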

Suppose A and B are similar matrices. Then A and B have the same characteristic polynomial and hence the same eigenvalues.
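A numerical sketch, where B = P⁻¹AP for a hypothetical invertible change-of-basis matrix P:

```python
import numpy as np

# Hypothetical A and invertible change-of-basis P.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.linalg.inv(P) @ A @ P   # B is similar to A

# Same characteristic polynomial coefficients, hence same eigenvalues.
assert np.allclose(np.poly(A), np.poly(B))
assert np.allclose(np.sort(np.linalg.eigvals(A)),
                   np.sort(np.linalg.eigvals(B)))
```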

If λ is an eigenvalue of A, then the dimension of the eigenspace E_{λ} (the geometric multiplicity of λ) is at most the algebraic multiplicity of λ.
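The inequality can be strict. For the shear matrix below (a standard example), λ = 1 has algebraic multiplicity 2 but the eigenspace is only 1-dimensional:

```python
import numpy as np

# Shear matrix: λ = 1 has algebraic multiplicity 2.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Geometric multiplicity = dim null(A − I) = 2 − rank(A − I) = 1 < 2.
geo = 2 - np.linalg.matrix_rank(A - np.eye(2))
assert geo == 1
```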
Intermediate

Any two polynomials of an operator commute.

Every operator on a finite-dimensional, nonzero, complex vector space has an eigenvalue.
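This is where working over C matters: a real matrix can have no real eigenvalues, but viewed over C an eigenvalue always exists. For example, a 90° rotation of the plane has eigenvalues ±i:

```python
import numpy as np

# 90° rotation of R² (a standard example): no real eigenvalue exists,
# but over C the eigenvalues are ±i.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
eigvals = np.linalg.eigvals(R)
assert np.allclose(np.sort_complex(eigvals), [-1j, 1j])
```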

Over C, every operator has an upper-triangular matrix with respect to some basis.

Suppose T ∈ L(V) has an upper-triangular matrix with respect to some basis of V. Then T is invertible if and only if all the entries on the diagonal of that upper-triangular matrix are nonzero.

Suppose T ∈ L(V) has an upper-triangular matrix with respect to some basis of V. Then the eigenvalues of T are precisely the entries on the diagonal of that upper-triangular matrix.
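Both upper-triangular facts can be verified numerically on a made-up triangular matrix:

```python
import numpy as np

# Made-up upper-triangular matrix.
U = np.array([[2.0, 7.0, 1.0],
              [0.0, 5.0, 3.0],
              [0.0, 0.0, 9.0]])

# Eigenvalues are exactly the diagonal entries.
assert np.allclose(np.sort(np.linalg.eigvals(U)), np.sort(np.diag(U)))
# All diagonal entries nonzero ⟺ invertible (nonzero determinant).
assert np.all(np.diag(U) != 0) and not np.isclose(np.linalg.det(U), 0)
```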

Let λ_{1}, …, λ_{m} denote the distinct eigenvalues of T. Then, T is diagonalizable
<=> V has a basis consisting of eigenvectors of T
<=> ∃ 1dimensional subspaces U_{1}, …, U_{n} of V, each invariant under T, such that V = U_{1} ⊕ … ⊕ U_{n}
<=> V = E(λ_{1}, T) ⊕ … ⊕ E(λ_{m}, T)
<=> dim V = dim E(λ_{1}, T) + … + dim E(λ_{m}, T)
If T has dim V distinct eigenvalues (the maximum possible), then T is diagonalizable. (The converse is not true: the identity operator is diagonalizable but has only one eigenvalue, λ = 1.)
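A numerical sketch of this last fact, using a hypothetical 2×2 matrix with dim V = 2 distinct eigenvalues (5 and 2), so it factors as A = P D P⁻¹ with eigenvectors as the columns of P:

```python
import numpy as np

# Hypothetical matrix with two distinct eigenvalues (5 and 2),
# hence diagonalizable.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors
D = np.diag(eigvals)            # diagonal matrix of eigenvalues

assert np.allclose(A, P @ D @ np.linalg.inv(P))   # A = P D P⁻¹
```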