
Math 401, Topic 2: Finite-dimensional Hilbert spaces

Recall that a complex number is a pair of real numbers, $z=(a,b)$, with addition and multiplication defined by

$$(a,b)+(c,d)=(a+c,\,b+d), \qquad (a,b)\cdot(c,d)=(ac-bd,\,ad+bc)$$

or, in polar form,

$$z=re^{i\theta}=r(\cos\theta+i\sin\theta)$$

where $r=\sqrt{a^2+b^2}=\sqrt{z\overline{z}}$ and $\theta=\tan^{-1}(b/a)$.

The complex conjugate of $z$ is $\overline{z}=(a,-b)$.
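
A quick numerical illustration in Python, using the built-in `complex` type and `cmath` (the specific numbers are arbitrary):

```python
import cmath

z = complex(3, 4)          # z = (3, 4), i.e. 3 + 4i
w = complex(1, -2)

# addition and multiplication follow the rules above
print(z + w)               # (4+2j)
print(z * w)               # (3*1 - 4*(-2)) + (3*(-2) + 4*1)i = (11-2j)

# polar form: r = sqrt(a^2 + b^2), theta = atan2(b, a)
r, theta = cmath.polar(z)
print(r, theta)            # 5.0 0.927...

# complex conjugate and |z|^2 = z * conj(z)
print(z.conjugate())                         # (3-4j)
print(abs(z)**2, (z * z.conjugate()).real)   # 25.0 25.0
```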

Section 1: Finite-dimensional Complex Vector Spaces

Here, the field $\mathbb{F}$ will be either the field $\mathbb{C}$ of complex numbers or the field $\mathbb{R}$ of real numbers.

Definition of vector space

A vector space $\mathscr{V}$ over a field $\mathbb{F}$ is a set equipped with an addition and a scalar multiplication satisfying the following axioms:

  1. Associativity and commutativity of addition: for all $u,v,w\in \mathscr{V}$,
$$(u+v)+w=u+(v+w), \qquad u+v=v+u$$
  2. Additive identity: there exists an element $0\in \mathscr{V}$ such that $v+0=v$ for all $v\in \mathscr{V}$.
  3. Additive inverse: for each $v\in \mathscr{V}$, there exists an element $-v\in \mathscr{V}$ such that $v+(-v)=0$.
  4. Multiplicative identity: the element $1\in \mathbb{F}$ satisfies $1\cdot v=v$ for all $v\in \mathscr{V}$.
  5. Compatibility of scalar multiplication: for all $v\in \mathscr{V}$ and $c,d\in \mathbb{F}$, $(cd)v=c(dv)$.
  6. Distributivity: for all $u,v\in \mathscr{V}$ and $c,d\in \mathbb{F}$,
$$c(u+v)=cu+cv, \qquad (c+d)v=cv+dv$$

A vector in $\mathbb{F}^n$ is an ordered $n$-tuple of elements of the field $\mathbb{F}$.

If we take $\mathscr{V}=\mathbb{C}^n$, $n\in \mathbb{N}$, then $u=(a_1,a_2,\dots,a_n),\ v=(b_1,b_2,\dots,b_n)\in \mathbb{C}^n$ are vectors.

The addition and scalar multiplication are defined by

$$u+v=(a_1+b_1,a_2+b_2,\dots,a_n+b_n), \qquad cu=(ca_1,ca_2,\dots,ca_n), \qquad c\in \mathbb{C}.$$

The matrix transpose is defined by

$$u^\top=(a_1,a_2,\dots,a_n)^\top=\begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}$$

The complex conjugate transpose is defined by

$$u^*=(a_1,a_2,\dots,a_n)^*=\begin{pmatrix} \overline{a_1} \\ \overline{a_2} \\ \vdots \\ \overline{a_n} \end{pmatrix}$$

In physics, the complex conjugate is sometimes denoted by $z^*$ instead of $\overline{z}$, and the complex conjugate transpose by $u^\dagger$ instead of $u^*$.

Hermitian inner product and norms

On $\mathbb{C}^n$, the Hermitian inner product is defined by

$$\langle u,v\rangle=\sum_{i=1}^n \overline{u_i}\,v_i$$

The norm is defined by

$$\|u\|=\sqrt{\langle u,u\rangle}$$
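
In coordinates, this is what `numpy` computes; a minimal sketch (the vectors are arbitrary, and `np.vdot` conjugates its first argument, matching the convention above):

```python
import numpy as np

u = np.array([1 + 1j, 2 - 1j, 0.5j])
v = np.array([2, 1j, -1 + 1j])

# Hermitian inner product <u, v> = sum_i conj(u_i) v_i
inner = np.vdot(u, v)
norm_u = np.sqrt(np.vdot(u, u).real)

print(inner)
print(norm_u, np.linalg.norm(u))   # the two agree
```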

Definition of Inner product

Let $\mathscr{H}$ be a complex vector space. An inner product on $\mathscr{H}$ is a function $\langle \cdot, \cdot \rangle: \mathscr{H}\times \mathscr{H}\to \mathbb{C}$ satisfying the following axioms:

  1. Linearity in the second argument: for each $u\in \mathscr{H}$, the map $v\mapsto \langle u,v\rangle$ is linear, i.e.
$$\langle u,av+bw\rangle=a\langle u,v\rangle+b\langle u,w\rangle$$
for all $u,v,w\in \mathscr{H}$ and $a,b\in \mathbb{C}$.

  2. Conjugate symmetry: for all $u,v\in \mathscr{H}$, $\langle u,v\rangle=\overline{\langle v,u\rangle}$. Combined with axiom 1, this makes $u\mapsto \langle u,v\rangle$ a conjugate linear map.

  3. Positive definiteness: $\langle u,u\rangle\geq 0$, and $\langle u,u\rangle=0$ if and only if $u=0$.

Definition of norm

Let $\mathscr{H}$ be a complex vector space. A norm on $\mathscr{H}$ is a function $\|\cdot\|: \mathscr{H}\to \mathbb{R}$ satisfying the following axioms:

  1. For all $u\in \mathscr{H}$, $\|u\|\geq 0$, and $\|u\|=0$ if and only if $u=0$.

  2. For all $u\in \mathscr{H}$ and $c\in \mathbb{C}$, $\|cu\|=|c|\,\|u\|$.

  3. Triangle inequality: for all $u,v\in \mathscr{H}$, $\|u+v\|\leq \|u\|+\|v\|$.

Definition of inner product space

A complex vector space $\mathscr{H}$ equipped with an inner product is called an inner product space. Since every finite-dimensional inner product space is complete, we also call it a Hilbert space.

Cauchy-Schwarz inequality

For all $u,v\in \mathscr{H}$,

$$|\langle u,v\rangle|\leq \|u\|\,\|v\|$$

Parallelogram law

For all $u,v\in \mathscr{H}$,

$$\|u+v\|^2+\|u-v\|^2=2(\|u\|^2+\|v\|^2)$$

Polarization identity

For all $u,v\in \mathscr{H}$ (with the convention that the inner product is linear in the second argument),

$$\langle u,v\rangle=\frac{1}{4}\left(\|u+v\|^2-\|u-v\|^2-i\|u+iv\|^2+i\|u-iv\|^2\right)$$
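
A quick numerical sanity check of these three facts in $\mathbb{C}^3$ (illustrative only; the random vectors are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=3) + 1j * rng.normal(size=3)
v = rng.normal(size=3) + 1j * rng.normal(size=3)

ip = lambda a, b: np.vdot(a, b)          # conjugate-linear in the first slot
nrm = lambda a: np.sqrt(ip(a, a).real)

# Cauchy-Schwarz
print(abs(ip(u, v)) <= nrm(u) * nrm(v))

# parallelogram law
print(np.isclose(nrm(u + v)**2 + nrm(u - v)**2, 2 * (nrm(u)**2 + nrm(v)**2)))

# polarization identity (inner product linear in the second argument)
pol = 0.25 * (nrm(u + v)**2 - nrm(u - v)**2 - 1j*nrm(u + 1j*v)**2 + 1j*nrm(u - 1j*v)**2)
print(np.isclose(pol, ip(u, v)))
```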

Additional definitions

Let $u,v\in \mathscr{H}$.

$\|v\|$ is the length of $v$.

$v$ is a unit vector if $\|v\|=1$.

$u,v$ are orthogonal if $\langle u,v\rangle=0$.

Definition of orthonormal basis

A set of vectors $\{e_1,e_2,\dots,e_n\}$ in a Hilbert space $\mathscr{H}$ is called an orthonormal basis if

  1. $\langle e_i,e_j\rangle=\delta_{ij}$ for all $i,j\in \{1,2,\dots,n\}$, where
$$\delta_{ij}=\begin{cases} 1 & \text{if } i=j \\ 0 & \text{if } i\neq j \end{cases}$$
  2. $n=\dim \mathscr{H}$.

Subspaces and orthonormal bases

Definition of subspace

A subset $\mathscr{W}$ of a vector space $\mathscr{V}$ is a subspace if it contains $0$ and is closed under addition and scalar multiplication.

Definition of orthogonal complement

Let $E$ be a subset of a Hilbert space $\mathscr{H}$. The orthogonal complement of $E$ is the set of all vectors in $\mathscr{H}$ that are orthogonal to every vector in $E$:

$$E^\perp=\{v\in \mathscr{H}: \langle v,w\rangle=0 \text{ for all } w\in E\}$$

Definition of orthogonal projection

Let $E$ be an $m$-dimensional subspace of a Hilbert space $\mathscr{H}$, with orthonormal basis $\{e_1,\dots,e_m\}$. The orthogonal projection onto $E$ is the linear map $P_E: \mathscr{H}\to E$ given by

$$P_E(v)=\sum_{i=1}^m \langle e_i,v\rangle\, e_i$$
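
A small numpy sketch of this formula (the subspace, its orthonormal basis, and the ambient dimension are made up for illustration):

```python
import numpy as np

# orthonormal basis of a 2-dimensional subspace E of C^4
e1 = np.array([1, 1j, 0, 0]) / np.sqrt(2)
e2 = np.array([0, 0, 1, -1]) / np.sqrt(2)

def project(v):
    # P_E(v) = sum_i <e_i, v> e_i, with <.,.> conjugate-linear in the first slot
    return sum(np.vdot(e, v) * e for e in (e1, e2))

v = np.array([1, 2, 3, 4], dtype=complex)
p = project(v)
print(p)
# the residual v - P_E(v) is orthogonal to E
print(np.allclose([np.vdot(e1, v - p), np.vdot(e2, v - p)], 0))
```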

Definition of orthogonal direct sum

An inner product space $\mathscr{H}$ is the orthogonal direct sum of subspaces $E_1,E_2,\dots,E_n$, written

$$\mathscr{H}=E_1\oplus E_2\oplus \cdots \oplus E_n,$$

if every $v\in \mathscr{H}$ can be written uniquely as $v=v_1+v_2+\cdots+v_n$ with $v_i\in E_i$, and $E_i\perp E_j$ (in particular $E_i\cap E_j=\{0\}$) for all $i\neq j$.

Definition of meet and join of subspaces

Let $E$ and $F$ be two subspaces of a Hilbert space $\mathscr{H}$. The meet of $E$ and $F$ is the subspace of $\mathscr{H}$ given by

$$E\land F=E\cap F$$

The join of $E$ and $F$ is the subspace of $\mathscr{H}$ given by

$$E\lor F=\{u+v: u\in E,\ v\in F\}$$

Null space and range

Definition of null space

Let $A$ be a linear map from a vector space $\mathscr{V}$ to a vector space $\mathscr{W}$. The null space of $A$ is the set of all vectors in $\mathscr{V}$ that are mapped to the zero vector of $\mathscr{W}$:

$$\operatorname{Null}(A)=\{v\in \mathscr{V}: Av=0\}$$

Definition of range

Let $A$ be a linear map from a vector space $\mathscr{V}$ to a vector space $\mathscr{W}$. The range of $A$ is the set of all vectors in $\mathscr{W}$ that are images of vectors in $\mathscr{V}$:

$$\operatorname{Range}(A)=\{w\in \mathscr{W}: \exists\, v\in \mathscr{V},\ Av=w\}$$

Dual spaces and adjoints of linear maps

Definition of linear map

A linear map $T: \mathscr{V}\to \mathscr{W}$ is a function that satisfies the following axioms:

  1. Additivity: for all $u,v\in \mathscr{V}$, $T(u+v)=T(u)+T(v)$.
  2. Homogeneity: for all $u\in \mathscr{V}$ and $a\in \mathbb{F}$, $T(au)=aT(u)$.

Equivalently, $T(au+bv)=aT(u)+bT(v)$ for all $u,v\in \mathscr{V}$ and $a,b\in \mathbb{F}$.

Definition of linear functionals

A linear functional $f: \mathscr{V}\to \mathbb{F}$ is a linear map from $\mathscr{V}$ to $\mathbb{F}$.

Here, $\mathbb{F}$ is the field of complex numbers.

Definition of dual space

Let $\mathscr{V}$ be a vector space over a field $\mathbb{F}$. The dual space of $\mathscr{V}$ is the set of all linear functionals on $\mathscr{V}$:

$$\mathscr{V}^*=\{f:\mathscr{V}\to \mathbb{F} \mid f\text{ is linear}\}$$

If $\mathscr{H}$ is a finite-dimensional Hilbert space, then $\mathscr{H}^*$ is isomorphic to $\mathscr{H}$.

Note that $v\in \mathscr{H}\mapsto \langle v,\cdot\rangle\in \mathscr{H}^*$ is a conjugate linear isomorphism.

Definition of adjoint of a linear map

Let $T: \mathscr{V}\to \mathscr{W}$ be a linear map between inner product spaces. The adjoint of $T$ is the linear map $T^*: \mathscr{W}\to \mathscr{V}$ such that

$$\langle Tv,w\rangle=\langle v,T^*w\rangle$$

for all $v\in \mathscr{V}$ and $w\in \mathscr{W}$.
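
With the standard Hermitian inner product, the adjoint of a matrix is its conjugate transpose; a minimal check with arbitrary matrices and vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))   # T : C^2 -> C^3
v = rng.normal(size=2) + 1j * rng.normal(size=2)
w = rng.normal(size=3) + 1j * rng.normal(size=3)

T_adj = T.conj().T                                           # T* : C^3 -> C^2

# <Tv, w> = <v, T* w>   (np.vdot conjugates its first argument)
print(np.isclose(np.vdot(T @ v, w), np.vdot(v, T_adj @ w)))
```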

Definition of self-adjoint operators

A linear operator $T: \mathscr{V}\to \mathscr{V}$ is self-adjoint if $T^*=T$.

Definition of unitary operators

A linear map $T: \mathscr{V}\to \mathscr{V}$ is unitary if $T^*T=TT^*=I$.

Dirac’s bra-ket notation

Definition of bra and ket

Let $\mathscr{H}$ be a Hilbert space. Bra-ket notation is a notation for vectors in $\mathscr{H}$ and linear functionals on $\mathscr{H}$.

$$\langle v|w\rangle$$

is the inner product of $v$ and $w$. The bra $\langle v|: \mathscr{H}\to \mathbb{C}$ is the linear functional $w\mapsto \langle v,w\rangle$.

$$|v\rangle$$

is the ket: the vector $v$ itself (equivalently, the linear map $\mathbb{C}\to \mathscr{H}$, $c\mapsto cv$).

$$|u\rangle\langle v|$$

is the linear map from $\mathscr{H}$ to $\mathscr{H}$ given by $w\mapsto \langle v,w\rangle\, u$.
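
A minimal numpy illustration of the outer-product operator $|u\rangle\langle v|$ (the vectors are chosen arbitrarily):

```python
import numpy as np

u = np.array([1, 1j]) / np.sqrt(2)
v = np.array([1, -1]) / np.sqrt(2)
w = np.array([2, 3j])

ket_bra = np.outer(u, v.conj())        # |u><v| as a 2x2 matrix

# |u><v| applied to w gives <v, w> u
print(np.allclose(ket_bra @ w, np.vdot(v, w) * u))
```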

The spectral theorem for self-adjoint operators

Spectral theorem for self-adjoint operators

Let $\mathscr{H}$ be a finite-dimensional Hilbert space. Recall that a self-adjoint operator $T: \mathscr{H}\to \mathscr{H}$ is a linear operator that is equal to its adjoint.

Then all the eigenvalues of $T$ are real, and there exists an orthonormal basis of $\mathscr{H}$ consisting of eigenvectors of $T$.
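
A sketch using numpy's eigendecomposition for Hermitian matrices (the matrix below is arbitrary):

```python
import numpy as np

# build an arbitrary self-adjoint (Hermitian) matrix T = A + A*
A = np.array([[1 + 1j, 2j], [0, 3 - 1j]])
T = A + A.conj().T

# eigh is specialized to Hermitian matrices: real eigenvalues, orthonormal eigenvectors
eigvals, U = np.linalg.eigh(T)
print(eigvals)                                            # real numbers
print(np.allclose(U.conj().T @ U, np.eye(2)))             # columns are orthonormal
print(np.allclose(U @ np.diag(eigvals) @ U.conj().T, T))  # T = U diag(lambda) U*
```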

Definition of spectrum

The spectrum of a linear operator $T: \mathscr{H}\to \mathscr{H}$ on a finite-dimensional Hilbert space is the set of all distinct eigenvalues of $T$:

$$\operatorname{sp}(T)=\{\lambda: \lambda\text{ is an eigenvalue of } T\}\subset \mathbb{C}$$

Definition of Eigenspace

If $\lambda$ is an eigenvalue of $T$, the eigenspace of $T$ corresponding to $\lambda$ is the set of all vectors $v$ satisfying $Tv=\lambda v$ (the eigenvectors for $\lambda$, together with $0$):

$$E_\lambda(T)=\{v\in \mathscr{H}: Tv=\lambda v\}$$

We denote by $P_\lambda(T):\mathscr{H}\to E_\lambda(T)$ the orthogonal projection onto $E_\lambda(T)$.

Definition of Operator norm

The operator norm of a linear operator $T: \mathscr{H}\to \mathscr{H}$ is defined by

$$\|T\|=\max_{\|v\|=1} \|Tv\|$$

For a self-adjoint $T$ this equals the largest absolute value of an eigenvalue of $T$; in general it is the largest singular value.

We say $T$ is bounded if $\|T\|<\infty$ (on a finite-dimensional space every linear operator is bounded).

We denote by $B(\mathscr{H})$ the set of all bounded linear operators on $\mathscr{H}$.
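
numpy exposes this norm as the spectral norm (`ord=2`); the nilpotent example below also shows why "largest eigenvalue" would be the wrong description in general:

```python
import numpy as np

T = np.array([[0, 2], [0, 0]], dtype=complex)   # nilpotent: all eigenvalues are 0

print(np.linalg.norm(T, ord=2))                 # max_{||v||=1} ||Tv|| = 2.0
print(np.linalg.svd(T, compute_uv=False)[0])    # largest singular value, also 2.0
print(np.linalg.eigvals(T))                     # [0, 0] -- not the operator norm
```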

Partial trace

Definition of trace

Let $T$ be a linear operator on $\mathscr{H}$, let $(e_1,e_2,\dots,e_n)$ be a basis of $\mathscr{H}$, and let $(\epsilon_1,\epsilon_2,\dots,\epsilon_n)$ be the corresponding dual basis of $\mathscr{H}^*$ (so $\epsilon_i(e_j)=\delta_{ij}$). Then the trace of $T$ is defined by

$$\operatorname{Tr}(T)=\sum_{i=1}^n \epsilon_i(T(e_i))=\sum_{i=1}^n \langle e_i,T(e_i)\rangle$$

where the second expression uses an orthonormal basis. This is equivalent to the sum of the diagonal entries of the matrix of $T$.

Note: I changed the order of the definitions to group similar concepts together. Before reading the rest of this section on the partial trace, look at the tensor product section first, and return here after reading about tensor products of linear operators.

Definition of partial trace

Let $T$ be a linear operator on $\mathscr{H}=\mathscr{A}\otimes \mathscr{B}$, where $\mathscr{A}$ and $\mathscr{B}$ are finite-dimensional Hilbert spaces.

By the definition of the tensor product of linear operators, an operator $T$ on $\mathscr{H}=\mathscr{A}\otimes \mathscr{B}$ can be written as

$$T=\sum_{i=1}^n a_i\, A_i\otimes B_i$$

where $A_i$ is a linear operator on $\mathscr{A}$ and $B_i$ is a linear operator on $\mathscr{B}$.

The $\mathscr{B}$-partial trace $\operatorname{Tr}_{\mathscr{B}}:\mathcal{L}(\mathscr{A}\otimes \mathscr{B})\to \mathcal{L}(\mathscr{A})$ assigns to $T$ the linear operator on $\mathscr{A}$ defined by

$$\operatorname{Tr}_{\mathscr{B}}(T)=\sum_{i=1}^n a_i \operatorname{Tr}(B_i)\, A_i$$

Alternatively, for $v\in \mathscr{B}$ we can define the map $L_v: \mathscr{A}\to \mathscr{A}\otimes \mathscr{B}$ by

$$L_v(u)=u\otimes v$$

Note that $\langle u,L_v^*(u'\otimes v')\rangle=\langle L_v(u),u'\otimes v'\rangle=\langle u\otimes v,u'\otimes v'\rangle=\langle u,u'\rangle \langle v,v'\rangle$.

Therefore, $L_v^*\Big(\sum_{j} u_j\otimes v_j\Big)=\sum_{j} \langle v,v_j\rangle\, u_j$.

Then the partial trace of $T$ can also be written as follows. Let $\{v_j\}$ be an orthonormal basis of $\mathscr{B}$; then

$$\operatorname{Tr}_{\mathscr{B}}(T)=\sum_{j} L^*_{v_j}\, T\, L_{v_j}$$
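
A small numpy sketch of the $\mathscr{B}$-partial trace via the usual reshape trick (the dimensions are made up; tracing axes 1 and 3 implements $\sum_j L_{v_j}^* T L_{v_j}$ for the standard basis of $\mathscr{B}$):

```python
import numpy as np

dA, dB = 2, 3
rng = np.random.default_rng(2)
T = rng.normal(size=(dA*dB, dA*dB)) + 1j * rng.normal(size=(dA*dB, dA*dB))

def partial_trace_B(T, dA, dB):
    # view T as a 4-index tensor T[a, b, a', b'] and trace out the B indices
    T4 = T.reshape(dA, dB, dA, dB)
    return np.trace(T4, axis1=1, axis2=3)

# consistency check: Tr(Tr_B(T)) = Tr(T)
print(np.isclose(np.trace(partial_trace_B(T, dA, dB)), np.trace(T)))

# and Tr_B(A (x) B) = Tr(B) * A for a decomposable operator
A = rng.normal(size=(dA, dA))
B = rng.normal(size=(dB, dB))
print(np.allclose(partial_trace_B(np.kron(A, B), dA, dB), np.trace(B) * A))
```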

Definition of partial trace with respect to a state

Let $T$ be a linear operator on $\mathscr{H}=\mathscr{A}\otimes \mathscr{B}$, where $\mathscr{A}$ and $\mathscr{B}$ are finite-dimensional Hilbert spaces.

Let $\rho$ be a state (density operator) on $\mathscr{B}$ with orthonormal eigenbasis $\{v_j\}$ and eigenvalues $\{\lambda_j\}$.

The partial trace of $T$ with respect to $\rho$ is the linear operator on $\mathscr{A}$ defined by

$$\operatorname{Tr}_{\rho}(T)=\sum_{j} \lambda_j\, L^*_{v_j}\, T\, L_{v_j}$$

Space of Bounded Linear Operators

Recall that the trace of a matrix is the sum of its diagonal elements.

Hilbert-Schmidt inner product

Let $T,S\in B(\mathscr{H})$. The Hilbert-Schmidt inner product of $T$ and $S$ is defined by

$$\langle T,S\rangle=\operatorname{Tr}(T^*S)$$

Note that here $T^*$ is the adjoint (complex conjugate transpose) of $T$.

If we introduce an orthonormal basis $\{e_i\}$ of $\mathscr{H}$, then we can identify the space of bounded linear operators with the space $M_n(\mathbb{C})$ of $n\times n$ complex matrices.

For $T=(a_{ij})$ and $S=(b_{ij})$, we have

$$\operatorname{Tr}(T^*S)=\sum_{i=1}^n \sum_{j=1}^n \overline{a_{ij}}\,b_{ij}$$

This inner product is the standard Hermitian inner product on $\mathbb{C}^{n\times n}$.

Definition of Hilbert-Schmidt norm (also called Frobenius norm)

The Hilbert-Schmidt norm of a linear operator $T: \mathscr{H}\to \mathscr{H}$ with matrix $(a_{ij})$ is defined by

$$\|T\|_{\mathrm{HS}}=\sqrt{\operatorname{Tr}(T^*T)}=\sqrt{\sum_{i=1}^n \sum_{j=1}^n |a_{ij}|^2}$$

The trace of an operator does not depend on the choice of basis.
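
A short numpy check of the Hilbert-Schmidt inner product and norm (the matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
T = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
S = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

hs_inner = np.trace(T.conj().T @ S)                   # <T, S> = Tr(T* S)
print(np.isclose(hs_inner, np.sum(T.conj() * S)))     # = sum_ij conj(t_ij) s_ij

hs_norm = np.sqrt(np.trace(T.conj().T @ T).real)
print(np.isclose(hs_norm, np.linalg.norm(T, 'fro')))  # Frobenius norm
```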

Tensor products of finite-dimensional Hilbert spaces

Let $X=X_1\times X_2\times \cdots \times X_n$ be a Cartesian product of $n$ finite sets.

Let $x=(x_1,x_2,\dots,x_n)$ be an element of $X$, so $x_j\in X_j$ for $j=1,2,\dots,n$.

Fix $j\in\{1,2,\dots,n\}$ and $a\in X_j$.

Let's denote the space of all functions from $X$ to $\mathbb{C}$ by $\mathscr{H}$ and the space of all functions from $X_j$ to $\mathbb{C}$ by $\mathscr{H}_j$. Define

$$\epsilon_{a}^{(j)}(x_j)=\begin{cases} 1 & \text{if } x_j=a \\ 0 & \text{if } x_j\neq a \end{cases}$$

Then $\{\epsilon_{a}^{(j)}\}_{a\in X_j}$ is a basis of $\mathscr{H}_j$.

Any function $f:X_j\to \mathbb{C}$ can be written as a linear combination of the basis vectors:

$$f(x_j)=\sum_{a\in X_j} f(a)\,\epsilon_{a}^{(j)}(x_j)$$

Proof

Note that a function is determined by its values at every element of its domain.

For each $a\in X_j$, $\epsilon_{a}^{(j)}(x_j)=1$ if $x_j=a$ and $0$ otherwise, so only the term with $a=x_j$ survives:

$$\sum_{a\in X_j} f(a)\,\epsilon_{a}^{(j)}(x_j)=f(x_j)$$

Now, let $a=(a_1,a_2,\dots,a_n)$ and $x=(x_1,x_2,\dots,x_n)$ be elements of $X$. Note that $a_j,x_j\in X_j$ for $j=1,2,\dots,n$.

Define

$$\epsilon_a(x)=\prod_{j=1}^n \epsilon_{a_j}^{(j)}(x_j)=\begin{cases} 1 & \text{if } a_j=x_j \text{ for all } j=1,2,\dots,n \\ 0 & \text{otherwise} \end{cases}$$

Then $\{\epsilon_a\}_{a\in X}$ is a basis of $\mathscr{H}$.

Any function $f:X\to \mathbb{C}$ can be written as a linear combination of the basis vectors:

$$f(x)=\sum_{a\in X} f(a)\,\epsilon_a(x)$$

Proof

This follows the same reasoning as the previous proof. This time, $\epsilon_a(x)$ equals $1$ only when $x_j=a_j$ for all $j=1,2,\dots,n$, so only the term with $a=x$ survives:

$$\sum_{a\in X} f(a)\,\epsilon_a(x)=f(x)$$

Definition of tensor product of basis elements

The tensor product of basis elements is defined by

$$\epsilon_a\coloneqq\epsilon_{a_1}^{(1)}\otimes \epsilon_{a_2}^{(2)}\otimes \cdots \otimes \epsilon_{a_n}^{(n)}$$

This is a basis of $\mathscr{H}$, where $\mathscr{H}$ is the set of all functions from $X=X_1\times X_2\times \cdots \times X_n$ to $\mathbb{C}$.

Definition of tensor product of two finite-dimensional Hilbert spaces

The tensor product of two finite-dimensional Hilbert spaces is defined as follows.

Let $\mathscr{H}_1$ and $\mathscr{H}_2$ be two finite-dimensional Hilbert spaces, and let $u_1\in \mathscr{H}_1$ and $v_1\in \mathscr{H}_2$. Then

$$u_1\otimes v_1$$

is the bi-antilinear map from $\mathscr{H}_1\times \mathscr{H}_2$ (the Cartesian product of $\mathscr{H}_1$ and $\mathscr{H}_2$: pairs whose first element is in $\mathscr{H}_1$ and whose second element is in $\mathscr{H}_2$) to $\mathbb{F}$ (in this case, $\mathbb{C}$) defined, for all $u\in \mathscr{H}_1$ and $v\in \mathscr{H}_2$, by

$$(u_1\otimes v_1)(u, v)=\langle u,u_1\rangle \langle v,v_1\rangle$$

We call such forms decomposable. The tensor product of two finite-dimensional Hilbert spaces, denoted by $\mathscr{H}_1\otimes \mathscr{H}_2$, is the set of all linear combinations of decomposable forms:

$$\Big(\sum_{i=1}^n a_i\, u_i\otimes v_i\Big)(u, v) \coloneqq \sum_{i=1}^n a_i\,(u_i\otimes v_i)(u,v)=\sum_{i=1}^n a_i\, \langle u,u_i\rangle \langle v,v_i\rangle$$

Note that $a_i\in \mathbb{C}$ for complex vector spaces.

This is a linear space of dimension $\dim \mathscr{H}_1\cdot\dim \mathscr{H}_2$.

We define the inner product of two decomposable elements $u_1\otimes v_1,\ u_2\otimes v_2\in \mathscr{H}_1\otimes \mathscr{H}_2$ by

$$\langle u_1\otimes v_1, u_2\otimes v_2\rangle\coloneqq\langle u_1,u_2\rangle \langle v_1,v_2\rangle=(u_2\otimes v_2)(u_1,v_1)$$
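
In coordinates, the tensor product of vectors can be realized with `np.kron`; a quick check of the inner product rule (the vectors are arbitrary):

```python
import numpy as np

u1 = np.array([1, 1j]);  v1 = np.array([2, 0, 1j])
u2 = np.array([0, 3]);   v2 = np.array([1j, 1, -1])

t1 = np.kron(u1, v1)     # coordinates of u1 (x) v1 in C^(2*3)
t2 = np.kron(u2, v2)

# <u1 (x) v1, u2 (x) v2> = <u1, u2> <v1, v2>
print(np.isclose(np.vdot(t1, t2), np.vdot(u1, u2) * np.vdot(v1, v2)))
```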

Tensor products of linear operators

Let $T_1$ be a linear operator on $\mathscr{H}_1$ and $T_2$ be a linear operator on $\mathscr{H}_2$, where $\mathscr{H}_1$ and $\mathscr{H}_2$ are finite-dimensional Hilbert spaces. The tensor product of $T_1$ and $T_2$, denoted by $T_1\otimes T_2$, is the linear operator on $\mathscr{H}_1\otimes \mathscr{H}_2$ defined on decomposable elements by

$$(T_1\otimes T_2)(v_1\otimes v_2)=T_1(v_1)\otimes T_2(v_2)$$

for all $v_1\in \mathscr{H}_1$ and $v_2\in \mathscr{H}_2$.

Extending by linearity, $T_1\otimes T_2$ acts on a general element as

$$(T_1\otimes T_2)\Big(\sum_{i=1}^n a_i\, u_i\otimes v_i\Big)=\sum_{i=1}^n a_i\, T_1(u_i)\otimes T_2(v_i)$$

for all $u_i\in \mathscr{H}_1$ and $v_i\in \mathscr{H}_2$.

This tensor product of linear operators is well defined, i.e. the result does not depend on how the element is written as a linear combination of decomposables.

Proof

Suppose $\sum_{i} a_i\, u_i\otimes v_i=\sum_{j} b_j\, u'_j\otimes v'_j$ as elements of $\mathscr{H}_1\otimes \mathscr{H}_2$. For any $u\in \mathscr{H}_1$ and $v\in \mathscr{H}_2$,

$$\sum_{i} a_i\,\big(T_1(u_i)\otimes T_2(v_i)\big)(u,v)=\sum_{i} a_i\,\langle u,T_1u_i\rangle \langle v,T_2v_i\rangle=\sum_{i} a_i\,\langle T_1^*u,u_i\rangle \langle T_2^*v,v_i\rangle=\Big(\sum_{i} a_i\, u_i\otimes v_i\Big)(T_1^*u,\,T_2^*v)$$

which depends only on the element $\sum_{i} a_i\, u_i\otimes v_i$ and not on its particular representation. Applying the same computation to $\sum_{j} b_j\, u'_j\otimes v'_j$ gives the same value, so the two extensions agree.
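
In coordinates, the tensor product of operators is the Kronecker product of their matrices; a small numpy sketch (the operators are arbitrary):

```python
import numpy as np

# arbitrary operators on H1 = C^2 and H2 = C^2
T1 = np.array([[0, 1], [1, 0]], dtype=complex)
T2 = np.array([[1, 0], [0, -1]], dtype=complex)

T = np.kron(T1, T2)                                  # matrix of T1 (x) T2 on C^4

v1 = np.array([1, 2j])
v2 = np.array([3, -1])

# (T1 (x) T2)(v1 (x) v2) = T1(v1) (x) T2(v2)
print(np.allclose(T @ np.kron(v1, v2), np.kron(T1 @ v1, T2 @ v2)))
```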


Extended Dirac notation

Suppose $\mathscr{H}=\mathbb{C}^n$ with the standard basis $\{e_i\}$.

We write $e_j=|j\rangle$, and

$$|j_1\dots j_n\rangle=e_{j_1}\otimes e_{j_2}\otimes \cdots \otimes e_{j_n}=|j_1\rangle\otimes |j_2\rangle\otimes \cdots \otimes |j_n\rangle$$

(the latter is an element of $\mathscr{H}^{\otimes n}$).

The Hadamard Transform

Let $\mathscr{H}=\mathbb{C}^2$ with the standard basis $\{e_1,e_2\}=\{|0\rangle,|1\rangle\}$.

The linear operator $H_2$ is defined by

$$H_2=\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}=\frac{1}{\sqrt{2}}\big(|0\rangle\langle 0|+|1\rangle\langle 0|+|0\rangle\langle 1|-|1\rangle\langle 1|\big)$$

The Hadamard transform is the linear operator $H_2$ on $\mathbb{C}^2$.
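
A quick numpy check that $H_2$ is unitary and maps the standard basis to uniform superpositions:

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

print(np.allclose(H2.conj().T @ H2, np.eye(2)))   # unitary (and self-adjoint)
print(H2 @ np.array([1, 0]))                      # |0> -> (|0> + |1>)/sqrt(2)
print(H2 @ np.array([0, 1]))                      # |1> -> (|0> - |1>)/sqrt(2)

# the Hadamard transform on n qubits is the n-fold tensor (Kronecker) power
H4 = np.kron(H2, H2)
print(H4.shape)                                   # (4, 4)
```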

Singular value and Schmidt decomposition

Definition of SVD (Singular Value Decomposition)

Let $T:\mathscr{U}\to \mathscr{V}$ be a linear operator between two finite-dimensional Hilbert spaces $\mathscr{U}$ and $\mathscr{V}$.

We denote the inner products on $\mathscr{U}$ and $\mathscr{V}$ both by $\langle \cdot, \cdot \rangle$.

Then there exists a decomposition of $T$,

$$T=d_1 T_1+d_2 T_2+\cdots +d_n T_n$$

with $d_1>d_2>\cdots >d_n>0$ and $T_i:\mathscr{U}\to \mathscr{V}$ such that:

  1. $T_iT_j^*=0$ and $T_i^*T_j=0$ for $i\neq j$;
  2. $T_i|_{\mathscr{R}(T_i^*)}:\mathscr{R}(T_i^*)\to \mathscr{R}(T_i)$ is an isomorphism with inverse $T_i^*$, where $\mathscr{R}(\cdot)$ denotes the range of the operator.

The $d_i$ are called the singular values of $T$.
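
numpy's SVD returns $T=U\,\mathrm{diag}(d)\,V^*$; summing the rank-one pieces (and grouping equal singular values) recovers the decomposition above. A sketch with an arbitrary matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
T = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))

U, d, Vh = np.linalg.svd(T, full_matrices=False)   # T = U diag(d) Vh
print(d)                                           # singular values, in decreasing order

# reconstruct T as a sum of rank-one pieces d_i * u_i v_i^*
T_rebuilt = sum(d[i] * np.outer(U[:, i], Vh[i, :]) for i in range(len(d)))
print(np.allclose(T, T_rebuilt))
```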

Gram-Schmidt Decomposition 

Basic Group Theory

Finite groups

Definition of group

A group is a set $G$ with a binary operation $\cdot$ that satisfies the following axioms:

  1. Closure: for all $a,b\in G$, $a\cdot b\in G$.
  2. Associativity: for all $a,b,c\in G$, $(a\cdot b)\cdot c=a\cdot (b\cdot c)$.
  3. Identity: there exists an element $e\in G$ such that for all $a\in G$, $a\cdot e=e\cdot a=a$.
  4. Inverses: for all $a\in G$, there exists an element $b\in G$ such that $a\cdot b=b\cdot a=e$.

Symmetric group $S_n$

The symmetric group $S_n$ is the group of all permutations of $n$ elements:

$$S_n=\{f: \{1,2,\dots,n\}\to \{1,2,\dots,n\} \mid f \text{ is a bijection}\}$$

Unitary group $U(n)$

The unitary group $U(n)$ is the group of all $n\times n$ unitary matrices,

that is, matrices satisfying $A^*A=AA^*=I$, where $A^*=(\overline{A})^\top$ is the complex conjugate transpose of $A$.

Cyclic group $\mathbb{Z}_n$

The cyclic group $\mathbb{Z}_n$ is the group of integers modulo $n$ (under addition):

$$\mathbb{Z}_n=\{0,1,2,\dots,n-1\}$$

Definition of group homomorphism

A group homomorphism is a function $\varPhi:G\to H$ between two groups $G$ and $H$ that satisfies the following axiom:

$$\varPhi(a\cdot b)=\varPhi(a)\cdot \varPhi(b)$$

A bijective group homomorphism is called a group isomorphism.

Homomorphism sends identity to identity, inverses to inverses

Let $\varPhi:G\to H$ be a group homomorphism, and let $e_G$ and $e_H$ be the identity elements of $G$ and $H$ respectively. Then

  1. $\varPhi(e_G)=e_H$;
  2. $\varPhi(a^{-1})=\varPhi(a)^{-1}$ for all $a\in G$.

More on the symmetric group

General linear group over $\mathbb{C}$

The general linear group over $\mathbb{C}$ is the group of all $n\times n$ invertible complex matrices:

$$GL(n,\mathbb{C})=\{A\in M_n(\mathbb{C}): A \text{ is invertible}\}$$

The map $T: S_n\to GL(n,\mathbb{C})$ that sends a permutation $\sigma$ to its permutation matrix $T(\sigma)$, defined by $T(\sigma)e_i=e_{\sigma(i)}$, is a group homomorphism.

Definition of sign of a permutation

Let $T:S_n\to GL(n,\mathbb{C})$ be this group homomorphism. The sign of a permutation $\sigma\in S_n$ is defined by

$$\operatorname{sgn}(\sigma)=\det(T(\sigma))$$

We say $\sigma$ is even if $\operatorname{sgn}(\sigma)=1$ and odd if $\operatorname{sgn}(\sigma)=-1$.
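
A small sketch computing the sign through the permutation matrix, with the convention $T(\sigma)e_i=e_{\sigma(i)}$ assumed above (the permutations are arbitrary examples, written 0-indexed):

```python
import numpy as np

def perm_matrix(sigma):
    # T(sigma) e_i = e_{sigma(i)}, with sigma given as a tuple: sigma(i) = sigma[i]
    n = len(sigma)
    P = np.zeros((n, n))
    for i, j in enumerate(sigma):
        P[j, i] = 1
    return P

sigma = (1, 0, 2)                                 # transposition swapping 0 and 1
tau   = (1, 2, 0)                                 # a 3-cycle

print(round(np.linalg.det(perm_matrix(sigma))))   # -1: odd
print(round(np.linalg.det(perm_matrix(tau))))     # +1: even

# homomorphism property: T(sigma tau) = T(sigma) T(tau)
comp = tuple(sigma[tau[i]] for i in range(3))     # (sigma . tau)(i) = sigma(tau(i))
print(np.allclose(perm_matrix(comp), perm_matrix(sigma) @ perm_matrix(tau)))
```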

Fourier Transform in $\mathbb{Z}_N$

The vector space $L^2(\mathbb{Z}_N)$ is the set of all complex-valued functions on $\mathbb{Z}_N$ with the inner product

$$\langle f,g\rangle=\sum_{k=0}^{N-1} \overline{f(k)}\,g(k)$$

An orthonormal basis of $L^2(\mathbb{Z}_N)$ is given by $\{\delta_y\}_{y\in \mathbb{Z}_N}$, where

$$\delta_y(k)=\begin{cases} 1 & \text{if } k=y \\ 0 & \text{otherwise} \end{cases}$$

In Dirac notation, we have

$$\delta_y=|y\rangle=|y+N\rangle$$

Definition of Fourier transform

Define $\varphi_k(x)=\frac{1}{\sqrt{N}}e^{2\pi i kx/N}$ for $k\in \mathbb{Z}_N$. As a function $\varphi_k:\mathbb{Z}\to \mathbb{C}$ it is $N$-periodic, so it defines a function on $\mathbb{Z}_N$.

The Fourier transform is the linear operator $F$ on $L^2(\mathbb{Z}_N)$ such that $(Ff)(k)=\langle \varphi_k,f\rangle$; explicitly,

$$F=\frac{1}{\sqrt{N}}\sum_{j=0}^{N-1} \sum_{k=0}^{N-1} e^{-2\pi i kj/N}\,|k\rangle\langle j|$$
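
A numpy sketch of $F$ as an $N\times N$ matrix; with the $1/\sqrt{N}$ normalization used here it agrees with `np.fft.fft` up to that factor:

```python
import numpy as np

N = 8
idx = np.arange(N)
# F[k, j] = e^{-2 pi i k j / N} / sqrt(N)
F = np.exp(-2j * np.pi * np.outer(idx, idx) / N) / np.sqrt(N)

print(np.allclose(F.conj().T @ F, np.eye(N)))          # F is unitary

f = np.random.default_rng(5).normal(size=N)
print(np.allclose(F @ f, np.fft.fft(f) / np.sqrt(N)))  # matches numpy's FFT convention
```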

Symmetric and anti-symmetric tensors

Let $\mathscr{H}^{\otimes n}$ be the $n$-fold tensor product of a Hilbert space $\mathscr{H}$.

We define an action $\Pi$ of $S_n$ on $\mathscr{H}^{\otimes n}$ as follows.

Let $\eta\in S_n$ be a permutation. On decomposable tensors,

$$\Pi(\eta)\,(v_1\otimes v_2\otimes \cdots \otimes v_n)=v_{\eta^{-1}(1)}\otimes v_{\eta^{-1}(2)}\otimes \cdots \otimes v_{\eta^{-1}(n)}$$

and we extend to $\mathscr{H}^{\otimes n}$ by linearity.

This gives the property that for all $\zeta,\eta\in S_n$, $\Pi(\zeta\eta)=\Pi(\zeta)\Pi(\eta)$.

Definition of symmetric and anti-symmetric tensors

Let $\mathscr{H}$ be a finite-dimensional Hilbert space.

An element $\alpha\in \mathscr{H}^{\otimes n}$ is called symmetric if it is invariant under the action of $S_n$:

$$\Pi(\eta)\alpha=\alpha \text{ for all } \eta\in S_n.$$

It is called anti-symmetric if

$$\Pi(\eta)\alpha=\operatorname{sgn}(\eta)\,\alpha \text{ for all } \eta\in S_n.$$
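
For $n=2$ the action of the nontrivial permutation is the swap operator; a small numpy sketch checking a symmetric and an anti-symmetric tensor in $\mathbb{C}^2\otimes\mathbb{C}^2$ (the vectors are arbitrary):

```python
import numpy as np

d = 2
# swap operator S on C^d (x) C^d:  S (u (x) v) = v (x) u
S = np.zeros((d*d, d*d))
for i in range(d):
    for j in range(d):
        S[j*d + i, i*d + j] = 1

u = np.array([1, 2j])
v = np.array([3, -1])

sym  = np.kron(u, v) + np.kron(v, u)       # symmetric:      S sym  = sym
anti = np.kron(u, v) - np.kron(v, u)       # anti-symmetric: S anti = -anti
print(np.allclose(S @ sym, sym))
print(np.allclose(S @ anti, -anti))
```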