BACHELOR'S THESIS

ALGEBRAS OF OBSERVABLES AND QUANTUM COMPUTING

Vojtěch Teska

July 4, 2016

Title: Algebras of Observables and Quantum Computing
Author: Vojtěch Teska
Field of study: Mathematical Engineering
Specialization: Mathematical Physics
Type of thesis: Bachelor's thesis
Supervisor: Prof. Ing. Jiří Tolar, DrSc., Department of Physics, Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague
Consultants: Ing. Petr Novotný, PhD, Department of Physics, Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague; Mgr. Miroslav Korbelář, PhD, Department of Mathematics, Faculty of Electrical Engineering, Czech Technical University in Prague

Abstract: A basic overview of the notions and axioms of quantum theory (states, observables, transition probabilities) is presented; states and observables on finite-dimensional Hilbert spaces are then examined in more detail. The associative C*-algebra M_n(C) with the Hilbert-Schmidt inner product is closely inspected. Fine gradings of this algebra obtained from maximal groups of its commuting *-automorphisms (MAD-groups) are studied. A correspondence between MAD-groups and unitary Ad-groups is established, a classification of unitary Ad-groups is given, and the corresponding fine gradings are presented in simple cases. Lastly, quantum complementarity is illustrated using elements of Pauli's group.

Key words: quantum mechanics, C*-algebra, fine grading, MAD-group, Pauli's group, quantum complementarity

Declaration

I declare that I have written this bachelor's thesis independently and have used only the sources listed in the attached bibliography. I have no serious reason against the use of this school work in the sense of §60 of Act No. 121/2000 Coll., on Copyright, on Rights Related to Copyright and on Amendments to Certain Acts (the Copyright Act).

In Prague, date: Signature:

Contents

Acknowledgments
Introduction
Notation
1 Algebras of observables in quantum mechanics
  1.1 Fundamental notions of quantum theory
  1.2 Normed algebras
  1.3 Quantum mechanics in finite dimension
2 Associative algebras M_n(C) of complex n × n matrices
  2.1 Involution, inner product and norms
  2.2 Irreducibility, diagonalizability and commutativity
  2.3 Schur's decomposition and diagonalizability of normal matrices
  2.4 Automorphisms of M_n(C)
3 Classification of fine gradings of M_n(C)
  3.1 Gradings and automorphisms of *-algebras
  3.2 Classification of MAD-groups of M_n(C)
  3.3 Unitary Ad-groups and corresponding fine gradings of M_n(C)
  3.4 Examples of gradings induced by *-automorphisms of M_n(C)
4 Orthogonal decompositions and quantum complementarity
  4.1 Quantum bits
  4.2 Complementarity structures
  4.3 Generalized quantum bits and physical interpretation
Conclusion

Acknowledgments

I would like to thank Prof. Jiří Tolar for his advice, consultations and corrections to the text. I also thank my parents for their support during my studies.

Introduction

The aim of this thesis is to describe the fundamental notions and the mathematical structures needed to build quantum theory, with a focus on operators in finite dimension. It shall be shown that these are represented by the algebra M_n(C) of complex matrices with standard matrix multiplication. The operation of Hermitian adjoining in M_n(C) will be defined, and it will be shown to satisfy the definition of involution given in the first chapter. We shall define the Hilbert-Schmidt inner product on M_n(C) as an analogue of the standard inner product on C^n and show that the norm defined by this inner product is not compatible with the definition of a C*-algebra, so that it is necessary to equip M_n(C) with the operator norm in order to satisfy all the conditions of this definition.

After doing so, we proceed to a closer inspection of M_n(C), examining commutativity on its irreducible subsets. We prove that a matrix commuting with all elements of an irreducible set must be a scalar multiple of the identity matrix. Furthermore, we show that a set of matrices from M_n(C) is simultaneously diagonalizable if and only if it is a set of mutually commuting diagonalizable matrices. We conclude the second chapter by proving that all normal matrices are diagonalizable and that all automorphisms of M_n(C) are inner.

In the third chapter, a grading of an arbitrary *-algebra is defined and the relationship between maximal groups of commuting *-automorphisms (MAD-groups) and gradings is examined. In the case of M_n(C), we show that there is a correspondence between unitary Ad-groups and MAD-groups. A classification theorem decomposing unitary Ad-groups into tensor products of Pauli's groups 𝒫_k and diagonal unitary groups U_D(k) is proven, and the respective gradings of M_n(C) are studied in simple cases. The thesis is concluded with a brief overview of quantum computing and an illustration of quantum complementarity using elements of Pauli's group.

Notation

n̂ : the set {1, 2, 3, ..., n}
(·,·) : inner product
e : multiplicative identity in an algebra A
θ : zero vector
* : involution
σ(A) : spectrum of a linear operator A
^B A : matrix of a linear operator A in the basis B
^H : Hermitian adjoining
I : identity matrix
⟨·,·⟩ : Hilbert-Schmidt inner product
‖·‖_E : Euclidean norm in C^n
‖·‖_op : operator norm
C^{p,q} : vector space of complex p × q matrices
O : zero matrix
Eig(A, λ) : eigenspace of a linear operator A corresponding to the eigenvalue λ
W ⊂⊂ V : W is a subspace of the vector space V
Ad_A : inner automorphism of M_n(C) generated by an invertible matrix A
⊕ : direct sum
U(n) : the group of n × n unitary matrices
𝒫_k : the k × k Pauli's group
⊗ : tensor product
G_Ad : the set of unitary matrices generating the MAD-group G
Ad(G′) : the set of *-automorphisms generated by the unitary Ad-group G′
U_D(n) : the group of diagonal unitary n × n matrices
{·,·}^(ω_n) : ω_n-commutant
{·,·}′ : commutant
• : non-zero element in a matrix

Chapter 1

Algebras of observables in quantum mechanics

1.1 Fundamental notions of quantum theory

All physical systems are described in terms of states and observables. In classical mechanics, both are represented by functions on a given system's phase space, which, in the case of Hamiltonian mechanics, is a symplectic manifold. The fundamental role in the description of quantum systems is played by Hilbert spaces and their one-dimensional subspaces [1]:

Definition 1 (Hilbert space). A Hilbert space is a vector space with an inner product (·,·) which is complete with respect to the metric generated by its inner product.

Definition 2 (Ray). Let H be a complex Hilbert space. A ray in H is any one-dimensional subspace of H.

To avoid pathological properties in the mathematical description, it is assumed that these Hilbert spaces are complex and contain a countable dense subset, and are therefore by definition separable [1]. Thus the first fundamental axiom of quantum mechanics is postulated:

Axiom 1. To each quantum system S belongs a separable complex Hilbert space H, which shall be called the state space belonging to S.

A proper definition of state in quantum mechanics is more complicated than in Hamiltonian mechanics because of the probabilistic character of microscopic processes. These processes are greatly affected by measurement, and experiments must therefore be executed in large quantities, which makes it impossible to distinguish which states were originally manipulated in the experiment. Assuming that we can perform just one experiment and determine the system's state, we can proclaim it a pure state of the examined system and postulate the second fundamental axiom of quantum mechanics [1]:

Axiom 2. Each pure state of a given system S is represented by some ray Φ in its state space H, where the probability of transition between two states Φ and Ψ shall be denoted P(Φ, Ψ) and is given by

\[ P(\Phi, \Psi) = \frac{|(\phi, \psi)|^2}{(\|\phi\|\,\|\psi\|)^2} \tag{1.1} \]

where ϕ ∈ Φ and ψ ∈ Ψ.

Since each ray Ψ in H is generated by a unit vector ψ, Ψ = {αψ : α ∈ C, ‖ψ‖ = 1}, it is easy to see that P(Φ, Ψ) does not depend on the choice of ϕ and ψ, so both vectors may be chosen with unit norm. This simplifies equation (1.1) to P(ϕ, ψ) = |(ϕ, ψ)|². In addition, it implies that we can represent pure states by unit vectors generating their corresponding rays.

To complete the mathematical description of a quantum system, a definition of observables is needed. Unlike in classical physics, observables in quantum mechanics are described by different mathematical objects than states, namely linear operators on H with some additional properties [1]. It is possible to show that for each linear operator T̂ on H whose domain D_T̂ is dense in H, and for each y ∈ H, there exists at most one y* ∈ H such that (y, T̂x) = (y*, x) for each x ∈ D_T̂. If such y* exists, the Hermitian adjoint T̂* of T̂ can be defined in the following way:

D_T̂* = {y ∈ H : (∃ y* ∈ H)((y, T̂x) = (y*, x) for all x ∈ D_T̂)},
T̂* y = y* for all y ∈ D_T̂*.

For the purposes of quantum mechanics, the following definition is useful:

Definition 3. If D_T̂ is dense in H and T̂* = T̂, then T̂ is called self-adjoint.

In order to be able to predict outcomes of measurements, we need to somehow obtain numbers from operators, which map a subset of a Hilbert space H into H. For that purpose we define the spectrum of an operator [1]:

Definition 4. The spectrum σ(T̂) of a linear operator T̂ on a Hilbert space H is the set of λ ∈ C such that (T̂ − λÎ) is not a bijection.

At this point, we can postulate the final fundamental axiom of quantum mechanics [1]:

Axiom 3. Each observable A of a given system S is represented by a self-adjoint linear operator Â on its state space H, and the possible outcomes of measurements of the observable A are elements of the spectrum σ(Â) of its corresponding operator Â.

It follows from the theory of linear operators on Hilbert spaces that, since Â is self-adjoint, all possible outcomes of measurements are real numbers; it is also clear from Definition 3 that the domain of any given observable must be dense in the state space H.
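As a quick numerical illustration of the transition-probability rule (a minimal numpy sketch added here; the function name is ad hoc and not part of the thesis), equation (1.1) can be evaluated directly, and its independence of the chosen representatives of the rays checked:

    import numpy as np

    def transition_probability(phi, psi):
        """P(Phi, Psi) = |(phi, psi)|^2 / (||phi|| ||psi||)^2, cf. equation (1.1)."""
        overlap = np.vdot(phi, psi)  # (phi, psi); np.vdot conjugates the first argument
        return abs(overlap) ** 2 / (np.linalg.norm(phi) * np.linalg.norm(psi)) ** 2

    phi = np.array([1.0, 1.0j]) / np.sqrt(2)   # a unit vector representing a pure state
    psi = np.array([1.0, 0.0])

    print(transition_probability(phi, psi))            # 0.5
    # Rescaling either vector leaves the value unchanged, since rays represent states:
    print(transition_probability(3j * phi, -2 * psi))  # still 0.5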

1.2 Normed algebras

The operations performed with linear operators, which act as observables in quantum mechanics, can be generalized into interesting algebraic structures, several of which will be introduced in this section. To start, however, we need to clarify what we mean by an operation. Beginning with an arbitrary set M, we define a binary operation on M as a map φ : M × M → M. We say that φ is associative if φ(φ(a, b), c) = φ(a, φ(b, c)) for all a, b, c ∈ M, and we say that φ is commutative if φ(a, b) = φ(b, a) for all a, b ∈ M.

Let us consider a set R equipped with two binary operations φ_a, φ_b, which shall be called addition and multiplication, and let us denote φ_a(a, b) = a + b, φ_b(a, b) = ab. The triplet (R, φ_a, φ_b) shall be called a ring if the following conditions are satisfied [1]:

1. (R, φ_a) is a commutative group,
2. a(b + c) = ab + ac for all a, b, c ∈ R,
3. (a + b)c = ac + bc for all a, b, c ∈ R.

If there exists e ∈ R such that ae = ea = a for all a ∈ R, then e shall be called a multiplicative identity. Now, let A be a complex vector space. If we introduce a new bilinear binary operation on A, called multiplication, then A becomes a ring, which shall be called a complex linear algebra. We say that A is associative resp. commutative if its multiplication is associative resp. commutative. From this point forward, the term complex linear algebra will be abbreviated to algebra.

To generalize the process of inverting operators, we must realize that in a non-specific algebra the existence of a multiplicative identity is not guaranteed; it must therefore be demanded in the definition of an inverse element.

Definition 5. Let A be an algebra with a multiplicative identity and let a ∈ A. Then a is called invertible if there exists a⁻¹ ∈ A, called the inverse element of a, such that a⁻¹a = aa⁻¹ = e.

By generalizing the operation of Hermitian adjoining as a map on the state space to algebras, the following definition is obtained [1], [2]:

Definition 6. Let A be an algebra. An involution on A is a map * : A → A satisfying:

1. (ξa + b)* = ξ̄a* + b* for all ξ ∈ C and for all a, b ∈ A,
2. (a*)* = a for all a ∈ A,
3. (ab)* = b*a* for all a, b ∈ A.

The element a* will be called the adjoint element of a, and the pair (A, *) will be called an involutive algebra or *-algebra. Note that (a*)* = a implies that * is its own inverse and therefore must be a bijection. Having defined the adjoint element, we are able to proceed analogously as with operators, resulting in the following definition [1]:

Definition 7. Let A be a *-algebra and let a ∈ A. Then a is called:

1. normal iff aa* = a*a,
2. hermitian iff a* = a,
3. a projector iff a* = a = a²,
4. unitary iff A is an algebra with a multiplicative identity and a* = a⁻¹.

As in the case of studying relations between two given vector spaces, we need a map which preserves all algebraic operations (which, in the case of *-algebras, includes involution) [1].

Definition 8. Let A and B be algebras. A map ψ : A → B is called a morphism iff

1. ψ(ξa + b) = ξψ(a) + ψ(b) for all ξ ∈ C and for all a, b ∈ A,
2. ψ(ab) = ψ(a)ψ(b) for all a, b ∈ A.

If ψ is a bijection, then it is called an isomorphism; furthermore, if A = B, then ψ is called an automorphism.

Remark 1. Let A, B be algebras with multiplicative identities and let ψ : A → B be a surjective morphism. Since a = ae = ea for all a ∈ A, we obtain ψ(a) = ψ(ae) = ψ(a)ψ(e) and ψ(a) = ψ(ea) = ψ(e)ψ(a); hence ψ(e) is a multiplicative identity in B, and by its uniqueness ψ(e) = e. Similarly, ψ(a) = ψ(a + θ) = ψ(a) + ψ(θ) for all a ∈ A, proving that ψ(θ) = θ, where θ denotes the zero vector.

Definition 9. Let A and B be *-algebras. A morphism ψ : A → B is called a *-morphism iff ψ(a*) = (ψ(a))* for all a ∈ A. If ψ is a bijection, then it is called a *-isomorphism; furthermore, if A = B, then ψ is called a *-automorphism.

In a normed vector space, two well-known relations are postulated: ‖a + b‖ ≤ ‖a‖ + ‖b‖, between the norm of a sum of two vectors and the norms of its summands, and ‖θ‖ = 0 for the norm of the zero vector. Similar relations are given in the definition of a normed algebra with regard to multiplication [1]:

Definition 10. A normed algebra is an algebra A satisfying:

1. A is a normed vector space with the norm ‖·‖,
2. ‖ab‖ ≤ ‖a‖ ‖b‖ for all a, b ∈ A,
3. if A contains a multiplicative identity e, then ‖e‖ = 1.

If A is complete with respect to its norm, then it is called a Banach algebra. To finish, the relation between the norm and involution must be discussed, defining another two important types of algebras [1].

Definition 11. A normed involutive algebra A is called a normed *-algebra iff ‖a*‖ = ‖a‖ for all a ∈ A. A normed *-algebra complete with respect to its norm is called a Banach *-algebra.

Definition 12. A Banach *-algebra A shall be called a C*-algebra iff ‖a*a‖ = ‖a‖² for all a ∈ A.

Note that ‖a*a‖ = ‖a‖² already implies ‖a*‖ = ‖a‖: since ‖a‖² = ‖a*a‖ ≤ ‖a*‖ ‖a‖, we get ‖a‖ ≤ ‖a*‖; applying the same argument to a* gives ‖a*‖ ≤ ‖a‖, and therefore ‖a*‖ = ‖a‖.

Remark 2. In quantum mechanics with a separable Hilbert space H, the mathematical framework can be generalized in the form of the C*-algebra postulate [3]: a quantum system is characterized by a triplet (S, A, ⟨·,·⟩) where

1. A, the set of its observables, is the collection of all the hermitian elements h of a C*-algebra 𝒜;
2. S, the set of its states, is the collection of all real-valued, positive linear functionals φ on 𝒜, normalized by the condition ⟨φ, e⟩ = 1;
3. ⟨·,·⟩ is the prediction rule, a map ⟨·,·⟩ : S × A → R which attributes to every pair (φ, h) the value ⟨φ, h⟩ = φ(h), interpreted as the expectation of the observable h when the system is in the state φ.

1.3 Quantum mechanics in finite dimension

Quantum systems with a finite-dimensional state space (such as when describing the spin of a particle) can be described by assigning the vector space C^n of ordered n-tuples of complex numbers x = (x_1, x_2, ..., x_n)^T with the standard inner product (x, y) = Σ_{i=1}^n x̄_i y_i. An observable A, which is a hermitian linear operator on C^n, is represented by its matrix ^B A in a chosen basis B = (e_k)_{k=1}^n of C^n, defined as (^B A)_ij = e_i^#(Ae_j), where e_k^# denote the vectors of the dual basis of B. Using this definition, we can see that composing two observables A, B is equivalent to matrix multiplication [4]:

\[ (^B(AB))_{ij} = e_i^\#(ABe_j) = e_i^\#\Big(A\sum_{k=1}^n e_k^\#(Be_j)\,e_k\Big) = \sum_{k=1}^n (^B B)_{kj}\, e_i^\#(Ae_k) = \sum_{k,l=1}^n (^B A)_{lk} (^B B)_{kj}\, e_i^\#(e_l) = \sum_{k=1}^n (^B A)_{ik} (^B B)_{kj}. \]

Furthermore, by Riesz's lemma [1], [4], for any continuous linear functional ϕ on a Hilbert space H there exists a vector x ∈ H such that ϕ can be written uniquely in the form ϕ(y) = (x, y) for all y ∈ H. Assuming orthonormality of B, this statement allows us to write the matrix ^B A in the form

\[ ^B A = \begin{pmatrix} (e_1, Ae_1) & (e_1, Ae_2) & \cdots & (e_1, Ae_n) \\ (e_2, Ae_1) & (e_2, Ae_2) & \cdots & (e_2, Ae_n) \\ \vdots & \vdots & \ddots & \vdots \\ (e_n, Ae_1) & (e_n, Ae_2) & \cdots & (e_n, Ae_n) \end{pmatrix}. \]

Lastly, matrix multiplication is associative [4], i.e. for any three n × n matrices A, B, C:

\[ (A(BC))_{ij} = \sum_{k=1}^n A_{ik}(BC)_{kj} = \sum_{k,l=1}^n A_{ik} B_{kl} C_{lj} = \sum_{l=1}^n (AB)_{il} C_{lj} = ((AB)C)_{ij}. \]

Thus the vector space containing all observables of a given finite-dimensional system can be represented by the vector space C^{n,n} of n × n matrices, which together with the operation of matrix multiplication forms an associative algebra, denoted M_n(C). Properties of this algebra are studied in the following chapters.
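The correspondence between composition of operators and matrix multiplication is easy to check numerically. The following numpy sketch (an added illustration with ad hoc names, assuming an orthonormal basis given by the columns of a unitary matrix) builds ^B T via (^B T)_{ij} = (e_i, T e_j) and verifies ^B(AB) = (^B A)(^B B):

    import numpy as np

    n = 3
    rng = np.random.default_rng(1)
    A_op = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # operator A on C^n
    B_op = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # operator B on C^n

    # An orthonormal basis B = (e_1, ..., e_n), taken as the columns of a unitary matrix:
    U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

    def matrix_in_basis(T, U):
        """(^B T)_{ij} = (e_i, T e_j), with the standard inner product on C^n."""
        return U.conj().T @ T @ U

    # Composition of operators corresponds to multiplication of their matrices:
    lhs = matrix_in_basis(A_op @ B_op, U)
    rhs = matrix_in_basis(A_op, U) @ matrix_in_basis(B_op, U)
    assert np.allclose(lhs, rhs)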

Chapter 2

Associative algebras M_n(C) of complex n × n matrices

2.1 Involution, inner product and norms

In this section we define an involution on M_n(C) in such a way that it forms a C*-algebra, and we provide a definition of an inner product on M_n(C). We begin by observing that the operation of Hermitian adjoining of a given matrix A, A ↦ A^H, defined by (A^H)_ij = Ā_ji, satisfies the definition of an involution:

1. ((ξA + B)^H)_ij = (ξA + B)̄_ji = ξ̄Ā_ji + B̄_ji = ξ̄(A^H)_ij + (B^H)_ij,
2. ((A^H)^H)_ij = ((A^H)_ji)̄ = (Ā_ij)̄ = A_ij,
3. ((AB)^H)_ij = ((AB)_ji)̄ = Σ_{k=1}^n Ā_jk B̄_ki = Σ_{k=1}^n (B^H)_ik (A^H)_kj = (B^H A^H)_ij,

for all i, j ∈ n̂ and for all ξ ∈ C. Therefore the pair (M_n(C), ^H) constitutes a *-algebra, where the multiplicative identity is the identity matrix I (and the zero vector is the zero matrix O, O_ij = 0 for all i, j ∈ n̂). Note that Hermitian adjoining in general maps the vector space C^{p,q} of p × q matrices bijectively onto C^{q,p}. In the following, the term involution will refer to the operation of Hermitian adjoining. Note also that if A ∈ M_n(C) is invertible, then inversion commutes with involution:

AA⁻¹ = A⁻¹A = I ⟹ (A⁻¹)^H A^H = A^H (A⁻¹)^H = I^H = I ⟹ (A⁻¹)^H = (A^H)⁻¹,

justifying the abbreviated notation (A⁻¹)^H = (A^H)⁻¹ = A^{−H}. Using involution, it is possible to define an analogue of the standard inner product on C^n. Define the trace of a matrix A as the sum of its diagonal elements, i.e. Tr(A) = Σ_{i=1}^n A_ii for all A ∈ M_n(C).

Definition 13 (Hilbert-Schmidt inner product). Let A, B ∈ M_n(C). The Hilbert-Schmidt inner product of A and B is defined as ⟨A, B⟩ = Tr(A^H B).

It is easy to see that ⟨A, B⟩ = Σ_{i,j=1}^n Ā_ij B_ij, hence ⟨·,·⟩ is linear in the second argument and ⟨A, B⟩ = ⟨B, A⟩̄; moreover ⟨A, A⟩ = Σ_{i,j=1}^n |A_ij|² ≥ 0 and ⟨A, A⟩ = 0 ⟺ A = O. It is therefore a strictly positive sesquilinear form, inducing the norm ‖A‖ = √⟨A, A⟩ for all A ∈ M_n(C).

Furthermore, M_n(C) is complete with respect to this norm, but ‖I‖ = √n, and so M_n(C) paired with ‖·‖ cannot constitute a normed algebra. To satisfy all three conditions of a normed algebra (Definition 10), another norm is needed [1]:

Definition 14 (Operator norm). Let A ∈ M_n(C) and let x ∈ C^n. The operator norm of A is defined as ‖A‖_op = sup{‖Ax‖_E : ‖x‖_E = 1}, where ‖x‖_E denotes the norm induced by the standard inner product on C^n.

Since all norms on finite-dimensional vector spaces are equivalent [1], M_n(C) is also complete with respect to the operator norm. Its properties are summarized in the following corollary [1], [5]:

Corollary 1. Let A, B ∈ M_n(C). Then:

1. ‖AB‖_op ≤ ‖A‖_op ‖B‖_op,
2. ‖A^H‖_op = ‖A‖_op,
3. ‖A^H A‖_op = ‖A‖²_op,
4. ‖I‖_op = 1.

Proof. First, let a, b ∈ C^n = C^{n,1} with the standard inner product. Then (a, b) = a^H b, and so (Aa, b) = (Aa)^H b = (a^H A^H)b = a^H(A^H b) = (a, A^H b); replacing A by A^H we obtain (a, Ab) = (A^H a, b).

1. Let x ≠ θ with Bx ≠ θ. Then ‖ABx‖_E = (‖ABx‖_E / ‖Bx‖_E) ‖Bx‖_E ≤ sup{‖Az‖_E / ‖z‖_E : z ∈ C^n \ {θ}} ‖Bx‖_E. Since ‖Az‖_E / ‖z‖_E = ‖A(z/‖z‖_E)‖_E, this supremum equals ‖A‖_op. It follows that ‖ABx‖_E ≤ ‖A‖_op ‖Bx‖_E (trivially also when Bx = θ), and the same holds for suprema over ‖x‖_E = 1, thus proving the first part of the corollary.

2. The Cauchy-Schwarz inequality |(a, b)| ≤ ‖a‖ ‖b‖ implies sup{|(a, b)| : ‖b‖ = 1} = ‖a‖. It follows that ‖A‖²_op = sup{(Ax, Ax) : ‖x‖ = 1} = sup{(A^H Ax, x) : ‖x‖ = 1} ≤ sup{‖A^H Ax‖ : ‖x‖ = 1}, and therefore

\[ \|A\|_{op}^2 \le \|A^H A\|_{op} \le \|A^H\|_{op} \|A\|_{op}. \tag{2.1} \]

Dividing both sides by ‖A‖_op, we obtain ‖A‖_op ≤ ‖A^H‖_op, and doing the same for A^H yields ‖A^H‖_op ≤ ‖A‖_op, giving the desired result.

3. Applying the previous result to inequality (2.1) proves the assertion: ‖A‖²_op ≤ ‖A^H A‖_op ≤ ‖A^H‖_op ‖A‖_op = ‖A‖²_op.

4. By definition: ‖I‖_op = sup{‖Ix‖_E : ‖x‖_E = 1} = sup{‖x‖_E : ‖x‖_E = 1} = 1. ∎

Completeness of M_n(C) with respect to the operator norm and points 1, 3 and 4 of Corollary 1 imply that M_n(C) paired with the operator norm forms a C*-algebra.
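The contrast between the two norms, and the C*-identity for the operator norm, can be checked numerically. A minimal numpy sketch (an added illustration; the helper names are ad hoc), using the fact that the operator norm equals the largest singular value:

    import numpy as np

    n = 4
    rng = np.random.default_rng(2)
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

    hs = lambda X, Y: np.trace(X.conj().T @ Y)     # Hilbert-Schmidt inner product <X, Y>
    hs_norm = lambda X: np.sqrt(hs(X, X).real)
    op_norm = lambda X: np.linalg.norm(X, 2)       # largest singular value

    I = np.eye(n)
    print(hs_norm(I))   # sqrt(n): the HS norm violates ||e|| = 1 (Definition 10)
    print(op_norm(I))   # 1.0

    # The C*-identity and the *-norm property hold for the operator norm:
    assert np.isclose(op_norm(A.conj().T @ A), op_norm(A) ** 2)
    assert np.isclose(op_norm(A.conj().T), op_norm(A))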

2.2 Irreducibility, diagonalizability and commutativity

In this section, the relationship between multiplicative commutativity and other properties of commuting matrices is investigated, beginning with commutation on irreducible sets [6].

Definition 15. Let U be an arbitrary subset of M_n(C). The set U is said to be reducible if there exist fixed positive integers p, q and a fixed invertible matrix S such that for each A ∈ U,

\[ S^{-1} A S = \begin{pmatrix} A_{11} & A_{12} \\ O & A_{22} \end{pmatrix}, \]

where A_11 ∈ C^{p,p}, A_12 ∈ C^{p,q}, A_22 ∈ C^{q,q} and O is the zero q × p matrix. Otherwise U is said to be irreducible.

Lemma 1 (Schur's lemma). Let U be an irreducible subset of M_n(C) and let M ∈ M_n(C) be a fixed matrix such that for each A ∈ U there exists Ã ∈ M_n(C) satisfying AM = MÃ. Then either M = O or M is invertible. Furthermore, if Ã = A (so that M commutes with each element of U), then there exists λ ∈ C such that M = λI.

Proof. Suppose that rank(M) = r with 0 < r < n, and write

\[ M = P \begin{pmatrix} I_r & O \\ O & O \end{pmatrix} Q, \]

where P, Q are invertible and I_r is the r × r identity matrix. Then for each A ∈ U,

\[ AM = M\tilde{A} \iff (P^{-1}AP) \begin{pmatrix} I_r & O \\ O & O \end{pmatrix} = \begin{pmatrix} I_r & O \\ O & O \end{pmatrix} (Q\tilde{A}Q^{-1}). \tag{2.2} \]

Put

\[ P^{-1}AP = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, \qquad Q\tilde{A}Q^{-1} = \begin{pmatrix} \tilde{A}_{11} & \tilde{A}_{12} \\ \tilde{A}_{21} & \tilde{A}_{22} \end{pmatrix}, \]

where A_11, Ã_11 are r × r matrices, A_12, Ã_12 are r × (n−r) matrices, A_21, Ã_21 are (n−r) × r matrices and A_22, Ã_22 are (n−r) × (n−r) matrices. Then (2.2) implies that

\[ \begin{pmatrix} A_{11} & O \\ A_{21} & O \end{pmatrix} = \begin{pmatrix} \tilde{A}_{11} & \tilde{A}_{12} \\ O & O \end{pmatrix}. \tag{2.3} \]

Thus A_21 = O for every A ∈ U, which contradicts the irreducibility of U. Hence r must be 0 or n, and so either M = O or M is invertible. This proves the first part of the lemma.

Now suppose AM = MA for each A ∈ U, choose λ as any eigenvalue of M (which must exist due to the fundamental theorem of algebra), and let x be a corresponding eigenvector. Then

(M − λI)x = Mx − λx = λx − λx = θ = 0x,

hence 0 ∈ σ(M − λI), i.e. M − λI is singular. In addition, A(M − λI) = AM − A(λI) = MA − (λI)A = (M − λI)A for each A ∈ U, so by the first part of the lemma M − λI is either O or invertible; being singular, it must satisfy M − λI = O, i.e. M = λI. This proves the second part of the lemma. ∎

An important example of an irreducible set is given in the following corollary:

Corollary 2. M_n(C) is an irreducible set.

Proof. Let S ∈ M_n(C) be an arbitrary invertible matrix, let p, q ∈ n̂ with p + q = n, and let R ∈ C^{q,p}, R ≠ O. Then the matrix S \begin{pmatrix} O & O \\ R & O \end{pmatrix} S^{-1} ∈ M_n(C) satisfies

\[ S^{-1}\left( S \begin{pmatrix} O & O \\ R & O \end{pmatrix} S^{-1} \right) S = \begin{pmatrix} O & O \\ R & O \end{pmatrix} \ne \begin{pmatrix} A_{11} & A_{12} \\ O & A_{22} \end{pmatrix} \]

for any A_11 ∈ C^{p,p}, A_12 ∈ C^{p,q}, A_22 ∈ C^{q,q}. Thus no choice of S, p, q reduces all of M_n(C), and M_n(C) is an irreducible set. ∎

The following theorem describes the set of n × n matrices commuting with all elements of M_n(C), which will be needed in the subsequent chapters.

Theorem 1. Let M ∈ M_n(C). Then M commutes with all elements of M_n(C) if and only if M = λI for some λ ∈ C.

Proof. It is trivial that M = λI commutes with all elements of M_n(C). To prove the converse, we apply Lemma 1 to the set M_n(C). ∎

A simple way of describing a linear operator A on a vector space V_n of finite dimension n is by giving the images of the vectors e_1, e_2, ..., e_n composing a basis of V_n. A is called a diagonalizable operator iff there exist a basis B = (e_i)_{i=1}^n and λ_i ∈ C such that Ae_i = λ_i e_i for all i ∈ n̂. The basis B is called a diagonal basis of A. The set of vectors in V_n satisfying Ax = λx for a given linear operator A is called the eigenspace of A corresponding to λ ∈ C [4]; it shall be denoted Eig(A, λ). Any matrix X ∈ M_n(C) can be proclaimed a matrix of some linear operator on C^n in the same manner as in Chapter 1, so this definition carries over to M_n(C): X is diagonalizable iff there exists a diagonal basis of C^n for the operator defined by X. The following definitions and lemma are needed to study the relationship between commutativity and diagonalizability of operators.

Definition 16. Let V_n be a vector space of finite dimension n and let M be a set of linear operators on V_n. Then M is called simultaneously diagonalizable iff there exists a basis B of V_n such that B is a diagonal basis of all A ∈ M.

Definition 17. Let A be a linear operator on a vector space V_n of finite dimension n and let W be a subspace of V_n. We say that W is A-invariant iff A(W) ⊂ W [4].

In the following, the relation "W is a subspace of V" will be denoted W ⊂⊂ V.

Lemma 2. Let V_n be a vector space of finite dimension n such that V_n = ⊕_{i=1}^k W_i, where W_i ⊂⊂ V_n for all i ∈ k̂, and let B be a diagonalizable linear operator on V_n such that W_i is B-invariant for all i ∈ k̂. Then B is diagonalizable on each W_i, i ∈ k̂.

Proof. First, we prove the statement for k = 2, denoting W_1 = U and W_2 = W. Let V_n = span((f_i)_{i=1}^n), where (f_i)_{i=1}^n is a diagonal basis of B, and let λ_i be the eigenvalue of B corresponding to f_i for every i ∈ n̂. Since each f_i decomposes uniquely as f_i = u_i + w_i with u_i ∈ U and w_i ∈ W, we obtain

Bf_i = λ_i f_i = λ_i u_i + λ_i w_i and Bf_i = B(u_i + w_i) = Bu_i + Bw_i.

Since λ_i u_i, Bu_i ∈ U and λ_i w_i, Bw_i ∈ W, uniqueness of the decomposition gives Bu_i = λ_i u_i and Bw_i = λ_i w_i. We now prove that span((u_i)_{i=1}^n) = U and span((w_i)_{i=1}^n) = W. Clearly span((u_i)_{i=1}^n) ⊂ U and span((w_i)_{i=1}^n) ⊂ W, so

dim span((u_i)_{i=1}^n) ≤ dim U, dim span((w_i)_{i=1}^n) ≤ dim W.

Assume dim span((u_i)_{i=1}^n) + r = dim U with r > 0. Since span((u_i)_{i=1}^n) ⊕ span((w_i)_{i=1}^n) ⊇ span((f_i)_{i=1}^n) = V_n, we get dim U − r + dim span((w_i)_{i=1}^n) ≥ n, which is equivalent to dim span((w_i)_{i=1}^n) ≥ n − dim U + r = dim W + r > dim W, a contradiction. Therefore, by choosing dim U linearly independent vectors from (u_i)_{i=1}^n and dim W linearly independent vectors from (w_i)_{i=1}^n, we obtain diagonal bases of B in U and W respectively. In the general case V_n = ⊕_{i=1}^k W_i, we apply this result (k − 1) times to the vector spaces

V_n = (⊕_{i=2}^k W_i) ⊕ W_1, V^(2) := ⊕_{i=2}^k W_i = (⊕_{i=3}^k W_i) ⊕ W_2, V^(3) := ⊕_{i=3}^k W_i = (⊕_{i=4}^k W_i) ⊕ W_3, etc.,

thus proving the lemma. ∎

The following theorem describes how diagonalizability and commutativity are related. Note that an analogous theorem holds for matrices from M_n(C).

Theorem 2. Let V_n be a vector space of finite dimension n and let M be an arbitrary set of linear operators on V_n. Then M is a set of simultaneously diagonalizable operators if and only if M is a set of mutually commuting diagonalizable operators.

Proof. Let A, B ∈ M. If there exists a basis D = (x_i)_{i=1}^n in which both matrices ^D A and ^D B are diagonal, and if λ_i^(A) and λ_i^(B) are the eigenvalues of A and B corresponding to x_i respectively, then we obtain:

ABx_i = A(λ_i^(B) x_i) = λ_i^(B)(Ax_i) = λ_i^(B) λ_i^(A) x_i = λ_i^(A) λ_i^(B) x_i = λ_i^(A) Bx_i = B(λ_i^(A) x_i) = BAx_i.

Therefore, for any y ∈ V_n, y = Σ_{i=1}^n α_i x_i with α_i ∈ C for all i ∈ n̂:

ABy = AB Σ_{i=1}^n α_i x_i = Σ_{i=1}^n α_i ABx_i = Σ_{i=1}^n α_i BAx_i = BAy.

We have proved that simultaneously diagonalizable operators commute. To prove the converse, let x_{i,j}^(B) denote the i-th eigenvector corresponding to the j-th eigenvalue of B, which we shall denote λ_j^(B), and let η_j^(B) be its geometric multiplicity. Then

BAx_{i,j}^(B) = ABx_{i,j}^(B) = A(λ_j^(B) x_{i,j}^(B)) = λ_j^(B) Ax_{i,j}^(B),

so Ax_{i,j}^(B) ∈ Eig(B, λ_j^(B)). Furthermore, let y_j ∈ Eig(B, λ_j^(B)) for each λ_j^(B) ∈ σ(B), so y_j = Σ_{k=1}^{η_j^(B)} β_k x_{k,j}^(B); then

Ay_j = Σ_{k=1}^{η_j^(B)} β_k Ax_{k,j}^(B) ∈ Eig(B, λ_j^(B)),

which means that Eig(B, λ_j^(B)) is A-invariant for each λ_j^(B) ∈ σ(B). Due to the fact that V_n = ⊕_{j=1}^{|σ(B)|} Eig(B, λ_j^(B)), according to Lemma 2 there exists a basis B of V_n such that ^B A is diagonal on each Eig(B, λ_j^(B)) and therefore also on V_n. It is easy to see that ^B B is also diagonal. ∎
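Theorem 2 can be made concrete numerically. The sketch below (an added illustration, not part of the thesis) builds two commuting hermitian matrices from a common eigenbasis and then recovers a common diagonal basis from the eigenvectors of a generic real combination A + tB, which almost surely separates the joint eigenspaces of a commuting hermitian pair:

    import numpy as np

    n = 4
    rng = np.random.default_rng(3)
    V, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    A = V @ np.diag(rng.normal(size=n)) @ V.conj().T   # hermitian, diagonal basis V
    B = V @ np.diag(rng.normal(size=n)) @ V.conj().T   # same diagonal basis, so [A, B] = 0
    assert np.allclose(A @ B, B @ A)

    # Eigenvectors of a generic combination simultaneously diagonalize both:
    t = rng.normal()
    _, W = np.linalg.eigh(A + t * B)
    for X in (A, B):
        D = W.conj().T @ X @ W
        assert np.allclose(D, np.diag(np.diag(D)))     # W is a common diagonal basis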

2.3 Schur's decomposition and diagonalizability of normal matrices

It is well known that every diagonalizable matrix B ∈ M_n(C) satisfies B = PDP⁻¹, where P denotes the transition matrix from the original basis to a diagonal basis of B and D denotes a diagonal matrix. A more general decomposition holds for every A ∈ M_n(C), as given by the following theorem [7].

Theorem 3 (Schur's decomposition theorem). Let A ∈ M_n(C). Then there exist a unitary matrix U ∈ M_n(C) and an upper triangular matrix T ∈ M_n(C) such that U^H A U = T.

Proof. The theorem is obviously true for n = 1. Assume that it holds for n − 1, i.e. for every A_1 ∈ M_{n−1}(C) there exist a unitary matrix U_1 ∈ M_{n−1}(C) and an upper triangular matrix T_1 ∈ M_{n−1}(C) such that U_1^H A_1 U_1 = T_1. Let x_1 be an eigenvector of A corresponding to λ ∈ σ(A); without loss of generality, assume ‖x_1‖_E = 1. By applying the Gram-Schmidt algorithm, there exists an orthonormal basis of C^n (with respect to the standard inner product) containing x_1; denote its vectors x_1, x_2, ..., x_n and put Q = (x_1, x_2, ..., x_n). It is easy to see that Q ∈ M_n(C) is unitary. Then

\[ Q^H A Q = \begin{pmatrix} x_1^H \\ x_2^H \\ \vdots \\ x_n^H \end{pmatrix} A\,(x_1, x_2, \ldots, x_n) = \begin{pmatrix} x_1^H \\ \vdots \\ x_n^H \end{pmatrix} (\lambda x_1, A x_2, \ldots, A x_n) = \begin{pmatrix} \lambda & q^H \\ \theta & A_1 \end{pmatrix}, \]

where q ∈ C^{n−1} and A_1 ∈ M_{n−1}(C).

Now let U = Q \begin{pmatrix} 1 & \theta^H \\ \theta & U_1 \end{pmatrix}. Since both \begin{pmatrix} 1 & \theta^H \\ \theta & U_1 \end{pmatrix} and Q are unitary, U is unitary. It remains to prove that U^H A U is upper triangular:

\[ U^H A U = \begin{pmatrix} 1 & \theta^H \\ \theta & U_1^H \end{pmatrix} Q^H A Q \begin{pmatrix} 1 & \theta^H \\ \theta & U_1 \end{pmatrix} = \begin{pmatrix} 1 & \theta^H \\ \theta & U_1^H \end{pmatrix} \begin{pmatrix} \lambda & q^H \\ \theta & A_1 \end{pmatrix} \begin{pmatrix} 1 & \theta^H \\ \theta & U_1 \end{pmatrix} = \begin{pmatrix} \lambda & q^H U_1 \\ \theta & U_1^H A_1 U_1 \end{pmatrix} = \begin{pmatrix} \lambda & q^H U_1 \\ \theta & T_1 \end{pmatrix}. \]

Since T_1 is upper triangular by the induction hypothesis, the proof is complete. ∎

As a consequence of this theorem, we prove another equivalent characterization of normal matrices:

Lemma 3. Let T ∈ M_n(C) be an upper triangular matrix. Then T is normal iff it is diagonal.

Proof. The statement is obvious for n = 1. We proceed by induction, writing

\[ T = \begin{pmatrix} \alpha & z^H \\ O & S \end{pmatrix}, \qquad T^H = \begin{pmatrix} \bar{\alpha} & O \\ z & S^H \end{pmatrix}, \]

where α ∈ C, z ∈ C^{n−1} and S ∈ C^{n−1,n−1} is upper triangular. The equation T^H T = T T^H gives

\[ \begin{pmatrix} |\alpha|^2 & \bar{\alpha} z^H \\ \alpha z & z z^H + S^H S \end{pmatrix} = \begin{pmatrix} |\alpha|^2 + z^H z & z^H S^H \\ S z & S S^H \end{pmatrix}, \]

which implies ‖z‖_E = 0 and thus z = θ, whence S^H S = S S^H. By the induction hypothesis, the upper triangular normal matrix S is diagonal, which gives the desired result. ∎

Lemma 4. Let A, U ∈ M_n(C) and let U be unitary. Then U^H A U is normal iff A is normal.

Proof. Assuming that A is normal:

U^H AU(U^H AU)^H = U^H AUU^H A^H U = U^H AA^H U = U^H A^H AU = U^H A^H UU^H AU = (U^H AU)^H U^H AU.

Assuming the converse, U^H AU(U^H AU)^H = (U^H AU)^H U^H AU gives U^H AA^H U = U^H A^H AU, and multiplying this equation by U from the left and by U^H from the right gives AA^H = A^H A, i.e. A is normal. ∎

Theorem 4. Let A ∈ M_n(C). Then A is normal iff there exists a unitary matrix U ∈ M_n(C) such that A is diagonalizable by U.

Proof. Let A ∈ M_n(C); by Schur's decomposition theorem it can be written in the form A = U T U^H, where U is unitary and T is upper triangular. By Lemma 4, A is normal iff T is normal. Lemma 3 states that T is normal iff T is diagonal, so A is normal iff it is diagonalizable by U. ∎

An alternative proof of the above theorem can be found in [8].
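Both Theorem 3 and Theorem 4 are directly observable with scipy's Schur routine (an added illustration; the construction of the normal matrix N is an ad hoc choice):

    import numpy as np
    from scipy.linalg import schur

    n = 4
    rng = np.random.default_rng(4)

    # A generic matrix: Schur gives A = Z T Z^H with Z unitary, T upper triangular.
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    T, Z = schur(A, output='complex')
    assert np.allclose(Z @ T @ Z.conj().T, A)
    assert np.allclose(np.tril(T, -1), 0)            # T is upper triangular

    # A normal (generically non-hermitian) matrix: its Schur form T is diagonal,
    # i.e. A is unitarily diagonalizable, in line with Lemma 3 and Theorem 4.
    D = np.diag(rng.normal(size=n) + 1j * rng.normal(size=n))
    V, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    N = V @ D @ V.conj().T
    assert np.allclose(N @ N.conj().T, N.conj().T @ N)   # N is normal
    Tn, Zn = schur(N, output='complex')
    assert np.allclose(Tn, np.diag(np.diag(Tn)))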

2.4 Automorphisms of M_n(C)

Definition 18. Let A be an invertible element of M_n(C). The inner automorphism Ad_A is defined by Ad_A(X) = A⁻¹XA for all X ∈ M_n(C) [9].

Note that linearity of Ad_A follows from the linearity of matrix multiplication and that Ad_A(XY) = A⁻¹(XY)A = (A⁻¹XA)(A⁻¹YA) = Ad_A(X)Ad_A(Y), hence Ad_A preserves multiplication. Since Ad_A(X) = A⁻¹XA = O ⟺ X = O, we have ker(Ad_A) = {O}, so Ad_A is a bijection, and it therefore satisfies the definition of an automorphism for every invertible A ∈ M_n(C). The following theorem describes the properties of inner automorphisms in relation to their generating matrices [9], [10].

Theorem 5. Let A, B be invertible elements of M_n(C). Then:

1. Ad_A and Ad_B commute iff there exists ω_n ∈ C such that AB = ω_n BA and ω_n^n = 1.
2. Ad_A is a *-automorphism iff it is generated by a unitary matrix.
3. Ad_A is diagonalizable iff A is diagonalizable.

Proof.

1. By definition, Ad_A ∘ Ad_B = Ad_B ∘ Ad_A means (BA)⁻¹X(BA) = (AB)⁻¹X(AB) for all X ∈ M_n(C), i.e. (AB)(BA)⁻¹X = X(AB)(BA)⁻¹, so by Theorem 1, (AB)(BA)⁻¹ = ω_n I, i.e. AB = ω_n BA. Taking determinants, det(AB) = ω_n^n det(BA), and therefore ω_n^n = 1.

2. First, let A be unitary. It follows that (Ad_A(X))^H = (A⁻¹XA)^H = A^H X^H (A⁻¹)^H = A⁻¹X^H A = Ad_A(X^H), so Ad_A is a *-automorphism. Conversely, assuming Ad_A(X^H) = (Ad_A(X))^H for all X, i.e. A⁻¹X^H A = A^H X^H A^{−H}, is equivalent to AA^H X^H = X^H AA^H, and therefore by Theorem 1 there exists λ ∈ C such that AA^H = λI, where λ = (AA^H)_ii = Σ_j A_ij Ā_ij = Σ_j |A_ij|² > 0. Since AA^H = λI implies A^H = λA⁻¹ and hence A^H A = λI as well, A is normal, the matrix U = (1/√λ)A is unitary, and since A⁻¹ = (1/λ)A^H, the automorphism Ad_A can be written in the form

Ad_A(X) = (1/λ)A^H X A = ((1/√λ)A)^H X ((1/√λ)A) = Ad_U(X)

for all X ∈ M_n(C); thus Ad_A is generated by a unitary matrix.

3. Since the standard basis of M_n(C) can be written in the form E_ij = e_i e_j^T, where (e_i) denotes the standard basis of C^n, it follows that Ad_A(E_ij) = (A⁻¹e_i)(A^T e_j)^T, so under the identification M_n(C) ≅ C^n ⊗ C^n we have Ad_A = A⁻¹ ⊗ A^T. It can easily be proven that both A⁻¹ and A^T are diagonalizable iff A is diagonalizable, and since a tensor product is diagonalizable iff both its components are diagonalizable [10], the assertion is proven. ∎
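Part 2 of Theorem 5 is easy to probe numerically. A short numpy sketch (an added illustration; Ad is an ad hoc helper) checks that an inner automorphism generated by a random unitary matrix preserves both multiplication and involution:

    import numpy as np

    n = 3
    rng = np.random.default_rng(5)
    U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))  # unitary
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    Y = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

    Ad = lambda A, X: np.linalg.inv(A) @ X @ A    # Ad_A(X) = A^{-1} X A

    assert np.allclose(Ad(U, X @ Y), Ad(U, X) @ Ad(U, Y))         # morphism property
    assert np.allclose(Ad(U, X.conj().T), Ad(U, X).conj().T)      # *-property, U unitary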

It is easy to see that the standard basis (E_ij)_{i,j=1}^n of M_n(C) satisfies the relation

E_ij E_kl = O for j ≠ k, E_ij E_kl = E_il for j = k,

and that an automorphism ψ preserves this relation. In addition, let {M_1, M_2, ..., M_k} be a finite set of matrices from M_n(C) and let α_1, α_2, ..., α_k ∈ C. Since ψ is a bijection, we obtain

Σ_{i=1}^k α_i ψ(M_i) = ψ(Σ_{i=1}^k α_i M_i) = ψ(O) = O ⟺ Σ_{i=1}^k α_i M_i = O,

hence ψ preserves linear independence, and therefore the image of the standard basis constitutes another basis of M_n(C), resulting in the subsequent definition:

Definition 19. Any basis (B_ij)_{i,j=1}^n of M_n(C) satisfying

B_ij B_kl = O for j ≠ k, B_ij B_kl = B_il for j = k

shall be called a generalized standard basis of M_n(C).

Theorem 6 (Skolem-Noether theorem, specialized). All automorphisms of M_n(C) are inner, i.e. for each automorphism ψ there exists an invertible matrix G such that Ad_G = ψ.

Proof (as suggested in [11]). First, let (E_ij)_{i,j=1}^n be the standard basis of M_n(C) and denote ψ(E_ij) = F_ij for all i, j ∈ n̂. It is clear that (F_ij)_{i,j=1}^n is a generalized standard basis of M_n(C). In addition, it is possible to write F_11 in the form

F_11 = Σ_{i,j=1}^n α_ij E_ij,

where there exists at least one ordered pair of indices, say (p, q), such that α_pq ≠ 0. Second, we define:

A = α_pq⁻¹ E_1p F_11, F = Σ_{i=1}^n E_i1 A F_1i, B = F_11 E_q1, G = Σ_{j=1}^n F_j1 B E_1j.

The fact that both (E_ij)_{i,j=1}^n and (F_ij)_{i,j=1}^n are generalized standard bases implies AF_11 = A and BE_11 = B. To proceed, the three following identities are needed:

AB = α_pq⁻¹ E_1p F_11 F_11 E_q1 = α_pq⁻¹ E_1p F_11 E_q1 = α_pq⁻¹ E_1p (Σ_{i,j=1}^n α_ij E_ij) E_q1 = α_pq⁻¹ (Σ_{j=1}^n α_pj E_1j) E_q1 = α_pq⁻¹ α_pq E_11 = E_11,

BA = F_11 E_q1 α_pq⁻¹ E_1p F_11 = α_pq⁻¹ F_11 E_qp F_11.

Since (F_ij)_{i,j=1}^n is a generalized standard basis, F_11 X F_11 ∈ span{F_11} for every X ∈ M_n(C); expanding F_11 E_qp F_11 = Σ_{i,l=1}^n α_iq α_pl E_il = c Σ_{i,l=1}^n α_il E_il and comparing the (p, q) entries gives c = α_pq, hence BA = α_pq⁻¹ α_pq F_11 = F_11. Furthermore, it is easily seen that Σ_{i=1}^n F_ii = Σ_{i=1}^n ψ(E_ii) = ψ(Σ_{i=1}^n E_ii) = ψ(I) = I.

Third, we prove that G is invertible with G⁻¹ = F:

GF = Σ_{i,j=1}^n F_j1 B E_1j E_i1 A F_1i = Σ_{i=1}^n F_i1 B E_11 A F_1i = Σ_{i=1}^n F_i1 BA F_1i = Σ_{i=1}^n F_i1 F_11 F_1i = Σ_{i=1}^n F_ii = I,

FG = Σ_{i,j=1}^n E_i1 A F_1i F_j1 B E_1j = Σ_{i=1}^n E_i1 A F_11 B E_1i = Σ_{i=1}^n E_i1 AB E_1i = Σ_{i=1}^n E_i1 E_11 E_1i = Σ_{i=1}^n E_ii = I.

Fourth, we prove that G E_ij G⁻¹ = F_ij for each i, j ∈ n̂:

G E_ij G⁻¹ = G E_ij F = Σ_{k,l=1}^n F_k1 B E_1k E_ij E_l1 A F_1l = F_i1 B E_11 A F_1j = F_i1 BA F_1j = F_i1 F_11 F_1j = F_ij.

Thus the inner automorphism X ↦ GXG⁻¹, i.e. Ad_{G⁻¹} = Ad_F, acts in exactly the same way on the standard basis (E_ij)_{i,j=1}^n as ψ does, hence ψ is inner. ∎
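The proof of Theorem 6 is constructive, so it can be replayed numerically. The following sketch (an added illustration under the assumption that the automorphism is handed to us as a black box; all names are ad hoc) recovers a generating matrix from the images F_ij = ψ(E_ij) exactly as in the proof and verifies the conjugation identity:

    import numpy as np

    n = 3
    rng = np.random.default_rng(0)
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # generic, hence invertible
    psi = lambda X: np.linalg.inv(M) @ X @ M   # an automorphism, treated as a black box

    E = lambda i, j: np.eye(n)[:, [i]] @ np.eye(n)[[j], :]      # standard basis E_ij
    F = [[psi(E(i, j)) for j in range(n)] for i in range(n)]    # F_ij = psi(E_ij)

    p, q = np.unravel_index(np.argmax(abs(F[0][0])), (n, n))    # indices with alpha_pq != 0
    A = E(0, p) @ F[0][0] / F[0][0][p, q]
    B = F[0][0] @ E(q, 0)
    G = sum(F[j][0] @ B @ E(0, j) for j in range(n))

    # G reproduces psi by conjugation on the whole standard basis, hence everywhere:
    assert all(np.allclose(G @ E(i, j) @ np.linalg.inv(G), F[i][j])
               for i in range(n) for j in range(n))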

Chapter 3

Classification of fine gradings of M_n(C)

3.1 Gradings and automorphisms of *-algebras

In a non-specific *-algebra A, we define operations with its subsets in the following way: for M, N ⊂ A and α ∈ C, let αM + N = {αm + n : m ∈ M, n ∈ N}, MN = {mn : m ∈ M, n ∈ N}, M* = {m* : m ∈ M}. This notation allows us to define a grading in an elegant fashion [9], [10], [12]:

Definition 20. Let A be a *-algebra and let I be an index set. A grading Γ of a *-algebra A is a decomposition of A into a direct sum of subspaces, Γ : A = ⊕_{i∈I} A_i, such that for any pair of indices i, j ∈ I there exists an index k ∈ I with the property A_i A_j ⊂ A_k, and for any index l ∈ I there exists an index m ∈ I such that A_l* ⊂ A_m.

Definition 21. Let A be a *-algebra and let Γ be a grading of A. A grading Γ̃ is called a refinement of Γ iff for each Ã_i constituting Γ̃ there exists A_j constituting Γ such that Ã_i ⊂ A_j, where at least one inclusion is proper. A grading which cannot be refined further is called fine.

Certain gradings of a finite-dimensional *-algebra A can be obtained by looking at the group of all its *-automorphisms. If a *-automorphism ψ is diagonalizable and a, b are its eigenvectors with nonzero eigenvalues µ, ν ∈ C respectively, then clearly

ψ(ab) = ψ(a)ψ(b) = (µa)(νb) = (µν)ab and ψ(a*) = (ψ(a))* = (µa)* = µ̄a*.

This means that ab is either an eigenvector of ψ with the eigenvalue µν or the zero element, and that a* is an eigenvector of ψ corresponding to µ̄. A given diagonalizable *-automorphism ψ therefore leads to a decomposition of A into the sum of eigenspaces of ψ with corresponding eigenvalues λ_i,

Γ : A = ⊕_{λ_i ∈ σ(ψ)} Eig(ψ, λ_i),

which satisfies the definition of a grading [9]. Refinements of a given grading can be obtained by adjoining further diagonalizable *-automorphisms commuting with ψ. Suppose that φ and ψ are commuting diagonalizable *-automorphisms, i.e. ψ∘φ = φ∘ψ. It follows that for any eigenvector a of ψ with the eigenvalue λ,

λφ(a) = φ(λa) = (φ∘ψ)(a) = (ψ∘φ)(a) = ψ(φ(a)), hence φ(a) ∈ Eig(ψ, λ),

and so each Eig(ψ, λ) is φ-invariant. Diagonalizability of φ (according to Lemma 2) implies that φ is diagonalizable on Eig(ψ, λ) for each λ ∈ σ(ψ), and φ therefore defines a refinement of Γ. Moreover, assuming that ψ is invertible, i.e. 0 ∉ σ(ψ), we obtain

ψ(a) = λa ⟺ ψ⁻¹(a) = (1/λ)a,

thus ψ⁻¹ has the same eigenspaces as ψ, only corresponding to the inverses of their respective eigenvalues. It can be easily proven that ψ⁻¹ preserves involution and multiplication and is therefore a *-automorphism:

x* = ((ψ∘ψ⁻¹)(x))* = (ψ(ψ⁻¹(x)))* = ψ((ψ⁻¹(x))*) ⟹ ψ⁻¹(x*) = (ψ⁻¹(x))*,
ψ(ψ⁻¹(x)ψ⁻¹(y)) = [(ψ∘ψ⁻¹)(x)][(ψ∘ψ⁻¹)(y)] = xy ⟹ ψ⁻¹(x)ψ⁻¹(y) = ψ⁻¹(xy)

for all x, y ∈ A. The above observations imply that a pair consisting of a *-automorphism and its inverse defines the same grading, and therefore a given grading Γ and its refinements are induced by a group G of invertible diagonalizable *-automorphisms. If Γ is fine, then G must be maximal, i.e. for each ψ ∉ G there exists some φ ∈ G such that ψφ ≠ φψ. Maximal groups of commuting diagonalizable invertible *-automorphisms shall be called MAD-groups of a *-algebra A [9].
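For n = 2 this mechanism can be seen explicitly. The sketch below (an added numpy illustration; Ad and the basis labels are ad hoc) verifies that I, Q, P and PQ are common eigenvectors of the commuting *-automorphisms Ad_Q and Ad_P, so that M_2(C) decomposes into four one-dimensional joint eigenspaces, the fine grading induced by the corresponding MAD-group:

    import numpy as np

    Q = np.diag([1.0, -1.0]).astype(complex)            # Q_2 (omega_2 = -1)
    P = np.array([[0, 1], [1, 0]], dtype=complex)       # P_2

    def Ad(A):
        """The inner *-automorphism Ad_A(X) = A^{-1} X A."""
        Ainv = np.linalg.inv(A)
        return lambda X: Ainv @ X @ A

    basis = {'I': np.eye(2, dtype=complex), 'Q': Q, 'P': P, 'PQ': P @ Q}
    for name, X in basis.items():
        for gen_name, U in (('Ad_Q', Q), ('Ad_P', P)):
            Y = Ad(U)(X)
            lam = Y[np.nonzero(X)][0] / X[np.nonzero(X)][0]   # the eigenvalue, here +1 or -1
            assert np.allclose(Y, lam * X)
            print(f'{gen_name}({name}) = {lam.real:+.0f} * {name}')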

3.2 Classification of MAD-groups of M_n(C)

Let us consider a *-automorphism ψ of M_n(C). According to Theorem 6, it is an inner automorphism. It is clear that for unitary matrices U, V the implication U = V ⟹ Ad_U = Ad_V holds. Assuming the converse, i.e. Ad_U = Ad_V, Theorem 1 gives UV⁻¹ = αI, where α^n = 1. Hence *-automorphisms define an equivalence relation

U ∼ V ⟺ U = αV, α^n = 1,

on the group of unitary matrices, which shall be denoted U(n). By defining multiplication of equivalence classes [U][V] = [UV], a group isomorphic to the group of all *-automorphisms of M_n(C) is obtained. According to Theorem 5 and Theorem 6, all *-automorphisms of M_n(C) are diagonalizable, invertible and inner. We show that there is a one-to-one correspondence between MAD-groups and unitary Ad-groups [10], defined below:

Definition 22. A subgroup G′ of U(n) shall be called a unitary Ad-group iff

1. for any pair U, V ∈ G′ there exists ω_n ∈ C such that UV = ω_n VU;
2. G′ is maximal, i.e. for each M ∉ G′ there exists U ∈ G′ such that UM ≠ ω_n MU for any n-th root of unity ω_n.

Obviously, if a unitary Ad-group contains a matrix U, it also contains the whole equivalence class [U]. Denote for any MAD-group G:

G_Ad = {U ∈ U(n) : Ad_U ∈ G}

and conversely, for any unitary Ad-group G′:

Ad(G′) = {Ad_U : U ∈ G′}.

According to Theorem 5, the first property of Definition 22 is satisfied for any pair Ad_U, Ad_V ∈ G. It is easy to see that G_Ad is maximal and that Ad(G_Ad) = G. We are interested in the classes of MAD-groups; in what follows we describe their suitable representatives. If a unitary Ad-group G′ is commutative, i.e. UV = VU for all U, V ∈ G′, then all its elements are simultaneously diagonalizable by some unitary matrix. In addition, maximality of G′ implies that it is conjugated to the group of all n × n diagonal unitary matrices, denoted U_D(n). The following lemma describes the noncommutative case UV = ω_k VU [10].

Lemma 5. Let A, B be diagonalizable invertible elements of M_n(C) such that AB = ω_k BA, where ω_k = exp(2πi/k) and k divides n. Then there exists an invertible matrix P such that

\[ P^{-1}AP = \operatorname{diag}(1, \omega_k, \omega_k^2, \ldots, \omega_k^{k-1}) \otimes \operatorname{diag}(d_1, d_2, \ldots, d_{n/k}) \]

and

\[ P^{-1}BP = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & 0 & \cdots & 0 & 0 \end{pmatrix} \otimes \operatorname{diag}(\delta_1, \delta_2, \ldots, \delta_{n/k}), \]

where arg d_i, arg δ_i ∈ [0, 2π/k) for all i = 1, 2, 3, ..., n/k.

Proof. First, order the eigenvalues λ_i of A with respect to arg λ_i, so that 0 ≤ arg λ_1 ≤ arg λ_2 ≤ ... ≤ arg λ_n < 2π. Consider the subspace F ⊂ C^n spanned by the eigenvectors with arg λ_i ∈ [0, 2π/k) and denote dim F = s. Since

AB^k = (AB)B^{k−1} = ω_k BAB^{k−1} = ω_k² B²AB^{k−2} = ... = ω_k^k B^k A = B^k A,

A and B^k are commuting diagonalizable matrices, so by Theorem 2 a basis {e_1, e_2, ..., e_s} of F consisting of common eigenvectors of A and B^k can be chosen, i.e.

Ae_i = λ_i e_i, B^k e_i = ν_i^k e_i,

where geometric multiplicity greater than 1 is allowed and where we may assume arg ν_i ∈ [0, 2π/k). Let us define

\[ f_{(i-1)k+j+1} = \frac{1}{\nu_i^{\,j}} B^{j} e_i, \qquad i = 1, 2, \ldots, s, \quad j = 0, 1, \ldots, k-1, \]

so that f_1 = e_1, f_2 = (1/ν_1)Be_1, ..., f_k = (1/ν_1^{k−1})B^{k−1}e_1, f_{k+1} = e_2, and so on. Obviously, Af_1 = Ae_1 = λ_1 f_1, Af_2 = A((1/ν_1)Be_1) = ω_k λ_1 f_2, Af_3 = A((1/ν_1²)B²e_1) = ω_k² λ_1 f_3, and in general Af_{(i−1)k+j+1} = ω_k^j λ_i f_{(i−1)k+j+1}, which implies that f_1, f_2, ..., f_{sk} are eigenvectors of A, each corresponding to a different eigenvalue; therefore these vectors are linearly independent.

Suppose sk < n. Then there exists an eigenvector x of A linearly independent of f_1, f_2, ..., f_{sk}. Let x correspond to λ ∈ σ(A) with arg λ ∈ [(2π/k)j, (2π/k)(j+1)). Since AB^{−j} = ω_k^{−j}B^{−j}A, we obtain A(B^{−j}x) = ω_k^{−j}λ B^{−j}x with arg(ω_k^{−j}λ) ∈ [0, 2π/k), so B^{−j}x ∈ F. Thus B^{−j}x can be written as a linear combination of e_1, ..., e_s, and by applying B^j to both sides of this equation, we see that x can be written as a linear combination of B^j e_1, ..., B^j e_s, which is a contradiction. So sk = n and (f_i)_{i=1}^n forms a basis of C^n. For a suitable P we have obtained:

\[ P^{-1}AP = \operatorname{diag}(\lambda_1, \omega_k\lambda_1, \ldots, \omega_k^{k-1}\lambda_1, \ldots, \lambda_{n/k}, \omega_k\lambda_{n/k}, \ldots, \omega_k^{k-1}\lambda_{n/k}) = \operatorname{diag}(1, \omega_k, \ldots, \omega_k^{k-1}) \otimes \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_{n/k}). \]

Moreover,

Bf_1 = Be_1 = ν_1 f_2, Bf_2 = (1/ν_1)B²e_1 = ν_1 f_3, ..., Bf_k = (1/ν_1^{k−1})B^k e_1 = (ν_1^k/ν_1^{k−1})e_1 = ν_1 f_1,

and similarly for the other eigenvalues of B^k, giving the result:

\[ P^{-1}BP = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & 0 & \cdots & 0 & 0 \end{pmatrix} \otimes \operatorname{diag}(\nu_1, \nu_2, \ldots, \nu_{n/k}). \; \blacksquare \]

Also note that in the statement of Lemma 5, it is possible to replace the interval [0, 2π/k) by the interval (−π/k, π/k].

Definition 23. The k × k matrix

\[ P_k = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & 0 & \cdots & 0 & 0 \end{pmatrix} \]

will be denoted P_k, and the matrix diag(1, ω_k, ω_k², ..., ω_k^{k−1}), where ω_k = exp(2πi/k), will be denoted Q_k. The group

𝒫_k = {ω_k^l Q_k^m P_k^n : l, m, n = 0, 1, 2, ..., k−1}

will be called Pauli's group. It can be easily checked that 𝒫_k is a group of unitary matrices satisfying

P_k^k = Q_k^k = I_k, P_k Q_k = ω_k Q_k P_k.

Furthermore, if k is even, then (P_k Q_k)^k = −I, and if k is odd, then (P_k Q_k)^k = I. These matrices were first studied by Weyl in [13].
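The defining relations of Definition 23 are straightforward to verify numerically. A minimal numpy sketch (an added illustration; the helper name is ad hoc):

    import numpy as np

    def pauli_pair(k):
        """The k x k Weyl matrices: the cyclic shift P_k and the clock matrix Q_k."""
        P = np.roll(np.eye(k), -1, axis=0)   # ones on the superdiagonal and at (k, 1)
        Q = np.diag(np.exp(2j * np.pi * np.arange(k) / k))
        return P, Q

    k = 5
    P, Q = pauli_pair(k)
    w = np.exp(2j * np.pi / k)
    mp = np.linalg.matrix_power

    assert np.allclose(mp(P, k), np.eye(k))            # P_k^k = I_k
    assert np.allclose(mp(Q, k), np.eye(k))            # Q_k^k = I_k
    assert np.allclose(P @ Q, w * Q @ P)               # P_k Q_k = omega_k Q_k P_k
    assert np.allclose(mp(P @ Q, k), (-1) ** (k + 1) * np.eye(k))  # -I for even k, I for odd k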

Lemma 6. 𝒫_k ⊗ 𝒫_m is conjugated to 𝒫_{km} iff k and m are relatively prime [14].

Proof. First, let us put C^k = span((e_i^(1))_{i=0}^{k−1}) and C^m = span((e_j^(2))_{j=0}^{m−1}), where (e_i^(1))_{i=0}^{k−1} and (e_j^(2))_{j=0}^{m−1} denote the standard bases of the respective spaces. The elements of C^{km} can be written in the form

Σ_{i=0}^{k−1} Σ_{j=0}^{m−1} α_ij (e_i^(1) ⊗ e_j^(2)),

where α_ij ∈ C. Now, it can be easily seen that f_s = P_{km}^s (e_0^(1) ⊗ e_0^(2)), s = 0, 1, 2, ..., km−1, runs exactly once through all the vectors e_i^(1) ⊗ e_j^(2) iff the following sets coincide:

{(s mod k, s mod m) : s = 0, 1, 2, ..., km−1} = {(i, j) : i = 0, 1, ..., k−1; j = 0, 1, ..., m−1}.

Obviously, the inclusion ⊂ holds. Hence it suffices to show that the equality

(s mod k, s mod m) = (t mod k, t mod m), i.e. (s − t mod k, s − t mod m) = (0, 0),

implies s = t. Indeed, s − t is divisible by both k and m and |s − t| ∈ {0, 1, 2, ..., km−1}. If k and m have no common divisors, then their lowest common multiple is km, hence s − t = 0. Therefore, writing f_s = e_p^(1) ⊗ e_q^(2), we have

P_{km} f_s = f_{s+1 mod km} = e_{p−1 mod k}^(1) ⊗ e_{q−1 mod m}^(2) = (P_k e_p^(1)) ⊗ (P_m e_q^(2)) = (P_k ⊗ P_m)(e_p^(1) ⊗ e_q^(2)).

Similarly,

Q_{km} f_s = ω_{km}^s f_s = (ω_k^p e_p^(1)) ⊗ (ω_m^q e_q^(2)) = (Q_k e_p^(1)) ⊗ (Q_m e_q^(2)) = (Q_k ⊗ Q_m)(e_p^(1) ⊗ e_q^(2)).

Hence both bases contain the same vectors in a different order, and therefore any A ∈ 𝒫_{km} can be unitarily conjugated to a tensor product A_k ⊗ A_m, where A_k ∈ 𝒫_k and A_m ∈ 𝒫_m. ∎

Now, let A, B ∈ M_n(C) and let ω_n = exp(2πi/n). The set

{C ∈ M_n(C) : (∃ s, t ∈ Z)(AC = ω_n^s CA, BC = ω_n^t CB)}

shall be called the ω_n-commutant of the matrices A and B and will be denoted {A, B}^(ω_n). Obviously, ω_n^n = 1. For ω_n = 1 we denote {A, B}^(1) = {A, B}′ and call this set the commutant of the matrices A and B; in other words, {A, B}′ is the set of matrices commuting with both A and B.
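The exponents s, t appearing in the ω_n-commutant can be computed explicitly for elements of Pauli's group. The following sketch (an added illustration; the helper is ad hoc, and the closed form s = (bc − ad) mod k for U = Q^a P^b, V = Q^c P^d follows from P_k Q_k = ω_k Q_k P_k) extracts the commutation exponent numerically:

    import numpy as np

    k = 4
    w = np.exp(2j * np.pi / k)
    P = np.roll(np.eye(k), -1, axis=0)
    Q = np.diag(w ** np.arange(k))
    mp = np.linalg.matrix_power

    def commutation_exponent(U, V, k):
        """Return s with U V = w^s V U, assuming the pair omega-commutes."""
        lhs, rhs = U @ V, V @ U
        i, j = np.unravel_index(np.argmax(abs(rhs)), rhs.shape)  # a nonzero entry
        return int(np.rint(np.angle(lhs[i, j] / rhs[i, j]) * k / (2 * np.pi))) % k

    for a, b, c, d in [(1, 0, 0, 1), (1, 2, 3, 1), (2, 1, 1, 3)]:
        U = mp(Q, a) @ mp(P, b)
        V = mp(Q, c) @ mp(P, d)
        assert commutation_exponent(U, V, k) == (b * c - a * d) % k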

Lemma 7. Let D_1 = diag(d_1, d_2, ..., d_m) and D_2 = diag(δ_1, δ_2, ..., δ_m), where n = km and arg d_i, arg δ_i ∈ [0, 2π/k) for all i ∈ m̂. Put A = Q_k ⊗ D_1 and B = P_k ⊗ D_2. Then {A, B}^(ω_k) = 𝒫_k ⊗ {D_1, D_2}′ [10].

Proof. Let us first consider C ∈ {A, B}′ ⊂ {A, B}^(ω_k). A is a diagonal matrix with the numbers d_i ω_k^s on its diagonal, ω_k = exp(2πi/k). Since arg d_i ∈ [0, 2π/k), we have d_i ω_k^s ≠ d_i ω_k^t for s ≠ t, and so AC = CA implies that C = ⊕_{j=1}^k C_j is block diagonal, where C_j ∈ M_m(C) and for each j it holds that

C_j D_1 = D_1 C_j.   (3.1)

From the equality BC = CB we obtain

C_{j+1} D_2 = D_2 C_j   (3.2)

for all j ∈ k̂, where we put C_{k+1} = C_1. Iterating (3.2) k times yields C_1 D_2^k = D_2^k C_1; for the matrix elements of C_1, denoted γ_ij, the following equation is obtained:

γ_ij δ_j^k = δ_i^k γ_ij   (3.3)

for each i, j. If γ_ij ≠ 0, then δ_j^k = δ_i^k and hence δ_j = δ_i, i.e.

C_1 D_2^k = D_2^k C_1 ⟹ C_1 D_2 = D_2 C_1.   (3.4)

Moreover, C_1 D_2 = D_2 C_1 together with (3.2) gives C_1 = C_2, and analogously C_1 = C_2 = ... = C_k. Therefore C = I_k ⊗ C_1, where C_1 ∈ {D_1, D_2}′.

Now let us consider H ∈ {A, B}^(ω_k), i.e. HA = ω_k^s AH and HB = ω_k^t BH. Put C = A^x B^y H, where x, y ∈ Z. Using BA = ω_k AB, we get

CA = (A^x B^y H)A = ω_k^{s+y} AC, CB = (A^x B^y H)B = ω_k^{t−x} BC.

For y = −s and x = t, C ∈ {A, B}′, and therefore C = A^t B^{−s} H = I_k ⊗ C_1 with C_1 ∈ {D_1, D_2}′, leading to

H = B^s A^{−t}(I_k ⊗ C_1) = (P_k^s Q_k^{−t}) ⊗ (D_2^s D_1^{−t} C_1) ∈ 𝒫_k ⊗ {D_1, D_2}′,

since obviously D_2^s D_1^{−t} C_1 ∈ {D_1, D_2}′. ∎

We now show that the previous lemmas imply that noncommutative unitary Ad-groups are conjugated to tensor products of Pauli's groups with unitary Ad-groups acting in a smaller dimension.

Lemma 8. A noncommutative subgroup G′ of U(n) is a unitary Ad-group iff it is unitarily conjugated to the tensor product 𝒫_{n/s_0} ⊗ G″ for some divisor s_0 of n and some unitary Ad-subgroup G″ ⊂ U(s_0) [10].

Proof. Let G′ be a noncommutative unitary Ad-group, i.e. every pair U, V ∈ G′ satisfies UV = ω_n^{s(U,V)} VU, where ω_n = e^{2πi/n} and s(U, V) ∈ {0, 1, 2, ..., n−1}. Denote s_0 = min{s(U, V) > 0 : U, V ∈ G′} and choose U_0, V_0 for which s(U_0, V_0) = s_0. Since U_0^k V_0^l ∈ G′ for all k, l ∈ N_0, the set Z(G′) = {(ω_n^{s_0})^k I_n : k ∈ N_0} forms a group isomorphic to a subgroup of the cyclic group Z_n = {ω_n^k : k ∈ N_0}, and therefore s_0 divides n.

If s_0 = 1, then any W ∈ G′ lies also in {U_0, V_0}^(ω_n). We show that the analogous statement is true for s_0 > 1. Suppose the contrary, i.e. there exists W ∈ G′ such that

ks_0 < s(W, U_0) < (k+1)s_0 for some k = 1, 2, 3, ..., n/s_0.

Then

0 < s(W, U_0) − ks_0 = s(V_0^{n/s_0 − k} W, U_0) < s_0,

which contradicts the minimality of s_0 because V_0^{n/s_0 − k} W ∈ G′. Therefore s(W, U_0) = ks_0, and analogously s(W, V_0) = ls_0 for some k, l = 0, 1, 2, ..., n/s_0 − 1. Thus any W ∈ G′ lies in {U_0, V_0}^(ω_n^{s_0}), i.e. G′ ⊂ {U_0, V_0}^(ω_n^{s_0}). Using Lemma 5, we may assume that there exists a unitary matrix A ∈ U(n) such that

A^H U_0 A = Q_{n/s_0} ⊗ D_1, A^H V_0 A = P_{n/s_0} ⊗ D_2,

and using Lemma 7 we obtain

G′ ⊂ {U_0, V_0}^(ω_n^{s_0}) = 𝒫_{n/s_0} ⊗ {D_1, D_2}′,