Loop Quantum Gravity
In what follows we sketch the present status of Loop Quantum Gravity (LQG) in more technical terms and then describe the research directions currently pursued in Erlangen.
Classical Formulation
As was already mentioned in the general introduction, it is possible to cast General Relativity (GR) in 4D into the form of a gauge theory with structure group \mathrm{Spin}(1,3) and additional gauge symmetries resulting from the space-time diffeomorphism invariance of the theory. In the Hamiltonian framework, one exploits the global hyperbolicity of the space-time metrics under consideration on a given differential manifold M, which is possible only if the manifold admits a foliation into spacelike hypersurfaces of fixed topology and differential structure, conveniently described as a family of embeddings of a fixed spatial manifold S. Upon partly fixing the \mathrm{Spin}(1,3) invariance one obtains a phase space description of the classical theory in terms of connections A of a principal \mathrm{SU}(2) bundle over S together with sections E of the vector bundle associated to it by the adjoint representation, which can be viewed as Lie algebra valued pseudo two-forms on S. Physically one can think of E as encoding information about the 3D geometry of S and of A as encoding information about the extrinsic curvature of S when embedded into M. Accordingly, A necessarily contains time derivatives of E along the leaves of the foliation and one can therefore roughly imagine A as the velocity of E.
Upon Legendre transform, (A,E) become a canonical pair with standard equal time Poisson brackets proportional to Newton’s constant N. On this phase space three types of constraints act by their Hamiltonian flow: 1. the \mathrm{SU}(2) Gauss constraint C, 2. the spatial diffeomorphism constraint D and 3. the Hamiltonian constraint H. The Gauss constraint generates via its Hamiltonian flow the \mathrm{SU}(2) gauge transformations (changes of the trivialisations of the principal bundle) that one expects in any \mathrm{SU}(2) gauge theory. The other two constraints are simply the temporal-spatial and temporal-temporal projections of Einstein’s equations. They do not contain second order time derivatives and are therefore not evolution equations but rather constraints on the initial data of the solutions to the evolution equations, which are contained in the remaining spatial-spatial projections of Einstein’s equations. The evolution equations are in fact contained in the Hamiltonian flow of a linear combination sD+lH of both constraints in the form of Hamiltonian evolution equations for A and E. The coefficients s and l are called the shift vector field and the lapse scalar on S respectively. These two tensor fields contain the information about the foliation that was used to embed S, while all the other structures A, E, C, D, H are embedding independent. Since the Palatini action does not prefer any embedding over another due to its space-time diffeomorphism invariance, the whole Hamiltonian formulation must in fact be independent of s, l, which enforces D, H to be constraints. In particular, sD+lH is not a Hamiltonian which generates time evolution; rather, it generates spacetime diffeomorphisms, which in GR are considered a gauge symmetry (invariance under coordinate transformations).
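Schematically, and suppressing convention dependent factors (signs, density weights, the Barbero-Immirzi parameter), the structure just described can be summarised as follows; this is a sketch for orientation rather than a statement in any particular convention:

\[
\{A^i_a(x),\,E^b_j(y)\} \;\propto\; N\,\delta^b_a\,\delta^i_j\,\delta^{(3)}(x,y),
\qquad
g \;=\; -\,l^2\,dt^2 + q_{ab}\,(dx^a + s^a dt)(dx^b + s^b dt),
\]
\[
H_{\mathrm{can}} \;=\; \int_S d^3x\;\big(\Lambda^i C_i + s^a D_a + l\,H\big) \;\approx\; 0,
\]

where q_{ab} is the spatial metric encoded in E and \Lambda^i is the Lagrange multiplier of the Gauss constraint; the last relation expresses that the canonical "Hamiltonian" is a pure linear combination of constraints and hence vanishes on the constraint surface.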
It transpires that in GR there is no canonical Hamiltonian that generates time evolution and that observable quantities (i.e. gauge invariant functions on the phase space) have trivial flow with respect to sD+lH for any s, l. This is a situation unfamiliar from other field theories on Minkowski space (or any other background space-time) but it is in fact a generic feature of any generally covariant theory. This fact is sometimes referred to as the problem of time. Physically it means that in GR one is forced to think carefully about the very meaning of time evolution. Roughly speaking, time evolution is the relation between changes of quantities with respect to each other. Thus one chooses a system of rods and clocks everywhere on M and performs measurements with respect to those. On Minkowski space we implicitly assume that such an inertial material reference system is globally available and changes covariantly under Poincare transformations, and we consider Minkowski space as filled with these test observers. In GR we cannot do that. A test observer is a mathematical idealisation that is physically invalid because any rod and clock comes with a contribution, however tiny, to the stress energy tensor. It transpires that the material reference system is actually coupled gravitationally, and this is what makes it possible to reintroduce a notion of time evolution in GR with general covariance intact. The notion of time evolution and the resulting physical (gauge invariant and non-vanishing) Hamiltonian of course depend on the chosen material reference system.
Quantisation
When quantising a classical theory, one can follow two major routes: Canonical and path integral quantisation. In LQG both approaches are intensively studied.
Canonical Quantisation
In the canonical approach, one quantises the Poisson brackets of the A, E variables and promotes them to equal time commutators. More precisely, one picks a sub *-algebra of the Poisson algebra of functions on the phase space which separates its points and promotes it to an abstract *-algebra B. Roughly speaking that means to replace Poisson brackets by commutators times i\hbar, such as [A,E]=i\hbar N 1, and to promote reality conditions to *-involution relations. Here \hbar is Planck’s constant, i is the imaginary unit and 1 is the unit of the abstract *-algebra B. The combination \hbar N is the Planck area of about 10^{-66} cm^2, and this is how the Planck length enters the stage. In a second step, one studies representations R of B as concrete operators on Hilbert spaces V. It is well known that in QFT the representation problem is very rich and that generically one has an uncountably infinite number of unitarily inequivalent representations available. In order to pick the interesting ones, physical input is required. The physical input usually comes in the form of the desire to implement gauge transformations, symmetry transformations or time evolution as unitary transformations. Sometimes one also imposes regularity or continuity requirements.
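As a quick sanity check of the quoted order of magnitude, the Planck area (restoring factors of c, it reads \hbar N/c^3) can be evaluated numerically. A minimal Python sketch, with standard values of the constants hard-coded for illustration:

```python
# Planck area l_P^2 = hbar * G / c^3, expressed in cm^2.
hbar = 1.054571817e-34   # J s
G    = 6.67430e-11       # m^3 kg^-1 s^-2  (Newton's constant, N in the text)
c    = 2.99792458e8      # m s^-1

planck_area_m2  = hbar * G / c**3
planck_area_cm2 = planck_area_m2 * 1e4   # 1 m^2 = 10^4 cm^2
print(f"Planck area ~ {planck_area_cm2:.2e} cm^2")  # ~ 2.6e-66 cm^2
```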
In LQG one uses an algebra B that is generated by functions f of holonomies A(e) of the connection along one dimensional paths e and by the Weyl operators built from the fluxes E(F) of E through two dimensional surfaces F. Roughly speaking, one can think of these as the (path ordered) exponentiated magnetic flux of A (if e is a closed loop, use the non-Abelian Stokes theorem to write the line integral as a surface integral through a surface F bounded by e) and the exponentiated electric flux of E. The functions f depend on an arbitrary but finite number of these holonomies and thus on a finite number of paths e. Without loss of generality one can consider paths that only intersect in their end points, called vertices. The union of these paths thus forms an oriented graph \gamma embedded into S which arises by gluing a generating set of loops. This is how the name Loop Quantum Gravity arose. This choice of B is well motivated by similar considerations in lattice gauge theory, but in contrast to lattice gauge theory one does not work with a fixed graph but with all graphs and all surfaces: this is a continuum theory and not a discrete approximation. Notice that nowhere in the construction of B was a background metric used.
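In formulas, and modulo conventions, the two kinds of elementary variables read as follows (\mathcal{P} denotes path ordering; since E is a pseudo two-form here, its Lie algebra components E_j are integrated directly over the surface):

\[
A(e) \;=\; \mathcal{P}\exp\Big(\int_e A\Big) \;\in\; \mathrm{SU}(2),
\qquad
E_j(F) \;=\; \int_F E_j .
\]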
To pick a (cyclic) representation (R,V) one can now impose the requirement that spatial diffeomorphisms of S are implemented unitarily. One way to guarantee this is to pick a diffeomorphism invariant algebraic (i.e. expectation value) state and to construct the corresponding Gel’fand-Naimark-Segal (GNS) representation. It has turned out that there is exactly one such representation, hence one has an important uniqueness result which adds to the predictive power of LQG. In this representation, the functions f described above become both wave functions and bounded multiplication operators, and the E(F) become self adjoint derivation operators. The scalar product between wave functions defined on graphs \gamma, \gamma' uses the product Haar measure on n copies of \mathrm{SU}(2), where n is the number of edges in the graph given by the union of the graphs \gamma, \gamma'. The Hamiltonian flows of both the Gauss constraint C and the diffeomorphism constraint can be implemented unitarily in this representation. The corresponding Hilbert space V carries an interesting orthonormal basis T which is labelled by a graph \gamma, a set j of half integral spin quantum numbers (one for each edge e of \gamma) and a set i of \mathrm{SU}(2) invariant intertwiners (one for each vertex v of \gamma). The underlying reason for these labels is harmonic analysis on \mathrm{SU}(2), in particular the Peter & Weyl theorem, which says that the matrix element functions on a compact group form an orthonormal basis in the Hilbert space of functions square integrable with respect to the Haar measure, that irreducible representations of \mathrm{SU}(2) are labelled by spin quantum numbers and that intertwiners are maps between representation spaces.
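Schematically, such a basis element and the scalar product then take the form (suppressing the index contractions dictated by the graph):

\[
T_{\gamma,j,i}(A) \;=\; \Big(\bigotimes_{v\in\gamma} i_v\Big)\cdot\Big(\bigotimes_{e\in\gamma} \pi_{j_e}\big(A(e)\big)\Big),
\qquad
\langle T,\,T'\rangle \;=\; \int_{\mathrm{SU}(2)^n} \prod_{k=1}^{n} d\mu_H(h_k)\;\overline{T(h_1,\dots,h_n)}\;T'(h_1,\dots,h_n),
\]

where \pi_j denotes the spin j irreducible representation and \mu_H the Haar measure.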
The Hilbert space carries both area operators \mathrm{Ar}[F] for surfaces F and volume operators \mathrm{Vol}[R] for regions R. They are essentially diagonalised by the so called spin network functions T just described when choosing F and R as follows: Consider a polyhedral partition P of S into polyhedra R with faces F. Pick an interior point v in each R and connect polyhedra R that share a face F by an edge e intersecting F transversally and ending in the respective interior points. The resulting network of edges e and vertices v forms a graph \gamma dual to P. If we colour edges and vertices with spins and intertwiners we obtain a spin network function. The physical interpretation of j and i respectively is now that on T the operator \mathrm{Ar}[F] has eigenvalue \hbar N times the spin quantum number labelling the edge intersecting F, and the operator \mathrm{Vol}[R] has an eigenvalue that is proportional to the Planck volume times a more complicated expression that depends on the intertwiner labelling the vertex v in R and on the spins of the edges adjacent to v. The important feature of both operators is that their spectrum is pure point (discrete), indicating a granular structure of space at the Planck scale. Notice again that neither F nor R has a prescribed area or volume; these are quantum numbers depending on the state on which one probes the corresponding operator. The whole framework is background independent. Area and volume operators are hopelessly ill defined in background dependent Hilbert spaces such as Fock spaces.
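To make the discreteness concrete, the following Python sketch tabulates the first few area eigenvalues of a single edge of spin j intersecting F, in units of the Planck area \hbar N. The eigenvalue formula quoted above (proportional to j) is a simplification; in much of the LQG literature the single-edge spectrum is instead proportional to \sqrt{j(j+1)}, with a prefactor involving the Barbero-Immirzi parameter, so both expressions are listed:

```python
from math import sqrt

# Area eigenvalues for a single transversal edge of spin j, in units
# of the Planck area hbar*N; two conventions, see the lead-in text.
for twice_j in range(1, 9):                 # j = 1/2, 1, 3/2, ..., 4
    j = twice_j / 2
    print(f"j={j:4.1f}:  simplified j = {j:5.2f},  sqrt(j(j+1)) = {sqrt(j*(j+1)):5.3f}")
```

Either way, the key point is visible: the possible areas form a discrete set with a smallest non-zero value of the order of the Planck area.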
The presentation so far was entirely kinematical and we have not yet incorporated any matter. As far as matter is concerned, one can proceed quite similarly as for the gravitational degrees of freedom: the gauge bosons of the standard model are located along the paths of the graph of spin network functions while the fermions and the Higgs field are located at its vertices. The implementation of dynamics consists of two steps: A. implementation of the constraints and B. construction of gauge invariant observables and a physical notion of time. Here we can follow two different strategies: implementation of the constraints before or after quantization, called reduced phase space and Dirac quantization respectively. In the reduced phase space approach one chooses a material reference system, explicitly constructs a Poisson algebra of gauge invariant observables together with a physical Hamiltonian h, solves the constraints still within the classical theory, then builds from it the analog of the algebra B described above and finally looks for a representation of it in which h can be implemented as a self-adjoint operator. In the Dirac approach one does not care about observables to begin with and constructs the spin network Hilbert space V on which one implements the operators D and H. The spin network Hilbert space, however, is not the physical Hilbert space, which is rather the joint kernel of D, H. The joint kernel is typically not a subspace of V and thus must be equipped with a new Hilbert space inner product. A basic technique for doing that uses the methods of rigged Hilbert spaces and the direct integral decomposition. The advantage of the reduced approach is that the classical solution of the constraints is usually simpler than the quantized one; on the other hand it may be hard to construct a representation of the observables. The advantage of the Dirac quantization method is that representations of the kinematical algebra are usually not difficult to find, but the construction of the joint kernel and of the algebra of observables thereon is a non-trivial task. In LQG one therefore works in both directions. In our concrete situation, at least densely defined and closable quantizations of the operator H can be given concretely, while for D one actually works with the unitary operators implementing finite diffeomorphisms (an infinitesimal generator does not exist in this representation).
Path Integral Quantisation
In ordinary QFT a path integral is usually nothing other than an integral over fields which is supposed to compute the S matrix elements, that is, roughly speaking, the matrix elements of the exponentiated Hamiltonian h in the limit of a sufficiently long time lapse between ingoing and outgoing free particles. In GR there is no canonical Hamiltonian. Thus either one chooses a material reference system in order to equip GR with a notion of time and arrive at the notion of an S matrix (reduced phase space approach), or one uses path integral techniques in order to construct the joint kernel by rigging methods. To date it is mainly the second idea that one tries to implement. Thus, rather than constructing the matrix elements of the exponential of D or H, one constructs the matrix elements of the delta distribution \delta(H) (and similarly for D). This formally projects onto the joint kernel since obviously H\,\delta(H)=0. The physical inner product between two solutions is then formally given by the matrix element \left\langle \delta(H) T, \delta(H) T' \right\rangle := \left\langle T, \delta(H) T' \right\rangle, where on the right hand side we use the kinematical inner product and T, T' are spin network functions. Since the delta distribution is formally the functional integral over l of \exp (i l H), we get a formula for the projector very close to the situation in usual QFT, except that the single time parameter has been replaced by a multi-fingered time over which one integrates functionally in addition. This basic idea is difficult to make precise mathematically and at present only exists in a regularized version in which the continuous manifold M is replaced by a sum over discretised spacetime manifolds (and more singular objects). The sum is over all coloured 2-complexes in 4 dimensions whose boundary graphs, spins and intertwiners coincide with those underlying the spin network functions T, T'. Here, a coloured 2-complex carries spins on its faces and intertwiners on its edges, and can be considered as a discrete time evolution of a spin network function. For a fixed 2-complex K the sum over spins and intertwiners is called a spin foam SF over K, and when performing the sum over those K that are compatible with the boundary graphs \gamma and \gamma' of T and T' respectively (i.e. K has to be a cobordism), the resulting object can be recognized as a specific group field theory (GFT) transition amplitude. These are generalizations of the matrix models invented in statistical physics.
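Schematically, the regularized matrix elements then take the form of a sum over coloured 2-complexes; the quotation marks indicate that the equality is heuristic, and the face, edge and vertex amplitudes A_f, A_e, A_v depend on the particular spin foam model:

\[
\langle T,\,\delta(H)\,T'\rangle \;\;\text{``=''}\;\; \sum_{K:\;\partial K = \gamma\cup\gamma'}\;\sum_{\{j_f\},\{i_e\}}\;\prod_f A_f(j_f)\,\prod_e A_e(j_f,i_e)\,\prod_v A_v(j_f,i_e).
\]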
Applications
So far LQG has been applied to mainly two regimes: black holes and cosmology.
In the black hole regime there is a semiclassical framework known as isolated horizons that involves a classical piece and a quantum piece. The quantum piece is easily understood: given a 2-surface F of given classical area \mathrm{Ar}[F], compute the number \mathrm{n}[F] of spin network functions T whose area eigenvalue lies in the interval [\mathrm{Ar}[F]-\hbar N, \mathrm{Ar}[F]+\hbar N]. Then define S[F]:= \ln(\mathrm{n}[F]). We could call this the entropy of the surface F as computed in the microcanonical ensemble defined by the shells of constant area (rather than energy). It is easy to see that this number is infinite for a random surface F. To obtain a finite result one must, to date, feed in the classical information that F is not some arbitrary surface but rather a closed surface of an event horizon of a black hole at least temporarily in equilibrium. When doing that, one interprets F as an information barrier and the entropy as an entanglement entropy that measures our lack of information about what happens in the interior of the black hole. Many spin and intertwiner configurations are recognized as equivalent because they are bulk data; only the surface data, which can be formulated in terms of an \mathrm{SU}(2) or \mathrm{U}(1) Chern-Simons theory, are counted in the entropy of the black hole. Also the constraints H, D are used in order to enlarge the corresponding equivalence classes of data. The result of the computation is then finite and the leading term in S[F] is given by \mathrm{Ar}[F]/(4\hbar N), which is the celebrated Bekenstein-Hawking postulate based on computations of QFT in classical black hole spacetimes. The LQG computation explains the microscopic origin of the entropy as the number of possibilities to form an event horizon area with spin network functions modulo identifications.
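The flavour of such a counting can be conveyed by a deliberately oversimplified toy model: take the simplified single-puncture eigenvalue \hbar N j from above, weight each puncture of spin j by its magnetic degeneracy 2j+1 (no Chern-Simons theory and no equivalence classes, both of which the real computation requires), and count ordered puncture configurations whose spins add up exactly to a given horizon area. A minimal Python sketch under these assumptions:

```python
from math import log

def count_states(total_twice):
    """Number of ordered puncture configurations whose spins j (half
    integers >= 1/2, stored as 2j so that everything is an integer)
    add up to total_twice/2, each puncture weighted by its magnetic
    degeneracy 2j+1.  Computed by dynamic programming."""
    n = [0] * (total_twice + 1)
    n[0] = 1
    for a in range(1, total_twice + 1):
        for tj in range(1, a + 1):            # tj = 2j
            n[a] += (tj + 1) * n[a - tj]      # (2j+1)-fold degeneracy
    return n[total_twice]

for area in (4, 8, 16, 32):                   # horizon area in units of hbar*N
    S = log(count_states(2 * area))
    print(f"area={area:3d}   S=ln(n)={S:8.2f}   S/area={S/area:.4f}")
```

The ratio S/area tends to a constant, i.e. the entropy grows linearly with the area; the actual LQG computation fixes the proportionality constant (and thereby the Barbero-Immirzi parameter) by matching it to the Bekenstein-Hawking value 1/4.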
In the cosmological regime one makes use of the fact that when artificially freezing the inhomogeneous degrees of freedom of GR, a self-consistent sector of cosmological solutions of Einstein’s equations with a finite number of degrees of freedom results. The resulting model can then be quantized using the Hilbert space techniques developed for full LQG, which deviate from those of the Schrödinger representation. The results are quite spectacular: the truncated model predicts a big bounce rather than a big bang; there never was an origin of time. In order to emphasize that these are predictions obtained in a truncated model rather than in full LQG, this line of research was coined Loop Quantum Cosmology (LQC).
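A frequently quoted form of this result is the effective Friedmann equation of LQC which, in the models studied, replaces the classical one by (schematically, with a critical density \rho_c of the order of the Planck density):

\[
\Big(\frac{\dot a}{a}\Big)^2 \;=\; \frac{8\pi N}{3}\,\rho\,\Big(1-\frac{\rho}{\rho_c}\Big),
\]

so that \dot a vanishes at \rho=\rho_c and a contracting branch is joined smoothly to an expanding one: the big bounce.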
Open Research Problems
Our exposition has revealed that LQG is not at all a closed subject but rather a fast evolving field very much under construction. As should have transpired, the least understood aspect of the theory is its dynamics, that is, the Quantum Einstein Equations, which touch on literally every research direction. Below we give a far from exhaustive list of open research problems on which researchers of our team are working themselves or in which they are at least strongly interested:
Spatial Diffeomorphism and Hamiltonian Constraint
Despite the fact that for more than a decade a Quantum Hamiltonian Constraint operator H has been available which is densely defined, closable, has no apparent anomalies (the classical Poisson bracket of two H constraints closes into a D constraint, and the quantum commutator indeed annihilates the kernel of D) and for which an infinite number of exact solutions are known, the situation is not entirely satisfactory because the constraint operator suffers from quantization (e.g. operator ordering) ambiguities which come from the non-polynomial algebraic form of H. In order to fix these ambiguities it would be desirable to reproduce the classical Poisson bracket calculation exactly in the quantum theory. Likewise, while all solutions of the D constraint are explicitly known, their inner product suffers from an ambiguity which needs to be fixed and which stems from the infinite volume of the spatial diffeomorphism group that leaves a trace when one divides it out. The associated computations are very hard because the entire approach is non-perturbative, and thus non-perturbative approximation schemes must be developed.
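For orientation, the classical constraint algebra in question has the schematic hypersurface deformation form

\[
\{D(s),D(s')\} \;\propto\; D(\mathcal{L}_s s'),
\qquad
\{D(s),H(l)\} \;\propto\; H(\mathcal{L}_s l),
\qquad
\{H(l),H(l')\} \;\propto\; D\big(q^{-1}(l\,dl' - l'\,dl)\big),
\]

where \mathcal{L}_s denotes the Lie derivative and q^{-1} the inverse spatial metric; the appearance of this phase space dependent structure function, rather than a structure constant, is precisely what makes an anomaly free quantization so delicate.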
Semiclassical Tools
Semiclassical, even coherent, states have been developed for LQG. These minimize the fluctuations of the A, E variables around classical expectation values and can be used to design non-perturbative approximation schemes. However, these states are not adapted to the dynamics of LQG, that is, neither to the Hamiltonian constraint H nor to any physical Hamiltonian, in the sense that the quantum evolution of a coherent state does not necessarily stay close to the classical trajectory. While this is hardly surprising given that not even for the anharmonic oscillator are stable coherent states known, in GR such states are needed because the universe evolves extremely accurately along the classical trajectory for the foreseeable future and probably has done so since roughly the first second after the point in time called the big bang. Accordingly, the semiclassical toolbox needs to be improved.
Convergence of SF and Relating Canonical and Covariant LQG
Little is known about the convergence of the SF and GFT sums. In the former case one can use quantum groups as regulators (which render the number of irreducible representations to be considered finite); in the latter case one could imagine using renormalisation group techniques in order to control the sum over K.
To date the precise connection between the covariant and the canonical formulation remains to be understood. The reason is that one does not really define a regularized version of \delta(H) on K but rather makes a heuristic path integral ansatz based on the Plebanski formulation of GR. In this version of the action principle, one makes use of the fact that the Palatini action is classically equivalent to the BF action when equipped with an additional simplicity constraint, namely that the B-field is composed out of a vierbein (tetrad). Since one can implement the simplicity constraint by a Lagrange multiplier, the basic idea is to consider the Palatini path integral as a BF theory path integral with the simplicity constraint imposed when integrating over the Lagrange multiplier. It is convenient here that BF theory is a topological Quantum Field Theory (TQFT) with a finite number of physical degrees of freedom whose 2-complex regularization is in fact exact! Since a lot is known about TQFT, the idea is to make use of this fact when imposing the simplicity constraints, which break the huge symmetry of BF theory and give rise to the propagating degrees of freedom of GR. In a certain sense, GR is a perturbation of BF theory. Unfortunately, the precise implementation of the simplicity constraints remains a major challenge to this day. Another challenge is to couple matter, because matter action terms use the vierbein and not the B-field of BF theory. It is also unknown whether the spin foam formula defines the projector \delta(H) for the given Hamiltonian constraint of the canonical theory, which would be the key to linking both approaches.
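Schematically, and suppressing the well known sector ambiguities, the starting point is

\[
S_{BF}[B,A] \;=\; \int_M \mathrm{Tr}\big(B\wedge F(A)\big),
\qquad
\text{simplicity:}\;\; B \;=\; \pm\,{\star}\,(e\wedge e)\;\;\big(\text{or}\;\pm\,e\wedge e\big),
\]

upon which S_{BF} reduces to the Palatini action; the strategy is thus to quantize the unconstrained BF theory first and to impose simplicity directly on the quantum amplitudes.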
Deparametrised Models and Material Reference Systems
As mentioned above one can circumvent the whole set of complications associated with implementing D, H by classically passing to the algebra of observables as defined by a material reference system. Remarkably, this even works on the quantum level when using suitable matter such as comoving dust fields. Of course, it would be far more convincing if one could use a more realistic matter species already contained in the (supersymmetric extension of the) standard model, for instance a dark matter candidate. Accordingly, different options must be tested.
Quantum Cosmology
It would be important to show that the LQC results remain valid when the inhomogeneous quantum degrees of freedom of full LQG are taken into account. To do this, one would like to understand better in what sense LQC can be identified as a sector of LQG. On the other hand, one can try to make contact with cosmological perturbation theory, which is the most important tool for interpreting the WMAP or PLANCK observations theoretically. A major challenge here is how to go beyond first order perturbation theory, since the notion of gauge invariance changes order by order in perturbation theory, and how to quantize the second order theory which, in contrast to the linear order theory, is interacting.
Hawking Radiation and Black Holes
Hawking’s celebrated result that black holes radiate like a black body with a temperature T given by k_BT=\hbar c/(4\pi R), where R=2NM/c^2 is the Schwarzschild event horizon radius of a black hole of mass M, k_B is Boltzmann’s constant and c is the speed of light, uses the framework of QFT on curved (in this case Schwarzschild) spacetimes. That is, geometry stays classical, matter is quantum and geometry-matter interactions are neglected. This can be at best an approximation because of the gravitational redshift: a photon observed at infinity with frequency k_BT/\hbar was created very close to the horizon R with a frequency arbitrarily close to the Planck frequency, where QG effects should be taken into account as well. Even more problematic is the fact that black holes will ultimately explode due to Hawking radiation. As long as the black hole exists it has an entanglement entropy just because outside observers have no access to the physics within R; in principle the missing information still sits within R and the total entropy vanishes. However, when the black hole explodes all information is lost and total entropy is created. It is unclear how this is compatible with unitarity because black hole explosions would turn pure states into mixed states. One would therefore like to repeat Hawking’s computation while taking QG effects into account and thereby find the resolution of these puzzles. A first step would be to turn the partly classical characterization of black holes in temporary equilibrium into a pure quantum condition.
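For scale, the equivalent formula k_BT=\hbar c^3/(8\pi N M) can be evaluated for a solar mass black hole; a minimal Python sketch with the constants hard-coded for illustration:

```python
from math import pi

hbar  = 1.054571817e-34   # J s
G     = 6.67430e-11       # m^3 kg^-1 s^-2 (Newton's constant, N in the text)
c     = 2.99792458e8      # m s^-1
k_B   = 1.380649e-23      # J K^-1
M_sun = 1.989e30          # kg

T = hbar * c**3 / (8 * pi * G * M_sun * k_B)
print(f"Hawking temperature of a solar mass black hole: {T:.2e} K")  # ~ 6e-8 K
```

This is far below the temperature of the cosmic microwave background, which is why such black holes currently absorb more radiation than they emit.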
QFT on Curved Spacetimes
Quite generally one would like to understand better how Minkowski space (or other spacetimes) and the standard matter model thereon turn out to be very good approximations to QG when geometry fluctuations and geometry-matter interactions are negligible. An ansatz similar to the Born-Oppenheimer approximation for systems with fast and slow degrees of freedom (such as electrons and nuclei) comes to mind, where matter and geometry respectively play the roles of the fast and slow degrees of freedom. Closely connected to this is the question of how one can recover physical screening effects of ordinary QFT such as running (i.e. energy dependent) coupling constants. These effects are usually obtained using perturbation theory, to which the non-perturbative framework has no direct access. A more promising approach would be to use Wilsonian renormalisation group ideas and to recover screening effects from the effective action. Quite generally, renormalisation group ideas must be developed for LQG.
Higher dimensions and Supergravity
To date there is no experimental evidence for either supersymmetry or higher dimensions. However, both are fascinating ideas and if realized in nature one should not quantize the 4D Palatini or Einstein-Hilbert action but rather one of the higher dimensional supergravity (SUGRA) actions. Those in 10 and 11 dimensions are particularly interesting because they are supposed to be the low energy effective actions of String Theory and M-Theory respectively and thus provide a possible interface between these theories and LQG. The non-perturbative methods of LQG have turned out to be extendable to most SUGRA theories in 3, 4 and higher dimensions but the research in this direction is very recent and many tasks still have to be carried out.
Computational Physics
Since the foundations of the theory are not yet fully developed, there is some hesitation to use numerical methods in order to compute, for instance, exact solutions to the Hamiltonian constraint. However, there are versions of the theory, valid in certain limits, in which numerical methods can in principle be used quite effectively because one can exploit the closeness of LQG to lattice gauge theory, for which powerful routines are available. For instance, using such methods one may be able to access S-matrix calculations and thus ultimately connect to the language of Feynman diagrams.
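As an illustration of the kind of routine meant here, the following toy sketch (plain numpy, not an LQG computation) builds random SU(2) holonomies on the four links of an elementary lattice loop and evaluates the gauge invariant Wilson loop, the basic building block shared between LQG and lattice gauge theory:

```python
import numpy as np

SIGMA = (np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex))

def su2(theta, n):
    """exp(i*theta*n.sigma) in closed form for a unit vector n:
    a toy stand-in for a holonomy A(e) along a lattice edge e."""
    n_sigma = sum(ni * si for ni, si in zip(n, SIGMA))
    return np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * n_sigma

def random_link(rng):
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)
    return su2(rng.uniform(0.0, np.pi), n)

rng = np.random.default_rng(seed=0)
U = [random_link(rng) for _ in range(4)]   # links around one plaquette
# Path ordered product around the closed loop; the last two links are
# traversed against their orientation, hence the Hermitian conjugates.
loop = U[0] @ U[1] @ U[2].conj().T @ U[3].conj().T
print("normalized Wilson loop:", np.trace(loop).real / 2)
```

Gauge transformations act on the links by conjugation at the lattice sites, so the trace of the loop holonomy is invariant; exactly this kind of quantity is what highly optimised lattice codes evaluate.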
Mathematical Physics
It is important to constantly and critically reexamine the assumptions that went into the results obtained so far. There are many stages in the mathematical foundations of the theory at which one had to make choices guided by physical intuition but which were not strictly imposed by mathematical consistency or experimental input. We therefore keep a critical attitude and encourage researchers to explore alternatives. An example is a recent development suggested by the spin foam formulation, namely to use methods from category theory and higher gauge theory in order to better understand the connection between TQFT and GR at the quantum level. Such inquiries sometimes lead to mathematical spin-offs such as the connection between black holes, Chern-Simons theory, lower dimensional Quantum Gravity, Knot Theory and Conformal Field Theory (CFT), which are of course interesting in their own right.