# Physics as a Generalized Number Theory

Note: Newest contributions are at the top!

Year 2012

### Does the square root of p-adic thermodynamics make sense?

In zero energy ontology the M-matrix is in a well-defined sense a "complex" square root of a density matrix, reducing to the product of a Hermitian square root of the density matrix and a unitary S-matrix. A natural guess is that p-adic thermodynamics possesses this kind of square root, or better to say: is the modulus squared of it.

For fermions the value of the p-adic temperature is however T=1 and thus minimal. It is not possible to construct a real square root by simply taking the square root of the thermodynamical probabilities for various conformal weights. One manner to solve the problem is to assume a quadratic algebraic extension of p-adic numbers in which the p-adic prime splits as p = ππ*, π = m + n(-k)^(1/2). For k=1 the primes with p mod 4 = 1 allow a representation as a product of a Gaussian prime and its conjugate.

For primes with p mod 4 = 3 Gaussian primes do not help. Mersenne primes represent an important class of examples of these primes. Eisenstein primes provide the simplest extension of rationals splitting Mersenne primes. For Eisenstein primes one has k=3, and all ordinary primes satisfying either p=3 or p mod 3 = 1 (true for Mersenne primes) allow this splitting. For the square root of p-adic thermodynamics the complex square roots of probabilities would be given by π^(L_0/T)/Z^(1/2), and the moduli squared would give the thermodynamical probabilities as p^(L_0/T)/Z. Here Z is the partition function.
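The splitting conditions are easy to check numerically. Below is a minimal sketch (the function names are my own, not standard): a brute-force search for p = m^2 + n^2 (Gaussian case, k=1) and p = m^2 + 3n^2 (Eisenstein case, k=3). The Mersenne primes 7, 31, 127 fail the Gaussian test but pass the Eisenstein one, as claimed above.

```python
def gaussian_split(p):
    """Try to write p = m^2 + n^2, i.e. p = pi*pi* with pi = m + n*sqrt(-1).
    Succeeds exactly for p = 2 and the primes with p mod 4 = 1."""
    for m in range(1, int(p**0.5) + 1):
        n_sq = p - m * m
        n = int(round(n_sq**0.5))
        if n > 0 and n * n == n_sq:
            return (m, n)
    return None

def eisenstein_split(p):
    """Try to write p = m^2 + 3*n^2, i.e. p = pi*pi* with pi = m + n*sqrt(-3).
    Succeeds for p = 3 and the primes with p mod 3 = 1, in particular for
    the Mersenne primes 2^q - 1, q odd, since then 2^q - 1 = 1 mod 3."""
    for n in range(1, int((p / 3)**0.5) + 1):
        m_sq = p - 3 * n * n
        m = int(round(m_sq**0.5))
        if m * m == m_sq:
            return (m, n)
    return None

# Mersenne primes are 3 mod 4, so the Gaussian norm form fails for them,
# but the Eisenstein norm form splits each of them:
for p in (7, 31, 127):
    print(p, gaussian_split(p), eisenstein_split(p))
```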

An interesting question is whether T=1 for fermions means that the complex square root of thermodynamics is indeed complex and whether T=2 for bosons means that the square root is actually real.

For background see the chapter Physics as Generalized Number Theory: p-Adicization Program.

### Quantum Mechanics and Quantum Mathematics

Quantum Mathematics (QM) suggests that the basic structures of Quantum Mechanics (QM) might reduce to fundamental mathematical and metamathematical structures, and one can even consider the possibility that Quantum Mechanics reduces to Quantum Mathematics with the mathematician included, or expressing it in a concise manner: QM=QM!

The notes below were stimulated by an observation raising a question about a possible connection between the multiverse interpretation of quantum mechanics and quantum mathematics. The heuristic idea of the multiverse interpretation is that a quantum state repeatedly branches to quantum states which in turn branch again. The possible outcomes of the state function reduction would correspond to different branches of the multiverse, so that one could keep quantum mechanics deterministic if one can give a well-defined mathematical meaning to the branching. Could quantum mathematics allow one to somehow realize the idea about the repeated branching of the quantum universe? Or at least to identify some analog for it? The second question concerns the identification of the preferred state basis in which the branching occurs.

Quantum Mathematics briefly

Quantum Mathematics replaces numbers with Hilbert spaces and arithmetic operations + and × with direct sum ⊕ and tensor product ⊗.

1. The original motivation comes from quantum TGD where direct sum and tensor product are naturally assigned with the two basic vertices analogous to stringy 3-vertex and 3-vertex of Feynman graph. This suggests that generalized Feynman graphs could be analogous to sequences of arithmetic operations allowing also co-operations of ⊕ and ⊗.
2. One can assign to natural numbers, integers, rationals, algebraic numbers, transcendentals, and their p-adic counterparts for various primes p Hilbert spaces with formal dimension given by the number in question. Typically the dimension of these Hilbert spaces in the ordinary sense is infinite. Von Neumann algebras known as hyper-finite factors of type II_1 assume as a convention that the dimension of the basic Hilbert space is one although it is infinite in the standard sense of the word. Therefore this Hilbert space has sub-spaces with dimension which can be any number in the unit interval. Now however also negative and even complex, quaternionic, and octonionic values of Hilbert space dimension become possible.
3. The decomposition to a direct sum matters, unlike for an abstract Hilbert space, just as it does in the case of physical systems, where the decomposition to a direct sum of representations of symmetries is a standard procedure with deep physical significance. Therefore the abstract Hilbert space is replaced with a more structured object. For instance, the expansion ∑_n x_n p^n of a p-adic number in powers of p defines a decomposition of an infinite-dimensional Hilbert space to a direct sum ⊕_n x_n⊗p^n of the tensor products x_n⊗p^n. It seems that one must modify the notion of General Coordinate Invariance since number theoretic anatomy distinguishes between the representations of a space-time point in various coordinates. The interpretation would be in terms of cognition. For instance, the representation of the Neper number e requires an infinite number of pinary digits whereas a finite integer requires only a finite number of them, so that at the level of cognitive representations general coordinate invariance is broken.

Note that the number of elements of the state basis in the factor p^n is p^n whereas x_n ∈ {0,...,p-1} in the factor x_n. Therefore the Hilbert space with dimension p^n > x_n is analogous to the Hilbert space of a large effectively classical system entangled with the microscopic system characterized by x_n. The p-adicity of the Hilbert space in this example is for the purpose of simplicity but raises the question whether state function reduction is directly related to cognition.

4. One can generalize the concept of real numbers, the notions of manifold, matrix group, etc. by replacing points with Hilbert spaces. For instance, the point (x_1,...,x_n) of E^n is replaced with the Cartesian product of the corresponding Hilbert spaces. What is of utmost importance for the idea about a possible connection with the multiverse idea is that also this process can be repeated indefinitely. This process is analogous to a repeated second quantization since intuitively the replacement means replacing a Hilbert space with the Hilbert space of wave functions in the Hilbert space. The finiteness of the dimension and its continuity as a function of the space-time point must mean that there are strong constraints on these wave functions. What does this decomposition to a direct sum mean at the level of states? Does one have super-selection rules stating that quantum interference is possible only inside the direct summands?
5. Could one find a number theoretical counterpart for state function reduction and preparation and unitary time evolution? Could zero energy ontology have a formulation at the level of number theory, as earlier experience with infinite primes suggests? The proposal was that zero energy states correspond to ratios of infinite integers which as real numbers reduce to real unit. Could zero energy states correspond to states in the tensor product of Hilbert spaces whose formal dimensions are inverses of each other so that the total space has dimension 1?
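The pinary expansion ∑_n x_n p^n underlying the direct sum decomposition of item 3 can be sketched concretely. The helper names below are hypothetical; the "formal dimension" of the direct sum ⊕_n x_n⊗p^n is simply the original number reassembled from its digits:

```python
def pinary_digits(x, p):
    """Digits x_n of the expansion x = sum_n x_n * p^n, x_n in {0,...,p-1}."""
    digits = []
    while x > 0:
        digits.append(x % p)
        x //= p
    return digits or [0]

def formal_dimension(digits, p):
    """Reassemble the 'formal dimension' of the direct sum (+)_n x_n (x) p^n:
    each summand x_n (x) p^n contributes x_n * p^n basis states."""
    return sum(d * p**n for n, d in enumerate(digits))

digits = pinary_digits(10, 3)   # 10 = 1 + 0*3 + 1*9  ->  [1, 0, 1]
print(digits, formal_dimension(digits, 3))
```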

Unitary process and state function reduction in ZEO

The minimal view about unitary process and state function reduction is provided by ZEO.

1. Zero energy states correspond to a superposition of pairs of positive and negative energy states. The M-matrix defining the entanglement coefficients is the product of a Hermitian square root of a density matrix and a unitary S-matrix, and the various M-matrices are orthogonal and form rows of a unitary U-matrix. Quantum theory is a square root of thermodynamics. This is true even at the single particle level. The square root of the density matrix could also be interpreted in terms of finite measurement resolution.
2. It is natural to assume that zero energy states have well-defined single particle quantum numbers at either end of the CD, as in a particle physics experiment. This means that state preparation has taken place and the prepared end represents the initial state of a physical event. Since either end of the CD can be in question, both arrows of geometric time, identifiable as the Minkowski time defined by the tips of the CD, are possible.
3. The simplest identification of the U-matrix is as the unitary matrix relating to each other the state bases for which the M-matrices correspond to prepared states at the two opposite ends of the CD. Let us assume that the preparation has taken place at the "lower" end, the initial state. State function reduction for the final state means that one measures the single particle observables at the "upper" end of the CD. This necessarily induces the loss of this property at the "lower" end. The next preparation in turn induces localization at the "lower" end. One has a kind of time flip-flop, and the breaking of time reversal invariance would be absolutely essential for the non-triviality of the process.
The basic idea of Quantum Mathematics is that the M-matrix is characterized by Feynman diagrams representing sequences of arithmetic operations and their co-arithmetic counterparts. The latter ones give rise to a superposition of pairs of direct summands (factors of a tensor product) yielding the same direct sum (tensor product). This vision would reduce quantum physics to generalized number theory. The Universe would be calculating, and the consciousness of the mathematician would be in the quantum jumps performing the state function reductions to which preparations reduce.

Note that direct sum, tensor product, and the counterpart of second quantization for Hilbert spaces in the proposed sense would be the quantum mathematics counterparts of the set theoretic operations: union, Cartesian product, and the formation of the power set.

ZEO, state function reduction, unitary process, and quantum mathematics

State function reduction acts in a tensor product of Hilbert spaces. In the p-adic context to be discussed in the following, x_n⊗p^n is the natural candidate for this tensor product. One can assign a density matrix to a given entangled state of this system and calculate the Shannon entropy. One can also assign to it a number theoretical entropy if the entanglement coefficients are rationals or even algebraic numbers, and this entropy can be negative. One can apply Negentropy Maximization Principle to identify the preferred state basis as eigenstates of the density matrix. For negentropic entanglement the quantum jump does not destroy the entanglement.

Could the state function reduction take place separately for each subspace x_n⊗p^n in the direct sum ⊕_n x_n⊗p^n so that one would have quantum parallel state function reductions? This is an old proposal motivated by the many-sheeted space-time. The direct summands in this case would correspond to the contributions to the states localizable at various space-time sheets assigned to different powers of p defining a scale hierarchy. The powers p^n would be associated with zero modes by the previous argument so that the assumption about independent reduction would reflect the super-selection rule for zero modes. Also different values of the p-adic prime are present, and a tensor product between them is possible if the entanglement coefficients are rationals or even algebraics. In the formulation using adeles the needed generalization could be formulated in a straightforward manner.

How can one select the entangled states in the summands x_n⊗p^n? Is there some unique choice? How do the unitary process and state function reduction relate to this choice? Could the dynamics of Quantum Mathematics be a structural analog for a sequence of state function reductions taking place at the opposite ends of the CD with a unitary matrix U relating the state bases for which single particle states have well-defined quantum numbers either at the upper or the lower end of the CD? Could the unitary process and state function reduction be identified solely from the requirement that zero energy states correspond to tensor products of Hilbert spaces which correspond to inverses of each other as numbers? Could the extension of arithmetics to include co-arithmetics make the dynamics in question unique?

What could multiverse branching mean?

Could QM allow one to identify a mathematical counterpart for the branching of quantum states to quantum states corresponding to a preferred basis? Could one imagine that a superposition of states ∑ c_n Ψ_n in a direct summand x_n⊗p^n is replaced by a state for which the Ψ_n belong to different direct summands, and that the branching to non-interfering sub-universes is induced by the proposed super-selection rule or perhaps even induces state function reduction? These two options seem to be equivalent experimentally. Could this decoherence process perhaps correspond to the replacement of the original Hilbert space characterized by the number x with a new Hilbert space corresponding to a number y, inducing the splitting of x_n⊗p^n? Could the interpretation of the finite integers x_n and p^n as p-adic numbers for a prime p_1 ≠ p induce the decoherence?

This kind of situation is encountered also in symmetry breaking. The irreducible representation of a symmetry group reduces to a direct sum of representations of a sub-group and one has in practice a super-selection rule: one does not talk about superpositions of the photon and Z^0. In quantum measurement the classical external fields indeed induce symmetry breaking by giving different energies to the components of the state. In the case of the factor x_n⊗p^n the entanglement coefficients define the density matrix characterizing the preferred state basis. It would seem that the process of branching decomposes this state space to a direct sum of 1-D state spaces associated with the eigenstates of the density matrix. In symmetry breaking the superposition principle holds true, and instead of a quantum superposition for different orientations of the "Higgs field" or magnetic field a localization selecting a single orientation of the "Higgs field" takes place.

Could state function reduction be an analogous process? Could the non-quantum fluctuating zero modes of the WCW metric appear as analogs of "Higgs fields"? In this picture a quantum superposition of states with different values of zero modes would not be possible, and state function reduction might take place only for entanglement between zero modes and non-zero modes.

The replacement of a point of Hilbert space with Hilbert space as a second quantization

The fractal character of the Quantum Mathematics is what makes it a good candidate for understanding the self-referentiality of consciousness. The replacement of the Hilbert space with the direct sum of Hilbert spaces defined by its points would be the basic step and could be repeated endlessly corresponding to a hierarchy of statements about statements or hierarchy of nth order logics. The construction of infinite primes leads to a similar structure.

What about the step leading to a deeper level in the hierarchy and involving the replacement of each point of the Hilbert space with the Hilbert space characterizing it number theoretically? What could it correspond to at the level of states?

1. Suppose that state function reduction selects one point for each Hilbert space x_n⊗p^n. The key step is to replace this direct sum of points of these Hilbert spaces with the direct sum of the Hilbert spaces defined by the points of these Hilbert spaces. After this one would select a point from this very big Hilbert space. Could this point be in some sense the image of the Hilbert space state at the previous level? Should one imbed the Hilbert space x_n⊗p^n isometrically to the Hilbert space defined by its preferred state so that one would have a realization of holography: the part would represent the whole at the new level. It seems that there is a canonical manner to achieve this. The interpretation as the analog of second quantization suggests the identification of the imbedding map as the identification of the many particle states of the previous level as single particle states of the new level.
2. Could topological condensation be the counterpart of this process in the many-sheeted space-time of TGD? The states of the previous level would be assigned to the space-time sheets topologically condensed at a larger space-time sheet representing the new level, and the many-particle states of the previous level would be the elementary particles of the new level.
3. If this vision is correct, the second quantization performed by theoreticians would not be a mere theoretical operation but a fundamental physical process necessary for cognition! The above proposed unitary imbedding would imbed the states of the previous level as single particle states to the new level. It would seem that the process of second quantization, which is indeed very much like self-reference, is completely independent of state function reduction and the unitary process. This picture would conform with the fact that in the TGD Universe the theory about the Universe is the Universe, and the mathematician is in the quantum jumps between different solutions of this theory.
Returning to the motivating question: it seems that the endless branching of the states in the multiverse interpretation cannot correspond to a repeated second quantization but could have an interpretation as decoherence identifiable as delocalization in zero modes. If state function reduction is allowed, it corresponds to a localization in zero modes analogous to the Higgs mechanism. The Quantum Mathematics realization for a repeated second quantization would represent a genuinely new kind of process which does not reduce to anything already known.

For background see the chapter Quantum Adeles.

### Riemann Hypothesis and Zero Energy Ontology

Ulla mentioned in the comment section of the earlier posting an interview of Matthew Watkins. The pages of Matthew Watkins about all imaginable topics related to Riemann zeta are excellent and I can only warmly recommend them. I was actually in contact with him some years ago and there might also be a TGD inspired proposal for a strategy for proving Riemann hypothesis on the pages of Matthew Watkins.

The interview was very inspiring reading. MW has a very profound vision about what mathematics is and he is able to express it in an understandable manner. MW tells also about the recent work of Connes applying p-adics and adeles(!) to the problem. I would guess that these are old ideas and I have myself speculated about the connection with p-adics a long time ago.

MW tells in the interview about the thermodynamical interpretation of the zeta function. Zeta reduces to a product ζ(s) = ∏_p Z_p(s) of partition functions Z_p(s) = 1/(1 - p^(-s)) over particles labelled by primes p. This relates very closely also to infinite primes and one can talk about a Riemann gas with particle momenta/energies given by log(p). s is in general a complex number and for the zeros of zeta one has s = 1/2 + iy. The imaginary part y is a non-rational number. At s=1 zeta diverges and for Re(s) ≤ 1 the definition of zeta as a product fails. A physicist would interpret this as a phase transition taking place at the critical line Re(s)=1 so that one cannot anymore talk about a Riemann gas. Should one talk about a Riemann liquid? Or - anticipating what follows - about a quantum liquid? What could the vanishing of zeta mean physically? Certainly the thermodynamical interpretation as a sum of something interpretable as thermodynamical probabilities apart from normalization fails.
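The Euler product can be checked numerically for real s > 1, where both the sum and the product converge. Each factor Z_p(s) = 1/(1 - p^(-s)) = ∑_n p^(-ns) is the partition function of an ideal boson with single particle energy log(p) at "temperature" 1/s. A sketch (function names are mine):

```python
def primes_up_to(N):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (N + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(N**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def zeta_as_sum(s, n_terms=100000):
    """Direct Dirichlet sum, valid for real s > 1."""
    return sum(n**(-s) for n in range(1, n_terms + 1))

def zeta_as_riemann_gas(s, p_max=100000):
    """Euler product: one bosonic partition function Z_p(s) = 1/(1 - p^-s)
    per 'particle species' labelled by the prime p, energy log(p)."""
    prod = 1.0
    for p in primes_up_to(p_max):
        prod *= 1.0 / (1.0 - p**(-s))
    return prod

print(zeta_as_sum(2.0), zeta_as_riemann_gas(2.0))  # both close to pi^2/6
```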

The basic problem with this interpretation is that it is only formal since the temperature parameter is complex. How could one overcome this problem?

1. One could interpret zeta function in the framework of TGD - or rather in zero energy ontology (ZEO) - in terms of square root of thermodynamics! This would make possible the complex analog of temperature. Thermodynamical probabilities would be replaced with probability amplitudes.

2. Thermodynamical probabilities would be replaced with complex probability amplitudes, and Riemann zeta would be the analog of vacuum functional of TGD which is product of exponent of Kähler function - Kähler action for Euclidian regions of space-time surface - and exponent of imaginary Kähler action coming from Minkowskian regions of space-time surface and defining Morse function.

In the QFT picture, taking into account only the Minkowskian regions of space-time, one would have only the exponent of this Morse function: the problem is that the path integral does not exist mathematically. In the thermodynamics picture, taking into account only the Euclidian regions of space-time, one would have only the exponent of the Kähler function and would lose the interference effects fundamental for QFT type systems.

In quantum TGD both the Kähler function and the Morse function are present. With rather general assumptions the imaginary part and the real part of the exponent of the vacuum functional are proportional to each other and to a sum over the values of the Chern-Simons action for 3-D wormhole throats and for space-like 3-surfaces at the ends of the CD. This is non-trivial.

3. Zeros of zeta would in this case correspond to a situation in which the integral of the vacuum functional over the "world of classical worlds" (WCW) vanishes. The pole of ζ at s=1 would correspond to the divergence of the integral for the modulus squared of the Kähler function.

What could the vanishing of zeta mean if one accepts the interpretation of quantum theory as a square root of thermodynamics?

1. What could the infinite value of zeta at s=1 mean? The interpretation in terms of a square root of thermodynamics implies the following. In zero energy ontology the decomposition of the zeta function to ∏_p Z_p(s) corresponds to a product of single particle partition functions for which one can assign probabilities p^(-s)/Z_p(s) to single particle states. This does not make sense physically for complex values of s.

2. In ZEO one can however assume that the complex numbers p^(-ns) define the entanglement coefficients for positive and negative energy states with energies n log(p) and -n log(p): n bosons with energy log(p) just as for black body radiation. The sum of the amplitudes over all combinations of these states with some number of bosons labelled by primes p gives Riemann zeta, which vanishes at the critical line if RH holds.

3. One can also look at the values of the thermodynamical probabilities given by |p^(-ns)|^2 = p^(-n) at the critical line. The sum over these gives for a given p the factor p/(p-1), and the product of all these factors gives ζ(1) = ∞. The thermodynamical partition function diverges. The physical interpretation is in terms of Bose-Einstein condensation.

4. The vanishing of the trace of the matrix defined by the amplitudes coding for the zeros of zeta is physically analogous to the statement ∫ Ψ dV = 0, which is indeed true for many systems such as the hydrogen atom. But what does this mean? Does it say that the zero energy state is orthogonal to the vacuum state defined by the unit matrix between positive and negative energy states? In any case, the zeros and the pole of zeta would be aspects of one and the same thing in this interpretation. This is something genuinely new and an encouraging sign. Note that in the TGD based proposal for a strategy for proving Riemann hypothesis, a similar condition states that a coherent state is orthogonal to a "false" tachyonic vacuum.

5. RH would state in this framework that all zeros of ζ correspond to zero energy states for which the thermodynamical partition function diverges. Another manner to say this is that the system is critical. (Maximal) Quantum Criticality is indeed the key postulate about the TGD Universe and fixes the Kähler coupling strength characterizing the theory uniquely (plus possible other free parameters). Quantum Criticality guarantees that the Universe is maximally complex. Physics as generalized number theory would suggest that also number theory is quantum critical! When the sum over numbers proportional to probabilities diverges, the probabilities differ considerably from zero for an infinite number of states. At criticality the presence of fluctuations in all scales, implying fractality, indeed implies this. A more precise interpretation is in terms of Bose-Einstein condensation.

6. The postulate that all zero energy states for the Riemann system are zeros of zeta and critical in the sense of being non-normalizable, combined with the fact that s=1 is the only pole of zeta, implies that all zeros correspond to Re(s)=1/2 so that RH follows from purely physical assumptions. The behavior at s=1 would be an essential element of the argument. Note that in ZEO the coherent state property is in accordance with energy conservation. In the case of coherent states of Cooper pairs the same applies to fermion number conservation.

With this interpretation the condition would state orthogonality with respect to the coherent zero energy state characterized by s=0, which has finite norm and does not represent Bose-Einstein condensation. This could give a connection with the proposal for the strategy for proving Riemann Hypothesis by replacing eigenstates of energy with coherent states and the two approaches could be unified. Note that in this approach conformal invariance for the spectrum of zeros of zeta is the axiom yielding RH and could be seen as counterpart for the fundamental role of conformal invariance in modern physics and indeed very natural in the vision about physics as generalized number theory.
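The arithmetic behind items 3-6 can be verified directly: on the critical line s = 1/2 + iy the moduli squared |p^(-ns)|^2 reduce to p^(-n) independently of y, the geometric sum gives p/(p-1), and the product of these factors over primes grows without bound, mirroring ζ(1) = ∞. A sketch (function names are mine):

```python
def critical_line_probabilities(p, y, n_max=200):
    """Moduli squared |p^(-n*s)|^2 on the critical line s = 1/2 + i*y.
    Independently of y these reduce to p^(-n)."""
    s = complex(0.5, y)
    return [abs(p**(-n * s))**2 for n in range(n_max)]

p, y = 3, 14.134725  # y: imaginary part of the first nontrivial zeta zero
total = sum(critical_line_probabilities(p, y))
print(total, p / (p - 1))  # geometric sum equals p/(p-1) = 1.5 for p = 3

# The product of these factors over primes diverges like zeta(1):
partial = 1.0
for q in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29):
    partial *= q / (q - 1)
print(partial)  # partial products keep growing as more primes are included
```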

For background see the chapter Riemann Hypothesis and Physics.

### p-Adic homology and finite measurement resolution

Discretization in dimension D in terms of a pinary cutoff means a division of the manifold into cube-like objects. What suggests itself is a homology theory defined by the measurement resolution and by the fluxes assigned to the induced Kähler form.

1. One can introduce the decomposition of an n-D sub-manifold of the imbedding space to n-cubes by (n-1)-planes for which one of the coordinates equals its pinary cutoff. The construction works in both the real and the p-adic context. The hyperplanes in turn can be decomposed to (n-1)-cubes by (n-2)-planes assuming that an additional coordinate equals its pinary cutoff. One can continue this decomposition down to points, namely those points for which all coordinates are their own pinary cutoffs. In the case of partonic 2-surfaces these points define in a natural manner the ends of braid strands. The braid strands themselves could correspond to the curves for which two coordinates of a light-like 3-surface are their own pinary cutoffs.

2. The analogy with a homology theory defined by the decomposition of the space-time surface to cells of various dimensions is suggestive. In the p-adic context the identification of the boundaries of the regions corresponding to given pinary digits is not possible in a purely topological sense since p-adic numbers do not allow well-ordering. One could however identify the boundaries as sub-manifolds for which some number of coordinates are equal to their pinary cutoffs, or as inverse images of the real boundaries. This might allow one to formulate homology theory in the p-adic context.

3. The construction is especially interesting for the partonic 2-surfaces. There is a hierarchy in the sense that a square-like region with given first values of pinary digits decomposes to p square-like regions labelled by the values 0,...,p-1 of the next pinary digit. The lines defining the boundaries of the 2-D square-like regions with fixed pinary digits in a given resolution correspond to the situation in which either coordinate equals its pinary cutoff. These lines naturally define the edges of a graph having as its nodes the points for which the pinary cutoff of both coordinates equals the actual point.

4. I have proposed earlier what I have called symplectic QFT involving a triangulation of the partonic 2-surface. The fluxes of the induced Kähler form over the triangles of the triangulation and the areas of these triangles define symplectic invariants, which are zero modes in the sense that they do not contribute to the line element of WCW although the WCW metric depends on these zero modes as parameters. The physical interpretation is as non-quantum fluctuating classical variables. The triangulation generalizes in an obvious manner to a quadrangulation defined by the pinary digits. This quadrangulation is fixed once the internal coordinates and the measurement accuracy are fixed. If one can identify physically preferred coordinates - say by requiring that the coordinates transform in a simple manner under isometries - the quadrangulation is highly unique.

5. For 3-surfaces one obtains a decomposition to cube-like regions bounded by regions consisting of square-like regions, and the Kähler magnetic fluxes over the squares define symplectic invariants. Also the Kähler Chern-Simons invariant for the 3-cube defines an interesting almost symplectic invariant. A 4-surface decomposes in a similar manner to 4-cube-like regions and now the instanton density for the 4-cube, reducing to a Chern-Simons term at the boundaries of the 4-cube, defines a symplectic invariant. For 4-surfaces the symplectic invariants reduce to Chern-Simons terms over 3-cubes so that in this sense one would have holography. The resulting structure brings to mind lattice gauge theory, and effective 2-dimensionality suggests that partonic 2-surfaces are enough.
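The decomposition by pinary cutoffs in items 1 and 3 can be made concrete in the real case. In the sketch below (function names are mine), a point of a 2-surface is classified by how many of its coordinates equal their own pinary cutoff: zero for the interior of a square-like region, one for an edge, two for a node, i.e. a candidate end of a braid strand:

```python
import math

def pinary_cutoff(x, p, n_digits):
    """Drop the pinary digits of x below p^(-n_digits) (real case):
    floor(x * p^N) / p^N with N = n_digits."""
    scale = p**n_digits
    return math.floor(x * scale) / scale

def cell_codimension(point, p, n_digits):
    """Number of coordinates equal to their own pinary cutoff.
    For a 2-surface: 0 -> interior of a square-like region,
    1 -> edge of the grid, 2 -> node (candidate braid strand end)."""
    return sum(1 for x in point if x == pinary_cutoff(x, p, n_digits))

# p = 2, resolution 2^-3: grid lines at multiples of 1/8
print(cell_codimension((0.3, 0.7), 2, 3))    # generic point: interior
print(cell_codimension((0.3, 0.5), 2, 3))    # one coordinate on the grid: edge
print(cell_codimension((0.375, 0.5), 2, 3))  # both on the grid: node
```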

The simplest realization of this homology theory in the p-adic context could be induced by canonical identification from real homology. The homology of a p-adic object would be the homology of its canonical image.

1. The ordering of points is essential in homology theory. In the p-adic context the canonical identification x = ∑ x_n p^n → ∑ x_n p^(-n) mapping p-adics to reals induces this ordering, and also the boundary operation for p-adic homology can be induced. The points of a p-adic space would be represented by n-tuples of sequences of pinary digits for n coordinates. p-Adic numbers decompose to disconnected sets characterized by the norm p^(-n) of the points in a given set. Canonical identification allows one to glue these sets together by inducing the real topology. The points p^n and (p-1)(1+p+p^2+...)p^(n+1), having p-adic norms p^(-n) and p^(-n-1), are mapped to the same real point p^(-n) under canonical identification, and therefore the points p^n and (p-1)(1+p+p^2+...)p^(n+1) can be said to define the endpoints of a continuous interval in the induced topology although they have different p-adic norms. Canonical identification induces real homology to the p-adic realm. This suggests that one should include canonical identification in the boundary operation so that the boundary operation would be a map from p-adicity to reality.

2. The interior points of p-adic simplices would be the p-adic points not equal to their pinary cutoffs defined by the dropping of the pinary digits corresponding to p^n, n&gt;N. At the boundaries of the simplices at least one coordinate would have vanishing pinary digits for p^n, n&gt;N. The analogs of (n-1)-simplices would be the p-adic point sets for which one of the coordinates has vanishing pinary digits for p^n, n&gt;N. (n-k)-simplices would correspond to point sets for which k coordinates satisfy this condition. The formal sums and differences of these sets are assumed to make sense and there is a natural grading.

3. Could one identify the end points of braid strands in some natural manner in this cohomology? Points with n ≤ N pinary digits are closed elements of the cohomology and homologically equivalent with each other if the canonical image of the p-adic geometric object is connected, so that there is no manner to identify the ends of braid strands as some special points unless the zeroth homology is non-trivial. It has been proposed earlier that the strand ends correspond to singular points for a covering of a sphere or a more general Riemann surface. At a singular point the branches of the covering would coincide.

The obvious guess is that the singular points are associated with the covering characterized by the value of Planck constant. As a matter of fact, the original assumption was that all points of the partonic 2-surface are singular in this sense. It would however be enough to make this assumption for the ends of braid strands only. The orbits of braid strands and the string world sheets having braid strands as their boundaries would be the singular loci of the covering.
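The canonical identification itself can be sketched for p-adic integers. The example also illustrates the gluing described in item 1: the point p^0 = 1 and the truncations of (p-1)(1+p+p^2+...)p have different p-adic norms, yet their real images converge to the same point:

```python
def canonical_identification(x, p):
    """Map the non-negative integer x = sum_n x_n p^n, viewed p-adically,
    to the real number sum_n x_n p^(-n) (canonical identification)."""
    image, weight = 0.0, 1.0
    while x > 0:
        image += (x % p) * weight
        weight /= p
        x //= p
    return image

# The point p^0 = 1 and the truncations of (p-1)(1 + p + p^2 + ...) * p
# have 2-adic norms 1 and 1/2 respectively, but nearly the same real image:
p = 2
print(canonical_identification(1, p))      # 1.0
for k in (2, 4, 8):
    x = (p - 1) * sum(p**j for j in range(k)) * p
    print(canonical_identification(x, p))  # approaches 1.0 from below
```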

For background see the chapter Quantum Adeles.

### Hilbert p-adics, hierarchy of Planck constants, and finite measurement resolution

The hierarchy of Planck constants assigns N-dimensional Hilbert spaces to the points of N-fold coverings of the imbedding space. The natural identification of these Hilbert spaces would be as Hilbert spaces assignable to space-time points or to points of partonic 2-surfaces. There is however an objection against this identification.

1. The dimension of the local covering of the imbedding space for the hierarchy of Planck constants is constant in a given region of the space-time surface. In contrast, the dimensions of the Hilbert spaces assignable to the coordinate values of a given point of the imbedding space are defined by the points themselves: the values of the 8 coordinates define the algebraic Hilbert space dimensions for the factors of an 8-fold Cartesian product, and these can be integers, rationals, algebraic numbers or even transcendentals, so that they vary as one moves along the space-time surface.

2. This dimension can correspond to the locally constant dimension of the hierarchy of Planck constants only if one brings in finite measurement resolution as a pinary cutoff on the pinary expansion of the coordinate, so that one obtains an ordinary integer-dimensional Hilbert space. The space-time surface decomposes into regions for which the points have the same pinary digits up to p^N in the p-adic case and down to p^-N in the real context. The points equal to their own pinary cutoffs would naturally define the ends of braid strands at partonic 2-surfaces at the boundaries of CDs.

3. At the level of quantum states the pinary cutoff means that quantum states have vanishing projections to the direct summands of the Hilbert spaces assigned with the pinary digits of p^n, n>N. For this interpretation the hierarchy of Planck constants would realize physically pinary digit representations for numbers with pinary cutoff and would relate to the physics of cognition.
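The pinary cutoff appearing in points 2 and 3 above can be sketched for p-adic integers. This is a toy illustration (my own function names); it also shows the criterion for a point to be its own cutoff, which above marks the candidate braid-strand ends.

```python
# Minimal sketch of the pinary cutoff: keep only the digits a_n with n <= N
# in the expansion x = sum a_n p^n; points equal to their own cutoff would
# mark the ends of braid strands in the text's picture.

def pinary_cutoff(n, p, N):
    """Drop all pinary digits of p^k with k > N from a non-negative integer."""
    return n % p**(N + 1)

def is_own_cutoff(n, p, N):
    return pinary_cutoff(n, p, N) == n

p, N = 2, 3
print(pinary_cutoff(0b10110, p, N))  # keeps digits of 2^0..2^3: 0b0110 = 6
print(is_own_cutoff(6, p, N))        # True: a candidate braid-strand end
print(is_own_cutoff(22, p, N))       # False: digits beyond 2^3 survive
```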

One of the basic challenges of quantum TGD is to find an elegant realization for the notion of finite measurement resolution. The notion of resolution involves the observer in an essential manner, and this suggests that cognition is involved. If p-adic physics is indeed the physics of cognition, the natural guess is that p-adic physics should provide the primary realization of this notion.

The simplest realization of finite measurement resolution would be just what one would expect it to be, except that this realization is most natural in the p-adic context. One can however define this notion also in the real context by using canonical identification to map p-adic geometric objects to real ones.

Does discretization define an analog of homology theory?

Discretization in dimension D in terms of pinary cutoff means division of the manifold to cube-like objects. What suggests itself is homology theory defined by the measurement resolution and by the fluxes assigned to the induced Kähler form.

1. One can introduce the decomposition of an n-D sub-manifold of the imbedding space into n-cubes by (n-1)-planes on which one of the coordinates equals its pinary cutoff. The construction works in both the real and the p-adic context. The hyperplanes in turn can be decomposed into (n-1)-cubes by (n-2)-planes on which an additional coordinate equals its pinary cutoff. One can continue this decomposition until only points remain: those points for which all coordinates are their own pinary cutoffs. In the case of partonic 2-surfaces these points define in a natural manner the ends of braid strands. Braid strands themselves could correspond to the curves on which two coordinates of a light-like 3-surface are their own pinary cutoffs.

2. The analogy with a homology theory defined by the decomposition of the space-time surface into cells of various dimensions is suggestive. In the p-adic context the identification of the boundaries of the regions corresponding to given pinary digits is not possible in a purely topological sense since p-adic numbers do not allow well-ordering. One could however identify the boundaries as sub-manifolds for which some number of coordinates equal their pinary cutoffs, or as inverse images of real boundaries. This might make it possible to formulate homology theory in the p-adic context.

3. The construction is especially interesting for partonic 2-surfaces. There is a hierarchy in the sense that a square-like region with given first values of pinary digits decomposes into p square-like regions labelled by the values 0,...,p-1 of the next pinary digit. The lines defining the boundaries of the 2-D square-like regions with fixed pinary digits in a given resolution correspond to the situation in which either coordinate equals its pinary cutoff. These lines naturally define the edges of a graph having as its nodes the points for which the pinary cutoff of both coordinates equals the actual point.

4. I have proposed earlier what I have called symplectic QFT involving a triangulation of the partonic 2-surface. The fluxes of the induced Kähler form over the triangles of the triangulation and the areas of these triangles define symplectic invariants, which are zero modes in the sense that they do not contribute to the line element of WCW although the WCW metric depends on these zero modes as parameters. The physical interpretation is as non-quantum fluctuating classical variables. The triangulation generalizes in an obvious manner to quadrangulation defined by the pinary digits. This quadrangulation is fixed once internal coordinates and measurement accuracy are fixed. If one can identify physically preferred coordinates - say by requiring that coordinates transform in simple manner under isometries - the quadrangulation is highly unique.

5. For 3-surfaces one obtains a decomposition into cube-like regions bounded by regions consisting of square-like regions, and the Kähler magnetic fluxes over the squares define symplectic invariants. Also the Kähler Chern-Simons invariant for the 3-cube defines an interesting almost symplectic invariant. A 4-surface decomposes in a similar manner into 4-cube-like regions, and now the instanton density for the 4-cube, reducing to Chern-Simons terms at the boundaries of the 4-cube, defines a symplectic invariant. For 4-surfaces symplectic invariants thus reduce to Chern-Simons terms over 3-cubes, so that in this sense one would have holography. The resulting structure brings in mind lattice gauge theory, and effective 2-dimensionality suggests that partonic 2-surfaces are enough.
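The flux assignment of the quadrangulation in points 4 and 5 can be illustrated with a toy discretization. This is my own sketch, not the actual induced Kähler form: a generic 2-form F = f(x,y) dx ∧ dy on the unit square, integrated over the p^N × p^N cells of the pinary-resolution grid by the midpoint rule, giving one flux per cell.

```python
# Hedged toy illustration: one flux number per square-like cell of a
# pinary-resolution grid, approximated by the midpoint rule.

def cell_fluxes(f, p, N):
    """Fluxes over the p^N x p^N cells of the unit square at pinary resolution N."""
    m = p**N
    h = 1.0 / m
    return [[f((i + 0.5) * h, (j + 0.5) * h) * h * h for j in range(m)]
            for i in range(m)]

# Constant 2-form: every cell carries the same flux and the total is f * area.
fluxes = cell_fluxes(lambda x, y: 2.0, p=2, N=3)
total = sum(sum(row) for row in fluxes)
print(total)  # 2.0: flux through the whole unit square
```

Refining N subdivides each cell into p^2 sub-cells, mirroring the hierarchy of square-like regions described above.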

Does the notion of manifold in finite measurement resolution make sense?

A modification of the notion of manifold taking into account finite measurement resolution might be useful for the purposes of TGD.

1. The chart pages of the manifold would be characterized by a finite measurement resolution and effectively reduce to discrete point sets. Discretization using a finite pinary cutoff would be the basic notion. Notions like topology, differential structure, complex structure, and metric should be defined only modulo finite measurement resolution. The precise realization of this notion is not quite obvious.

2. Should one assume a metric and introduce geodesic coordinates as preferred local coordinates in order to achieve general coordinate invariance? The pinary cutoff would be imposed on the geodesic coordinates. Or could one use a subset of geodesic coordinates for δCD×CP2 as preferred coordinates for partonic 2-surfaces? Should one require that isometries leave distances invariant only within the resolution used?

3. A rather natural approach to the notion of manifold is suggested by the p-adic variants of symplectic spaces based on the discretization of angle variables by phases in an algebraic extension of p-adic numbers containing the nth root of unity and its powers. One can also assign a p-adic continuum to each root of unity (see this). This approach is natural for compact symmetric Kähler manifolds such as S2 and CP2. For instance, CP2 allows a coordinatization in terms of two pairs (P_k,Q_k) of Darboux coordinates or in terms of two pairs (ξ_k,ξ_k*), k=1,2, of complex coordinates. The magnitudes of the complex coordinates would be treated in the manner already described and their phases would be described as roots of unity. In the natural quadrangulation defined by the pinary cutoff for |ξ_k| and by the roots of unity assigned with their phases, Kähler fluxes would be well-defined within the measurement resolution. For the light-cone boundary, metrically equivalent with S2, a similar coordinatization using complex coordinates (z,z*) is possible. The light-like radial coordinate r would appear only as a parameter in the induced metric and the pinary cutoff would apply to it.

Hierarchy of finite measurement resolutions and hierarchy of p-adic normal Lie groups

The formulation of quantum TGD is almost completely in terms of various symmetry groups, and it would be highly desirable to formulate the notion of finite measurement resolution in terms of symmetries.

1. In the p-adic context any Lie algebra g with p-adic integers as coefficients has a natural grading based on the p-adic norm of the coefficient, just like p-adic numbers have a grading in terms of their norm. The sub-algebra g_N with the norm of the coefficients not larger than p^-N is an ideal of the algebra since one has [g_M,g_N] ⊂ g_(M+N): this has of course a direct counterpart at the level of p-adic integers. g_N is a normal sub-algebra in the sense that one has [g,g_N] ⊂ g_N. The standard expansion of the adjoint action g g_N g^-1 in terms of exponentials and commutators gives that the p-adic Lie group G_N = exp(tp g_N), where t is a p-adic integer, is a normal subgroup of G = exp(tp g). If this is indeed the case, then also G/G_N is a group, and could perhaps be interpreted as a Lie group of symmetries in finite measurement resolution. G_N in turn would represent the degrees of freedom not visible in the measurement resolution used and would have the role of a gauge group.

2. The notion of finite measurement resolution would have a rather elegant and universal representation in terms of various symmetries such as the isometries of the imbedding space, Kac-Moody symmetries assignable to light-like wormhole throats, symplectic symmetries of δCD×CP2, the non-local Yangian symmetry, and also general coordinate transformations. This representation would have a counterpart in the real context via canonical identification I in the sense that A→B for p-adic geometric objects would correspond to I(A)→I(B) for their images under canonical identification. It is rather remarkable that in the purely real context this kind of hierarchy of symmetries modulo finite measurement resolution does not exist. The interpretation would be that finite measurement resolution relates to cognition and therefore to p-adic physics.

3. The matrix group G contains only elements of the form g = 1+O(p^m), m≥1, and does not therefore involve matrices with elements expressible in terms of roots of unity. These can be included by writing the elements of the p-adic Lie group as products of elements of the above-mentioned G with the elements of a discrete group whose elements are expressible in terms of roots of unity in an algebraic extension of p-adic numbers. For the p-adic prime p, p:th roots of unity are natural and strongly suggested by quantum arithmetics.
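The grading [g_M, g_N] ⊂ g_(M+N) of point 1 can be checked in a toy case. This is a hedged sketch with plain integer matrices standing in for p-adic coefficients: an element of g_N has all entries divisible by p^N (p-adic norm at most p^-N), and the commutator of p^M X with p^N Y is p^(M+N) [X, Y].

```python
# Toy check of the grading: commutators raise the minimal p-adic valuation.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

def min_valuation(A, p):
    """Largest N with all non-zero entries divisible by p^N (grading degree)."""
    entries = [abs(x) for row in A for x in row if x != 0]
    if not entries:
        return float("inf")
    N = 0
    while all(x % p**(N + 1) == 0 for x in entries):
        N += 1
    return N

p, M, N = 3, 1, 2
X = [[0, p**M], [0, 0]]          # element of g_M
Y = [[0, 0], [p**N, 0]]          # element of g_N
C = commutator(X, Y)
print(min_valuation(C, p) >= M + N)  # True: [g_M, g_N] lies in g_(M+N)
```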

For background see the chapter Quantum Adeles.

### Quantum Mathematics

The comment of Pesla to a previous posting touched the self-referentiality of consciousness and inspired a response which in my opinion deserves the status of a posting. The comment summarizes the recent work to which I have associated the phrase "quantum adeles" but to which I would now prefer to assign the phrase "quantum mathematics".

To my view the self-referentiality of consciousness is the real "hard problem". The "hard problem" as it is usually understood is only a problem of the dualistic approach. My hunch is that the understanding of self-referentiality requires completely new mathematics with explicitly built-in self-referentiality. During the last weeks I have been writing and rewriting the chapter about quantum adeles and ended up proposing what this new mathematics might be. The latest draft is here.

1. Replacing numbers with Hilbert spaces, and + and × with direct sum and tensor product

The idea is to start from arithmetics, that is + and × for natural numbers, and generalize it.

1. The key observation is that + and × have direct sum and tensor product for Hilbert spaces as complete analogs, and a natural number n has an interpretation as Hilbert space dimension and can be mapped to an n-dimensional Hilbert space.

So: replace natural numbers n with n-D Hilbert spaces at the first abstraction step. n+m and n×m go to direct sum n⊕m and tensor product n⊗m of Hilbert spaces. You calculate with Hilbert spaces rather than numbers. This induces calculation for Hilbert space states and sum and product are like 3-particle vertices.

2. At the second step construct integers (also negative ones) as pairs (m,n) of Hilbert spaces, identifying (m⊕r,n⊕r) and (m,n). This gives what might be called negative-dimensional Hilbert spaces! Then take these pairs and define rationals as Hilbert space pairs (m,n) with (m,n) equivalent to (k⊗m,k⊗n). This gives rise to what might be called m/n-dimensional Hilbert spaces!

3. At the third step construct Hilbert space variants of algebraic extensions of rationals: a Hilbert space with dimension sqrt(2), say. This is a really nice trick. After that one can continue with p-adic number fields and even reals: one can indeed understand even what a π-dimensional Hilbert space could be!

The essential element in this is that the direct sum decompositions and tensor products would have genuine meaning: the infinite-D Hilbert spaces associated with transcendentals would have different decompositions and would not be equivalent. Also in quantum physics decompositions to tensor products and direct sums (say representations of a symmetry group) have physical meaning: the abstract Hilbert space of infinite dimension is too rough a concept.

4. Do the same for complex numbers, quaternions, and octonions, the imbedding space M4×CP2, etc. The objection is that the construction is not general coordinate invariant. In coordinates in which a point corresponds to integer-valued coordinates one has a finite-D Hilbert space, and in coordinates in which the coordinates of the point correspond to transcendentals one has an infinite-D Hilbert space. This makes sense only if one interprets the situation in terms of cognitive representations of points. π is very difficult to represent cognitively since it has an infinite number of digits for which one cannot give a formula. "2" in turn is very simple to represent. This suggests an interpretation in terms of self-referentiality. The two worlds with different coordinatizations are not equivalent since they correspond to different cognitive contents.
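Steps 1 and 2 above can be sketched concretely. This is a toy representation of my own (the labels and function names are not from the text): a natural number n becomes a list of n basis labels, ⊕ concatenates bases so dimensions add, ⊗ pairs them so dimensions multiply, and the integer and rational pairs are compared with the usual cross-criteria.

```python
# Minimal sketch: numbers as Hilbert space dimensions, with + and × realized
# as direct sum and tensor product, and integers/rationals as pairs of spaces.

def space(n):
    """The 'number' n as an n-dimensional Hilbert space (basis labels only)."""
    return [f"e{i}" for i in range(n)]

def direct_sum(A, B):          # n ⊕ m: dimensions add
    return [("L", a) for a in A] + [("R", b) for b in B]

def tensor(A, B):              # n ⊗ m: dimensions multiply
    return [(a, b) for a in A for b in B]

# Pairs (m, n) with (m ⊕ r, n ⊕ r) ~ (m, n): "negative-dimensional" spaces.
def int_pair_equivalent(x, y):
    (m1, n1), (m2, n2) = x, y
    return m1 + n2 == m2 + n1

# Pairs (m, n) with (k ⊗ m, k ⊗ n) ~ (m, n): "m/n-dimensional" spaces.
def rat_pair_equivalent(x, y):
    (m1, n1), (m2, n2) = x, y
    return m1 * n2 == m2 * n1

H2, H3 = space(2), space(3)
print(len(direct_sum(H2, H3)))              # 5 = 2 + 3
print(len(tensor(H2, H3)))                  # 6 = 2 × 3
print(int_pair_equivalent((2, 5), (4, 7)))  # True: both mean dimension -3
print(rat_pair_equivalent((2, 3), (6, 9)))  # True: both mean dimension 2/3
```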

Replace also the coordinates of points of Hilbert spaces with Hilbert spaces again and again!

The second key observation is that one can do all this again at a new level. Replace the numbers defining the vectors of the Hilbert spaces (number sequences) assigned to numbers with Hilbert spaces! Continue ad infinitum by replacing points with Hilbert spaces again and again.

You get a sequence of abstractions, which would be analogous to a hierarchy of n:th order logics. At the lowest level would be just predicate calculus: statements like 4=2^2. At the second level abstractions like y=x^2. At the next level collections of algebraic equations, etc.

Connection with infinite primes and endless second quantization

This construction is structurally very similar to - if not equivalent with - the construction of infinite primes, which corresponds to repeated second quantization in quantum physics. There is also a close relationship to - maybe even equivalence with - what I have called algebraic holography or number-theoretic Brahman=Atman identity. Numbers have an infinitely complex anatomy not visible to the physicist but necessary for understanding the self-referentiality of consciousness, and this anatomy allows mathematical objects to be holograms coding for mathematics. Hilbert spaces would be the DNA of mathematics from which all mathematical structures would be built!

Generalized Feynman diagrams as mathematical formulas?

I did not mention that one can assign to direct sum and tensor product their co-operations, and that sequences of mathematical operations are very much like generalized Feynman diagrams. The co-product, for instance, would assign to an integer m all its factorizations into a product of two integers, with some amplitude for each factorization. Similarly for the co-sum. Operation and co-operation would together give meaning to a 3-particle vertex. The amplitudes for the different factorizations must satisfy consistency conditions: associativity and distributivity might give constraints on the couplings to different channels, as a particle physicist might express it.
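The co-operations just described can be enumerated in a toy form. This is a hedged sketch (the uniform amplitude is a placeholder of my own; the text leaves the amplitudes to be fixed by the consistency conditions): the co-product of × lists all factorizations of m into two factors, the co-sum all decompositions into two summands.

```python
# Hedged sketch: co-product of × and co-sum of + as channel enumerations.

def coproduct_times(m):
    """All ordered factorizations m = a * b, with a uniform toy amplitude."""
    pairs = [(a, m // a) for a in range(1, m + 1) if m % a == 0]
    amp = 1 / len(pairs) ** 0.5          # placeholder normalization only
    return [(a, b, amp) for a, b in pairs]

def cosum_plus(m):
    """All decompositions m = a + b with non-negative a, b."""
    return [(a, m - a) for a in range(m + 1)]

print(coproduct_times(12))   # 6 channels: (1,12), (2,6), (3,4), (4,3), (6,2), (12,1)
print(len(cosum_plus(12)))   # 13 channels for the co-sum
```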

The proposal is that quantum TGD is indeed quantum arithmetics with product and sum and their co-operations. Perhaps even something more general, since also quantum logic and quantum set theory could be included! Generalized Feynman diagrams would correspond to formulas and sequences of mathematical operations, with the stringy 3-vertex as a fusion of 3-surfaces corresponding to ⊕, and the Feynmannian 3-vertex as a gluing of 3-surfaces along their ends, which are partonic 2-surfaces, corresponding to ⊗! One implication is that all generalized Feynman diagrams would reduce to a canonical form without loops, and incoming/outgoing legs could be permuted. This is actually a generalization of the old-fashioned string model duality symmetry that I proposed years ago but gave up as too "romantic": see this.

For details see the new chapter Quantum Adeles.

I have been working during the last weeks with quantum adeles. This has involved several wrong tracks, and about five days ago a catastrophe splitting the chapter "Quantum Adeles" into two pieces entitled "Quantum Adeles" and "About Absolute Galois Group" took place; it simplified dramatically the view about what adeles are and led to the notion of quantum mathematics. At least now the situation seems to have settled down and I see no signs of possible new catastrophes. I glue the abstract of the re-incarnated "Quantum Adeles" below.

Quantum arithmetics provides a possible resolution of a long-lasting challenge: finding a mathematical justification for the canonical identification mapping p-adics to reals, which plays a key role in TGD - in particular in p-adic mass calculations. Ordinary p-adic numbers have pinary expansions ∑ a_n p^n with coefficients satisfying a_n<p. One could map this expansion to its quantum counterpart by replacing the coefficients a_n with their quantum counterparts and mapping the expansion to a real number by the canonical identification p→1/p. This definition might be criticized as being essentially equivalent with ordinary p-adic numbers, since one can argue that the map of the coefficients a_n to their quantum counterparts takes place only in the canonical identification map to reals.

One could however modify this recipe. Represent the integer n as a product of primes l and allow for l all expansions for which the coefficients a_n are products of primes p_1<p, but give up the condition a_n<p. This would give a 1-to-many correspondence between ordinary p-adic numbers and their quantum counterparts.

It took time to realize that the l<p condition might be necessary, in which case the quantization in this sense - if present at all - could be associated with the canonical identification map to reals. It would correspond only to the process taking into account finite measurement resolution rather than a replacement of the p-adic number field with something new, hopefully a field. At this step one might perhaps allow l>p so that one would obtain several real images under canonical identification.

This did not however mean giving up the idea of generalizing the number concept. One can replace the integer n with an n-dimensional Hilbert space, the sum + and the product × with the direct sum ⊕ and the tensor product ⊗, and introduce their co-operations, the definition of which is highly non-trivial.

This procedure yields also Hilbert space variants of rationals, algebraic numbers, p-adic number fields, and even complex, quaternionic and octonionic algebraics. Also adeles can be replaced with their Hilbert space counterparts. Even more, one can replace the points of Hilbert spaces with Hilbert spaces and repeat this process, which is very similar to the construction of infinite primes having interpretation in terms of repeated second quantization. This process could be the counterpart for construction of nth order logics and one might speak of Hilbert or quantum mathematics. The construction would also generalize the notion of algebraic holography and provide self-referential cognitive representation of mathematics.

This vision emerged from the connections with generalized Feynman diagrams, braids, and the hierarchy of Planck constants realized in terms of coverings of the imbedding space. The Hilbert space generalization of the number concept seems to be extremely well suited for the purposes of TGD. For instance, generalized Feynman diagrams could be identifiable as arithmetic Feynman diagrams describing sequences of arithmetic operations and their co-operations. One could interpret ×_q and +_q and their co-algebra operations as 3-vertices for number-theoretic Feynman diagrams describing algebraic identities X=Y having a natural interpretation in zero energy ontology. The two vertices have direct counterparts as the two kinds of basic topological vertices in quantum TGD (stringy vertices and vertices of Feynman diagrams). The definition of the co-operations would characterize quantum dynamics. Physical states would correspond to the Hilbert space states assignable to numbers. One prediction is that all loops can be eliminated from generalized Feynman diagrams and that the diagrams are in a projective sense invariant under permutations of incoming (outgoing) legs.

I glue below also the abstract of the second chapter "About Absolute Galois Group" which came out of the catastrophe. The reason for the split was that the question whether the Absolute Galois Group might be isomorphic with the analog of the Galois group assigned to quantum p-adics ceased to make sense.

The Absolute Galois Group (AGG), defined as the Galois group of algebraic numbers regarded as an extension of rationals, is a very difficult concept to define. The goal of the classical Langlands program is to understand AGG through its representations. Invertible adeles - ideles - define GL_1, which can be shown to be isomorphic with the Galois group of the maximal Abelian extension of rationals (MAGG), and the Langlands conjecture is that the representations of algebraic groups with matrix elements replaced with adeles provide information about AGG and algebraic geometry.

I have asked already earlier whether AGG could act as symmetries of quantum TGD. The basic idea was that AGG could be identified as a permutation group for a braid having an infinite number of strands. The notion of quantum adele leads to the interpretation of the analog of the Galois group for quantum adeles in terms of permutation groups assignable to finite l-braids. One can also assign braid structures to infinite primes, and Galois groups have a lift to braid groups (see this).

Objects known as dessins d'enfant provide a geometric representation of AGG in terms of its action on algebraic Riemann surfaces allowing interpretation also as algebraic surfaces in finite fields. This representation would make sense for algebraic partonic 2-surfaces, could be important in the intersection of the real and p-adic worlds assigned with living matter in TGD inspired quantum biology, and would make it possible to regard the quantum states of living matter as representations of AGG. Adeles would make these representations very concrete by bringing in cognition represented in terms of p-adics, and there is also a generalization to Hilbert adeles.

For details see the new chapters Quantum Adeles and About Absolute Galois Group.

1. The work with quantum p-adics leads to the notion of arithmetic Feynman diagrams with +_q and ×_q representing the vertices of the diagrams and having an interpretation in terms of direct sum and tensor product. These vertices correspond to the TGD counterparts of the stringy 3-vertex and the Feynman 3-vertex. If generalized Feynman diagrams satisfy the rules of quantum arithmetics, all loops can be eliminated by moves representing the basic rules of arithmetics, and the diagrams are invariant under permutations of outgoing resp. incoming legs; in the canonical representation of the generalized Feynman diagram the incoming legs involve only vertices and the outgoing legs only co-vertices. Modifications would be due to braiding, meaning that the exchange of particles is not a mere permutation represented trivially. These symmetries are consistent with the prediction of zero energy ontology that virtual particles are pairs of on-mass-shell massless particles. The kinematic mass shell constraints indeed imply an enormous reduction in the number of allowed diagrams. This also means a far-reaching generalization of the duality symmetry of the old-fashioned hadronic string model. I proposed this idea years ago but gave it up as too "romantic".

2. A beautiful connection with infinite primes emerges: p-adic primes characterize collections of collections ... of quantum rationals which describe quantum dimensions of pairs of Hilbert spaces assignable to time-like and space-like braids ending at partonic 2-surfaces.

3. The interpretation of the decomposition of quantum p-adic integers into quantum p-adic prime factors is in terms of a tensor product decomposition into quantum Hilbert spaces with quantum prime dimensions l_q, and it can be related to the singular covering spaces of the imbedding space allowing a description of the many-valuedness of the normal derivatives of imbedding space coordinates at the space-like ends of space-time sheets at the boundaries of CD and at light-like wormhole throats. The further direct sum decompositions corresponding to different quantum p-adic primes, assignable to l>p and represented by the various quantum primes l_q projecting to l, in turn have an interpretation in terms of p-adicity. The decomposition of n into primes corresponds to a braid with strands labeled by primes representing Hilbert space dimensions.

4. This gives a connection between the hierarchy of Planck constants and dark matter and quantum arithmetics. The strands of the braid labeled by l decompose into strands corresponding to the different sheets of the singular covering of the imbedding space: here one has however a quantum direct sum decomposition, meaning that particles are delocalized in the fiber of the covering.

The conservation of number-theoretic multiplicative momenta at the ×_q vertex allows one to deduce the selection rules telling what happens in vertices involving particles with different values of Planck constant. There are two options depending on whether r = hbar/hbar_0 satisfies r=n or r=n+1, where n characterizes the Hilbert space dimension assignable to the covering of the imbedding space. For both options one can imagine a concrete phase transition leading from inanimate matter to living matter interpreted in terms of phases with a non-standard value of Planck constant.

Consider now the two little observations.

1. The first little observation is that these selection rules mean a deviation from the earlier proposal that only particles with the same value of Planck constant can appear in a given vertex. This assumption explains why dark matter, identified as phases with a non-standard value of Planck constant, decouples from ordinary matter at vertices. This explanation is however not lost, albeit weakened. If the ×_q vertex contains two particles with r=n+1 for the r=n option (r=1 or 2 for the r=n+1 option), also the third particle has the ordinary value of Planck constant, so that ordinary matter effectively decouples from dark matter. For the +_q vertex the decoupling of ordinary from dark matter occurs for the r=n+1 option but not for the r=n option. Hence r=n+1 could explain the virtual decoupling of dark and ordinary matter from each other.

2. The second little observation relates to the inclusions of hyper-finite factors, which should relate closely to quantum p-adic primes because finite measurement resolution should be describable by HFFs. For the prime p=2 one obtains the quantum dimension 2_q = 2cos(2π/n) in the most general case: n=p corresponds to p-adicity and more general values of n to n-adicity. The interesting observation concerns the quantum dimension [M:N] obtained as the quantum factor space M/N for the Jones inclusion of a hyper-finite factor of type II_1, with N interpreted as an algebra creating states not distinguishable from each other in the measurement resolution used. This quantum dimension is 2_q^2 and has an interpretation as the dimension of a 2×2 quantum matrix algebra. This observation suggests the existence of an infinite hierarchy of inclusions with [M:N] = p_q^2 labelled by primes p. The integer n would correspond to n-adicity meaning p-adicity for the factors of n.
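The quantum dimensions quoted in observation 2 can be tabulated numerically. This follows the text's convention 2_q = 2cos(2π/n) with q = exp(i2π/n) as written (other references use 2cos(π/n) for Jones inclusions; no correction is attempted here), with [M:N] = 2_q^2.

```python
# Sketch: evaluate 2_q = q + 1/q for q = exp(i2π/n) and the candidate
# quantum dimensions [M:N] = 2_q^2, following the convention of the text.

import math

def two_q(n):
    """Quantum dimension 2_q = 2 cos(2π/n)."""
    return 2 * math.cos(2 * math.pi / n)

for n in (5, 7, 11, 13):                 # n = p gives the p-adic cases
    print(n, round(two_q(n) ** 2, 4))    # candidate values of [M:N]
```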

For details see the new chapter Quantum Adeles of "Physics as Generalized Number Theory".

### Progress in number theoretic vision about TGD

During the last weeks I have been writing a new chapter Quantum Adeles. The key idea is the generalization of p-adic number fields to their quantum counterparts, and the key problem is what quantum p-adics and quantum adeles mean. The second key question is how these notions relate to various key ideas of quantum TGD proper. The new chapter gives the details: here I just list the basic ideas and results.

The first guess is that one obtains quantum p-adics from p-adic integers by decomposing them to products of primes l first, and then expressing the primes l in all possible manners as power series of p by allowing the coefficients to be also larger than p but containing only prime factors p_1<p. In the decomposition of the coefficients to primes p_1<p these primes are replaced with quantum primes assignable to p.

One could pose the additional condition that the coefficients are smaller than p^N and decompose into products of primes l<p^N mapped to quantum primes assigned with q = exp(i2π/p^N). The interpretation would be in terms of pinary cutoff. For N=1 one would obtain the counterpart of p-adic numbers. For N>1 this correspondence assigns to an ordinary p-adic integer a larger number of quantum p-adic integers, and one can define a natural projection to the ordinary p-adic integer and its direct quantum counterpart with coefficients a_k<p in the pinary expansion, so that a covering space of p-adics results. One expects also that it is possible to assign what one could call a quantum Galois group to this covering, and the crazy guess is that it is isomorphic with the Absolute Galois Group defined as the Galois group of algebraic numbers as an extension of rationals.
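The 1-to-many correspondence just described can be illustrated in a toy form (the enumeration below is my own sketch and ignores the prime-factor condition on the coefficients, keeping only the bound a_k < p^N): for N=1 a p-adic integer has its unique pinary expansion, while for N>1 it lifts to several expansions, i.e. a covering.

```python
# Hedged toy illustration: all representations n = sum a_k p^k with digits
# a_k < p^N. For N = 1 the expansion is unique; for N > 1 one ordinary
# p-adic integer corresponds to several "quantum" expansions.

def expansions(n, p, N, prefix=()):
    """All tuples (a_0, a_1, ...) with a_k < p^N and sum a_k p^k == n."""
    if n == 0:
        return [prefix]
    out = []
    for a in range(min(n, p**N - 1) + 1):
        if (n - a) % p == 0:
            out += expansions((n - a) // p, p, N, prefix + (a,))
    return out

print(expansions(6, 2, 1))       # unique ordinary binary expansion (0, 1, 1)
print(len(expansions(6, 2, 2)))  # 4 expansions with digits < 4: a covering
```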

One must admit that the details are not fully clear yet. For instance, one can consider quantum p-adics defined as power series of p^N with coefficients a_n<p^N expressed as products of quantum primes l<p^N. Even in the case that only the N=1 option works, the work has led to a surprisingly detailed understanding of the relationship between different pieces of TGD.

This step is however not enough for quantum p-adics.

1. The first additional idea is that one replaces p-adic integers with wave functions in the covering spaces associated with the prime factors l of integers n. This delocalization would give a genuine content for the attribute "quantum" as it does in the case of electron in hydrogen atom.

The natural physical interpretation of these wave functions would be as cognitive representations of the quantum states in the matter sector, so that momentum, spin and various internal quantum numbers would find a cognitive representation in quantum Galois degrees of freedom.

One could talk about self-reference: the unseen internal degrees of freedom associated with p-adic integers would make it possible to represent physical information. Also the ratios of infinite primes reducing to unity give rise to a similar but infinite-dimensional number theoretical anatomy of real numbers and lead to what I call Brahman=Atman identity.

2. The second additional idea is to replace numbers with sequences of arithmetic operations, that is quantum sum +q and quantum product ×q represented as fundamental 3-vertices, and to formulate the basic laws of arithmetic as symmetries of these vertices; natural additional symmetry conditions give rise to selection rules. These sequences of arithmetic operations with sets of integers as inputs and outputs are analogous to Feynman diagrams, and the factorization of integers to primes has the decomposition of a braid to braid strands as a direct correlate. One can also group incoming integers to sub-groups, and the hierarchy of infinite primes describes this grouping.

A beautiful physical interpretation for the number theoretic Feynman diagrams emerges.

1. The decompositions of the integers m and n of a quantum rational m/n to products of primes l correspond to the decompositions of two braids to braid strands labeled by primes l. TGD predicts both time-like and space-like braids having their ends at partonic 2-surfaces. These two kinds of braids would naturally correspond to the two co-prime integers defining the quantum rational m/n.

2. The two basic vertices +q and ×q correspond to the fusion vertex for stringy diagrams and the 3-vertex for Feynman diagrams: both vertices have TGD counterparts and correspond at the Hilbert space level to direct sum and tensor product. Note that the TGD inspired interpretation of +q (direct sum) is different from the string model interpretation (tensor product). The incoming and outgoing integers in the Feynman diagram correspond to Hilbert space dimensions, and the decomposition to prime factors corresponds to the decomposition of the Hilbert space to prime Hilbert spaces as tensor factors.

3. Ordinary arithmetic operations have an interpretation in terms of tensor product and direct sum, and one can formulate associativity, commutativity, and distributivity of product and sum as conditions on Feynman diagrams. These conditions imply that all loops can be transformed away by basic moves, so that the diagram reduces to a diagram obtained by fusing only sums and products in the initial state to produce a single line, which then decays to outgoing states by co-sum and co-product. Also the incoming lines attaching to the same line can be permuted, and the permutation can induce only a phase factor. The conjecture that these rules hold true also for generalized Feynman diagrams is obviously extremely powerful and consistent with the picture provided by zero energy ontology. Also a connection with the twistor approach is suggestive.

4. Quantum adeles for ordinary rationals can be defined as Cartesian products of quantum p-adics and of reals or rationals. For algebraic extensions of rationals a similar definition applies, but allowing only those p-adic primes which do not split to a product of primes of the extension. Number theoretic evolution means an increasing dimension for the algebraic extension of rationals, and this means that an increasing number of p-adic primes drop from the adele. This means a selective pressure under which only the fittest p-adic primes survive. The basic question is why Mersenne primes and some primes near powers of two are the survivors.
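The dimension bookkeeping behind points 2 and 3 can be illustrated with a toy model in which a line is represented by the multiset of its prime braid strands; here ×q is modeled as a plain tensor product and +q as a direct sum of dimensions (a minimal sketch of the classical limit of the vertices, not of the quantum vertices themselves; all names are hypothetical):

```python
from collections import Counter

def strands(n):
    """Prime 'braid strands' of a line labeled by integer n: the multiset
    of its prime factors, i.e. the prime tensor factors of an n-dim space."""
    c, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            c[d] += 1; n //= d
        d += 1
    if n > 1:
        c[n] += 1
    return c

def dim(c):
    """Dimension of the Hilbert space with the given prime tensor factors."""
    out = 1
    for p, k in c.items():
        out *= p**k
    return out

def times_q(a, b):
    """x_q vertex: tensor product = union of prime strands (conserves them)."""
    return a + b

def plus_q(a, b):
    """+_q vertex: direct sum = addition of dimensions (strands not conserved)."""
    return strands(dim(a) + dim(b))
```

Distributivity, 6×(4+5) = 6×4 + 6×5, then holds at the level of dimensions, while the product vertex conserves the strand content: `times_q(strands(4), strands(6))` equals `strands(24)`.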

The connection with infinite primes

A beautiful connection with the hierarchy of infinite primes emerges.

1. The simplest infinite primes at the lowest level of the hierarchy define two integers having no common prime divisors, and thus a rational number with an interpretation in terms of time-like and space-like braids characterized by co-prime integers.

2. Infinite primes at the lowest level code for algebraic extensions of rationals, so that the infinite primes which are survivors in the evolution dictate which p-adic primes manage to avoid splitting. Infinite primes coding for algebraic extensions have an interpretation as bound states, and the most stable bound states - and the p-adic primes able to resist the corresponding splitting pressures - survive.

At the n:th level of the hierarchy infinite primes correspond to monic polynomials of n variables constructed from prime polynomials of n-1 variables constructed from.... The polynomials of a single variable are in 1-1 correspondence with ordered collections of n rationals. This collection corresponds to n pairs of time-like and space-like braids. Thus infinite primes code for collections of lower level infinite primes coding for..., and eventually everything boils down to collections of rational coefficients of the monic polynomials coding for infinite primes at the lowest level of the hierarchy. In generalized Feynman diagrams this would correspond to groups of groups of .... of groups of integers of incoming and outgoing lines.

3. The physical interpretation is in terms of pairs of time-like and space-like braids having ends at partonic 2-surfaces, with strands labelled by primes and defining as their product an integer: the rational is the ratio of these integers. From these basic braids one can form collections of braid pairs labelled by infinite primes at the second level of the hierarchy, and so on, and a beautiful connection with the earlier vision about infinite primes as coders of an infinite hierarchy of braids of braids of... emerges. Space-like and time-like braids play a key role in generalized Feynman diagrams and represent rationals, which supports the interpretation of generalized Feynman diagrams as arithmetic Feynman diagrams. The connection with many-sheeted space-time, in which sheets containing smaller sheets define higher level particles, emerges too.

4. Number theoretic dynamics for ×q conserves the total numbers of prime factors, so that one can talk either about an infinite number of conserved number theoretic momenta coming as multiples of log(p), p prime, or about particle numbers assignable to the primes p: p^n corresponds to an n-boson state, and the finite parts of infinite primes correspond to states with fermion number one for each prime and arbitrary boson number. The infinite parts of infinite primes correspond to fermion number zero in each mode. The two braids could also correspond to braid strands with fermion number 0 and 1. The bosonic and fermionic excitations would naturally correspond to the generators of super-conformal algebras assignable to light-like and space-like 3-surfaces.

The interpretation of integers representing particles as Hilbert space dimensions

In number theoretic dynamics particles are labeled by integers decomposing to primes interpreted as labels for braid strands. Both time-like and space-like braids appear. The interpretation of sum and product in terms of direct sum and tensor product implies that these integers must correspond to Hilbert space dimensions. Hilbert spaces indeed decompose to tensor product of prime-dimensional Hilbert spaces stable against further decomposition.

A second natural decomposition appearing in representation theory is into direct sums. This decomposition would take place for a prime-dimensional Hilbert space with dimension l, with the summands having dimensions a_n p^n given by the pinary expansion of l. The replacement of a_n with a quantum integer would mean decomposition of the summand to a tensor product of quantum Hilbert spaces, with dimensions which are quantum primes, and of a p^n-dimensional ordinary Hilbert space. This should relate to the finite measurement resolution.

×q vertex would correspond to tensor product and +q to direct sum with this interpretation. Tensor product automatically conserves the number theoretic multiplicative momentum defined by n in the sense that the outgoing Hilbert space is tensor product of incoming Hilbert spaces. For +q this conservation law is broken.

Connection with the hierarchy of Planck constants, dark matter hierarchy, and living matter

The obvious question concerns the interpretation of the Hilbert spaces assignable to braid strands. The hierarchy of Planck constants interpreted in terms of a hierarchy of phases behaving like dark matter suggests the answer here.

1. The enormous vacuum degeneracy of Kähler action implies that the normal derivatives of imbedding space coordinates, both at space-like 3-surfaces at the boundaries of CD and at light-like wormhole throats, are many-valued functions of canonical momentum densities. Two directions are necessary by the strong form of holography implying effective 2-dimensionality, so that only partonic 2-surfaces and their tangent space data are needed instead of 3-surfaces. This implies that space-time surfaces can be regarded as surfaces in local singular coverings of the imbedding space. At partonic 2-surfaces the sheets of the coverings coincide.

2. By the strong form of holography there are two integers characterizing the covering, and the obvious interpretation is in terms of the two integers characterizing infinite primes and the time-like and space-like braids decomposing into braids labelled by primes. The braid labelled by prime l would naturally correspond to a braid strand and its copies in the l points of the covering. The state space defined by amplitudes in the n-fold covering would be n-dimensional and decompose into a tensor product of state spaces with prime dimension. These prime-dimensional state spaces would correspond to wave functions in prime-dimensional sub-coverings.

3. Quantum primes are obtained as different sum decompositions of primes l and correspond to direct sum decompositions of the l-dimensional state space associated with the braid defined by the l-fold sub-covering. What suggests itself strongly is a symmetry breaking. This breaking would mean the geometric decomposition of l strands to subsets with numbers of elements proportional to powers p^n of p. Could a_n p^n in the expression of l as ∑ a_k p^k correspond to a tensor product of an a_n-dimensional space with the finite field G(p,n)? Does this decomposition to state functions localized to sub-braids relate to symmetries and symmetry breaking somehow? Why would the a_n-dimensional Hilbert space be replaced with a tensor product of quantum-p1-dimensional Hilbert spaces? A proper understanding of this issue is needed in order to have a more rigorous formulation of quantum p-adics.

4. Number theoretical dynamics would therefore relate directly to the hierarchy of Planck constants. This would also dictate what happens for Planck constants in the two vertices. There are two options.

1. For the ×q vertex the outgoing particle would have a Planck constant which is the product of the incoming Planck constants, using the ordinary Planck constant as unit. For the +q vertex the Planck constant would be the sum. The stringy +q vertex would lead to the generation of particles with Planck constant larger than its minimum value. For ×q two incoming particles with ordinary Planck constant would give rise to a particle with ordinary Planck constant, just as one would expect for ordinary Feynman diagrams.

2. Another possible scenario is the one in which the Planck constant is given by hbar/hbar_0 = n-1. In this case particles with ordinary Planck constant fuse to particles with ordinary Planck constant in both vertices.

For both options the feeding of particles with a non-standard value of Planck constant into the system can lead to a fusion cascade generating dark matter particles with a very large value of Planck constant. Large Planck constant means macroscopic quantum phases, assumed to be crucial in TGD inspired biology. The obvious proposal is that inanimate matter transforms to living - and thus also to dark - matter by this kind of phase transition in the presence of a feed of particles, say photons, with a non-standard value of Planck constant.
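Option 1 above amounts to simple arithmetic on the Planck constant labels, and the cascade is easy to sketch (the function name and the repeated two-particle feed are illustrative assumptions):

```python
from functools import reduce
from operator import mul, add

def out_hbar(ns, vertex):
    """Outgoing Planck constant in units of the ordinary one, for option 1:
    the x_q vertex multiplies the incoming values, the +_q vertex adds them."""
    return reduce(mul if vertex == "x" else add, ns)

# A steady feed of non-standard particles (here n=2) fused at x_q vertices
# cascades to a very large Planck constant:
n = 1
for _ in range(10):
    n = out_hbar([n, 2], "x")   # each fusion doubles the Planck constant
```

Note that `out_hbar([1, 1], "x") == 1`: two ordinary particles give an ordinary particle, as stated in the text, while ten fusions with an n=2 feed already give n = 1024.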

Summary

The work with quantum p-adics and quantum adeles and the generalization of the number field concept to a quantum number field in the framework of zero energy ontology has led to amazingly deep connections between p-adic physics as physics of cognition, infinite primes, hierarchy of Planck constants, vacuum degeneracy of Kähler action, generalized Feynman diagrams, and braids. The physics of life would rely crucially on p-adic physics of cognition. The optimist inside me even insists that the basic mathematical structures of TGD are now rather well-understood. This fellow even uses the word "breakthrough" without blushing. I have of course continually admonished him for his reckless exaggerations but in vain.

The skeptic inside me continues to ask how this construction could fail. A possible Achilles heel relates to the detailed definition of the notion of quantum p-adics. For N=1 it reduces essentially to the ordinary p-adic number field mapped to reals by a quantum variant of canonical identification. Therefore most of the general picture survives even for N=1. What would be lost are the wave functions in the space of quantum variants of a given prime, and also the crazy conjecture that the quantum Galois group is isomorphic to the Absolute Galois Group.

For details see the new chapter Quantum Adeles.

### Progress in understanding of quantum p-adics

Quantum arithmetics is a notion which emerged as a possible resolution of the long-standing challenge of finding a mathematical justification for the canonical identification mapping p-adics to reals, which plays a key role in p-adic mass calculations. The model for the Shnoll effect was the bridge leading to the discovery of quantum arithmetics.

I have been gradually developing the notion of quantum p-adics and during the weekend made quite a step of progress in understanding the concept and dare say that the notion now rests on a sound basis.

1. What quantum arithmetics suggests is a modification of p-adic numbers by replacing p-adic pinary expansions with their quantum counterparts allowing the coefficients of prime powers to be integers not divisible by p. A further important constraint is that the factors of coefficients are primes smaller than p. If the coefficients are smaller than p, one obtains something reducing effectively to ordinary p-adic number field.

2. A further constraint is that quantum integers respect the decomposition of an integer to powers of primes. Quantum p-adic integers are to p-adic integers what the integers of an extension of a number field are to the number field, and one can indeed identify a Galois group Gp for each prime p and form the adelic counterpart of this group as the Cartesian product of all Gp:s.

3. After various trials it turned out (this is what motivated this posting!) that quantum p-adics are indeed quantal in the sense that one can assign to a given quantum p-adic integer n a wave function on the orbit of the corresponding Galois group, decomposing to the Galois groups of the prime factors of n.

1. The basic conditions are that ×q and +q satisfy the basic associativity and distributivity laws. These conditions are extremely powerful and can be formulated in terms of number theoretic Feynman diagrams assignable to sequences of arithmetical operations and their co-algebra counterparts. This brings in physical insight.

2. One can interpret ×q and +q and their co-algebra operations as 3-vertices for number theoretical Feynman diagrams describing algebraic identities X=Y having a natural interpretation in zero energy ontology. The two vertices have direct counterparts as the two kinds of basic topological vertices in quantum TGD, namely stringy vertices in which a 3-surface splits, and vertices analogous to those of Feynman diagrams in which lines join along their 3-D ends. Only the latter vertices correspond to particle decays and fusions, whereas stringy vertices correspond to the splitting of the particle's path into two paths and simultaneous propagation along both of them: this is, by the way, one of the fundamental differences between quantum TGD and string models. This, plus the assumption that the Galois groups associated with primes define symmetries of the vertices, makes it possible to deduce very precise information about the symmetries of the two kinds of vertices needed to satisfy associativity and distributivity, and actually fixes them highly uniquely, and therefore determines the corresponding zero energy states having collections of integers as counterparts of incoming positive energy (or negative energy) particles.

3. Zero energy ontology leads naturally to zero energy states for which time reversal symmetry is broken in the sense that either the positive or the negative energy part corresponds to a single collection of integers as incoming lines. What is fascinating is that the prime decomposition of an integer corresponds to a decomposition of a braid to strands. C and P have interpretations as the formation of multiplicative and additive inverses of quantum integers, and CP=T exchanges the positive and negative energy parts of the number theoretic zero energy states.

4. This gives strong support for the old conjecture that generalized Feynman diagrams have number theoretic interpretation and allow moves transforming them to tree diagrams - also this generalization of old-fashioned string duality is old romantic idea of quantum TGD, which I however gave up as too "romantic". I noticed the analogy of Feynman diagrams with the algebraic expressions but failed to realize how extremely concrete the connection could be. What was left from the idea were some brief comments in Appendix A: Quantum Groups and Related Structures to one of the chapters of "Towards M-matrix".

The moves for generalized Feynman diagrams would code for associativity and distributivity of quantum arithmetics and we have actually learned them in elementary school as a process simplifying algebraic expressions! Also braidings with strands labeled by the primes dividing the integer emerge naturally so that the connection with quantum TGD proper becomes very strong and consistent with the earlier conjecture inspired by the construction of infinite primes stating that transition amplitudes have purely number theoretic meaning in ZEO.

4. Canonical identification finds a fundamental role in the definition of the norm for both quantum p-adics and quantum adeles. The construction is also consistent with the notion of number theoretic entropy, which can also have negative values (this is what makes living systems living!).

5. There are arguments suggesting that quantum p-adics form a field - one might say "quantum field" - so that also differential calculus and even integral calculus would make sense since quantum p-adics inherit almost well-ordering from reals via canonical identification.

6. One can also generalize the construction to algebraic extensions of rationals. In this case the coefficients of quantum adeles are replaced by rationals in the extension, and only those p-adic number fields for which the p-adic prime does not split into a product of primes of the algebraic extension are kept in the quantum adele associated with the rationals. This construction gives a first argument in favor of the crazy conjecture that the Absolute Galois group (AGG) is isomorphic with the Galois group of quantum adeles.
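The number theoretic entropy mentioned in point 4 can be made concrete. Assuming the form S_p = -∑ P_k log |P_k|_p for rational probabilities (an assumed form consistent with the p-adic variant of Shannon entropy; the names below are illustrative), it indeed goes negative, i.e. negentropic, for a maximally entangled state:

```python
from fractions import Fraction
import math

def p_adic_norm(x, p):
    """p-adic norm of a nonzero rational: |x|_p = p^(-k), where p^k exactly
    divides x (k negative when p divides the denominator)."""
    num, den, k = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p; k += 1
    while den % p == 0:
        den //= p; k -= 1
    return float(p) ** (-k)

def number_theoretic_entropy(probs, p):
    """S_p = -sum P_k log|P_k|_p; unlike Shannon entropy it can be negative."""
    return -sum(float(P) * math.log(p_adic_norm(P, p)) for P in probs)

probs = [Fraction(1, 5)] * 5          # maximally entangled 5-state system
shannon = -sum(float(P) * math.log(float(P)) for P in probs)
```

For the uniform probabilities 1/5 one gets |1/5|_5 = 5 and S_5 = -log 5 < 0, while the Shannon entropy of the same distribution is +log 5.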

To sum up, the vision about "Physics as generalized number theory" can also be transformed to "Number theory as quantum physics"!

For details see the new chapter Quantum Adeles.

### Quantum Adeles

Quantum arithmetics is a notion which emerged as a possible resolution of the long-standing challenge of finding a mathematical justification for the canonical identification mapping p-adics to reals, which plays a key role in p-adic mass calculations. The model for the Shnoll effect was the bridge leading to the discovery of quantum arithmetics.

1. What quantum arithmetics suggests is a modification of p-adic numbers by replacing p-adic pinary expansions with their quantum counterparts allowing the coefficients of prime powers to be integers not divisible by p.

2. A further constraint is that quantum integers respect the decomposition of an integer to powers of primes. Quantum p-adic integers are to p-adic integers what the integers of an extension of a number field are to the number field, and one can indeed identify a Galois group Gp for each prime p and form the adelic counterpart of this group as the Cartesian product of all Gp:s. After various trials it turned out that quantum p-adics are indeed quantal in the sense that one can assign to a given quantum p-adic integer n a wave function on the orbit of the corresponding Galois group, decomposing to the Galois groups of the prime factors of n. The basic conditions are that ×q and +q satisfy the basic associativity and distributivity laws.

One can interpret ×q and +q and their co-algebra operations as 3-vertices for number theoretical Feynman diagrams describing algebraic identities X=Y having a natural interpretation in zero energy ontology. The two vertices have direct counterparts as the two kinds of basic topological vertices in quantum TGD (stringy vertices and vertices of Feynman diagrams). This makes it possible to deduce very precise information about the symmetries of the vertices needed to satisfy associativity and distributivity, and actually fixes them highly uniquely, and therefore determines the corresponding zero energy states having collections of integers as counterparts of incoming positive energy (or negative energy) particles.

This gives strong support for the old conjectures that generalized Feynman diagrams have number theoretic interpretation and allow moves transforming them to tree diagrams - also this generalization of old-fashioned string duality is old romantic idea of quantum TGD. The moves for generalized Feynman diagrams would code for associativity and distributivity of quantum arithmetics. Also braidings with strands labelled by the primes dividing the integer emerge naturally so that the connection with quantum TGD proper becomes very strong.

3. Canonical identification finds a fundamental role in the definition of the norm for both quantum p-adics and quantum adeles.

4. There are arguments suggesting that quantum p-adics form a field so that also differential calculus and even integral calculus would make sense since quantum p-adics inherit well-ordering from reals via canonical identification.

The ring of adeles is essentially Cartesian product of different p-adic number fields and reals.

1. The proposal is that adeles can be replaced with quantum adeles. Gp has a natural action on quantum adeles allowing one to construct representations of Gp. The norm for quantum adeles is the ordinary Hilbert space norm obtained by first mapping the quantum p-adic numbers in each factor of the quantum adele to reals by canonical identification.

2. Quantum adeles could also form a field rather than only a ring, so that also differential calculus and even integral calculus could make sense. This would allow one to replace reals by quantum adeles and in this manner achieve number theoretical universality. The natural applications would be to quantum TGD, in particular to the construction of generalized Feynman graphs as amplitudes which have values in the quantum adele valued function spaces associated with quantum adelic objects. Quantum p-adics and quantum adeles also suggest solutions to a number of nasty little inconsistencies which have plagued the p-adicization program.

3. One must of course admit that quantum arithmetics is far from a polished mathematical notion. It would require a lot of work to see whether the dream about an associative and distributive function field like structure, allowing one to construct differential and integral calculus, is realized in terms of quantum p-adics and even in terms of quantum adeles. This would provide a realization of number theoretical universality.
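The norm described in point 1 can be sketched for a toy adele with finitely many p-adic factors and a non-negative integer component standing in for a p-adic integer (these restrictions and the function names are simplifying assumptions of the sketch):

```python
import math

def canonical_identification(x, p, digits=30):
    """Map a p-adic number with pinary digits a_k (x = sum a_k p^k) to the
    real number sum a_k p^(-k). Here x is a non-negative integer used as a
    stand-in for a p-adic integer with finitely many digits."""
    r, k = 0.0, 0
    while x > 0 and k < digits:
        r += (x % p) * float(p) ** (-k)
        x //= p
        k += 1
    return r

def adelic_norm(x, primes):
    """Hilbert-space-style norm for a toy adele: map the component in each
    p-adic factor to a real number by canonical identification and take the
    Euclidean norm over the (finitely many) factors kept."""
    return math.sqrt(sum(canonical_identification(x, p) ** 2 for p in primes))
```

For example, 7 = 1 + 2 + 4 has the 2-adic canonical image 1 + 1/2 + 1/4 = 1.75, and the toy adelic norm over the primes {2, 3} combines this with the 3-adic image 1 + 2/3.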

Ordinary adeles are a fundamental technical tool in the Langlands correspondence. The goal of the classical Langlands program is to understand the Galois group of algebraic numbers as an extension of rationals - the Absolute Galois Group (AGG) - through its representations. Invertible adeles define Gl_1, which can be shown to be isomorphic with the Galois group of the maximal Abelian extension of rationals (MAGG), and the Langlands conjecture is that the representations of algebraic groups with matrix elements replaced with adeles provide information about AGG and algebraic geometry.

The crazy question is whether quantum adeles could be isomorphic with algebraic numbers and whether the Galois group of quantum adeles could be isomorphic with AGG or with its commutator group. If so, AGG would naturally act as symmetries of quantum TGD. The connection with infinite primes leads to a proposal for what quantum p-adics and quantum adeles associated with algebraic extensions of rationals could be, and provides support for the conjecture. The Galois group of the quantum p-adic prime p would be isomorphic with the ordinary Galois group permuting the factors in the representation of this prime as a product of primes of the algebraic extension in which the prime splits.

Objects known as dessins d'enfant provide a geometric representation for AGG in terms of its action on algebraic Riemann surfaces, allowing interpretation also as algebraic surfaces in finite fields. This representation would make sense for algebraic partonic 2-surfaces, could be important in the intersection of real and p-adic worlds assigned with living matter in TGD inspired quantum biology, and would make it possible to regard the quantum states of living matter as representations of AGG. Quantum Adeles would make these representations very concrete by bringing in cognition represented in terms of quantum p-adics.

Quantum Adeles could make it possible to realize number theoretical universality in the TGD framework and would be essential in the construction of generalized Feynman diagrams as amplitudes in the tensor product of state spaces assignable to real and p-adic number fields. Canonical identification would allow mapping the amplitudes to reals and complex numbers. Quantum Adeles also provide a fresh view of the conjectured M^8-M^4×CP_2 duality and of the two suggested realizations for the decomposition of space-time surfaces to associative/quaternionic and co-associative/co-quaternionic regions.

For details see the new chapter Quantum Adeles.

### Quantum p-adic deformations of space-time surfaces as a representation of finite measurement resolution?

A mathematically fascinating question is whether one could use quantum arithmetics as a tool to build quantum deformations of partonic 2-surfaces or even of space-time surfaces, and how one could achieve this. These quantum space-times would be commutative and therefore not like the non-commutative geometries assigned with quantum groups. Perhaps one could see them as commutative semiclassical counterparts of non-commutative quantum geometries, just as commutative quantum groups (see this) could be seen as commutative counterparts of quantum groups.

As one tries to develop a new mathematical notion and interpret it, one tends to forget the motivations for the notion. It is however extremely important to remember why the new notion is needed.

1. In the case of quantum arithmetics Shnoll effect is one excellent experimental motivation. The understanding of canonical identification and realization of number theoretical universality are also good motivations coming already from p-adic mass calculations. A further motivation comes from a need to solve a mathematical problem: canonical identification for ordinary p-adic numbers does not commute with symmetries.

2. There are also good motivations for p-adic numbers. p-Adic numbers and quantum phases can be assigned to a finite measurement resolution in length and angle measurements, and with good reason, since a finite measurement resolution means the loss of the ordering of the points of the real axis in short scales, and this is certainly one outcome of a finite measurement resolution. This is also assumed to relate to the fact that cognition organizes the world into objects defined by lumps of matter, and within the lumps the ordering of points does not matter.

3. Why would quantum deformations of partonic 2-surfaces (or more ambitiously: space-time surfaces) be needed? Could they serve as convenient representatives for partonic 2-surfaces (space-time surfaces) within a finite measurement resolution?

1. If this is accepted, there is no compelling need to assume that these space-time surfaces are preferred extremals of Kähler action.

2. The notion of quantum arithmetics and the interpretation of p-adic topology in terms of finite measurement resolution however suggest that they might obey field equations in preferred coordinates, not in the real differentiable structure but in what might be called the quantum p-adic differentiable structure associated with the prime p.

3. Canonical identification would map these quantum p-adic partonic 2-surfaces (or space-time surfaces) to their real counterparts in a unique and continuous manner, and the image would be the real space-time surface in the finite measurement resolution. It would be continuous but not differentiable and would of course no longer satisfy the field equations for Kähler action. What is nice is that the inverse of the canonical identification, which is two-valued for a finite number of pinary digits, would not be needed in the correspondence.

4. This description might be relevant also to quantum field theories (QFTs). One usually assumes that the minima of the effective action obey partial differential equations although the local interactions in QFTs are highly singular, so that the quantum average field configuration might not even possess a differentiable structure in the ordinary sense! Therefore quantum p-adicity might be more appropriate for the minima of the effective action.

The conclusion would be that commutative quantum deformations of space-time surfaces indeed have a useful function in TGD Universe.

Consider now in more detail the identification of the quantum deformations of space-time surfaces.

1. Rationals are in the intersection of real and p-adic number fields, and the representation of numbers as rationals r=m/n is the essence of quantum arithmetics. This means that m and n are expanded to series in powers of p, and the coefficients of the powers of p, which are smaller than p, are replaced by their quantum counterparts. This restriction is essential for the uniqueness of the map assigning quantum rationals to a given rational.

2. One must also get quantum p-adics, and the idea is simple: if the pinary expansions of m and n in positive powers of p are allowed to become infinite, one obtains a continuum very much analogous to that of ordinary p-adic integers with exactly the same arithmetics. This continuum can be mapped to reals by canonical identification. The possibility to work with numbers which are formally rationals is of utmost importance for achieving the correct map to reals. It is possible to use the counterparts of ordinary pinary expansions in p-adic arithmetics.

3. One can define quantum p-adic derivatives, and the rules are familiar to anyone. Quantum p-adic variants of the field equations for Kähler action make sense.

1. One can take a solution of the p-adic field equations and, by the commutativity of the map r=m/n → r_q=m_q/n_q with the arithmetic operations, replace the p-adic rationals with their quantum counterparts in the expressions of the quantum p-adic imbedding space coordinates h^k in terms of the space-time coordinates x^α.

2. After this one can map the quantum p-adic surface to a continuous real surface by using the replacement p→1/p for every quantum rational. This space-time surface no longer satisfies the field equations since canonical identification is not even differentiable. This surface - or rather its quantum p-adic pre-image - would represent the space-time surface within the measurement resolution. One can however map the induced metric and induced gauge fields to their real counterparts using canonical identification to get something which is continuous but non-differentiable.

4. This construction works nicely if, in the preferred coordinates for the imbedding space and for the partonic (space-time) surface itself, the imbedding space coordinates are rational functions of the space-time coordinates with rational coefficients for the polynomials involved (also Taylor and Laurent series with rational coefficients could be considered as limits). This kind of assumption is very restrictive but in accordance with the fact that the measurement resolution is finite and that the representative for the space-time surface in finite measurement resolution is to some extent a convention. The use of rational coefficients for the polynomials implies that for polynomials of finite degree WCW reduces to a discrete set, so that finite measurement resolution is indeed realized quite concretely!

Consider now how the notion of finite measurement resolution allows one to circumvent the objections against the construction.

1. Manifest GCI is lost because the expression for space-time coordinates as quantum rationals is not a general coordinate invariant notion unless one restricts the consideration to rational maps, and because the real counterpart of the quantum p-adic space-time surface depends on the choice of coordinates. The condition that the space-time surface is represented in terms of rational functions is a strong constraint but not enough to fix the choice of coordinates: rational maps of both the imbedding space and the space-time surface produce new coordinates of the same kind, provided the coefficients are rational.

2. Different coordinate choices for the imbedding space and the space-time surface lead to different quantum p-adic space-time surfaces and real counterparts. This is an outcome of finite measurement resolution. Since one cannot order the space-time points below the measurement resolution, one can fix uniquely neither the space-time surface nor the coordinates used. This implies the loss of manifest general coordinate invariance and also the non-uniqueness of the quantum real space-time surface. The choice of coordinates is analogous to a gauge choice, and the quantum real space-time surface preserves the information about the gauge.

For background see chapter Quantum Arithmetics.

### Anatomy of quantum jump in zero energy ontology

Consider now the anatomy of quantum jump identified as a moment of consciousness in the framework of Zero energy ontology (ZEO).

1. Quantum jump begins with unitary process U described by unitary matrix assigning to a given zero energy state a quantum superposition of zero energy states. This would represent the creative aspect of quantum jump - generation of superposition of alternatives.

2. The next step is a cascade of state function reductions proceeding from long to short scales. It starts from some CD and proceeds downwards to sub-CDs, to their sub-CDs, and so on. At a given step it induces a measurement of the quantum numbers of either the positive or the negative energy part of the quantum state. This step would represent the measurement aspect of quantum jump - selection among alternatives.

3. The basic variational principle is the Negentropy Maximization Principle (NMP), stating that the reduction of entanglement entropy in a given quantum jump between two subsystems of a CD assigned to sub-CDs is maximal. Mathematically NMP is very similar to the second law, although it states just the opposite - but for an individual quantum system rather than an ensemble. NMP actually implies the second law at the level of ensembles as a trivial consequence of the fact that the outcome of a quantum jump is not deterministic.

For the ordinary definition of entanglement entropy this leads to a pure state resulting from the measurement of the density matrix assignable to the pair of CDs. For hyper-finite factors of type II_1 (HFFs) state function reduction cannot give rise to a pure state; in this case one can speak about quantum states defined modulo finite measurement resolution, and the notion of quantum spinor emerges naturally. One can assign a number theoretic entanglement entropy to entanglement characterized by rational (or even algebraic) entanglement probabilities, and this entropy can be negative. Negentropic entanglement can be stable, and even more negentropic entanglement can be generated in the state function reduction cascade.
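The number theoretic entanglement entropy for rational probabilities replaces log(P_k) in the Shannon formula by the logarithm of the p-adic norm |P_k|_p, and can indeed be negative. A minimal sketch (the definition via the p-adic norm is the assumption here):

```python
from fractions import Fraction
from math import log

def p_adic_norm(x, p):
    """|x|_p = p^(-v), where v is the p-adic valuation of the rational x."""
    if x == 0:
        raise ValueError("the p-adic norm of 0 is not needed for probabilities")
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return float(p) ** (-v)

def number_theoretic_entropy(probs, p):
    """S_p = -sum_k P_k log|P_k|_p; a negative value means negentropy."""
    return -sum(float(P) * log(p_adic_norm(P, p)) for P in probs)

def shannon_entropy(probs):
    return -sum(float(P) * log(float(P)) for P in probs)

# Maximally entangled pair with four rational probabilities 1/4:
# Shannon entropy is log 4 > 0, while for p = 2 one gets
# S_2 = -log 4 < 0, i.e. the entanglement carries information.
probs = [Fraction(1, 4)] * 4
```

The sign flip comes from |1/4|_2 = 4 > 1: for probabilities whose denominators are divisible by p, the p-adic logarithm is positive where the ordinary one is negative.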

The irreversibility is realized as a property of zero energy states (in ordinary positive energy ontology it is realized at the level of dynamics) and is necessary in order to obtain a non-trivial U-matrix. State function reduction should involve several parts. First of all it should select the density matrix, or rather its Hermitian square root. After this choice it should lead to a state which is prepared either at the upper or the lower boundary of CD but not both, since this would be in conflict with the counterpart of the determinism of quantum time evolution.
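The reduction cascade described above can be caricatured with ordinary quantum mechanics: at each step the density matrix of a subsystem is measured in its eigenbasis and the state is projected accordingly. The following toy model (arbitrary dimensions, random initial state) ignores NMP and all p-adic structure, and is only a sketch of the cascade idea:

```python
import numpy as np

def reduction_step(psi, d0, rng):
    """Measure the density matrix of the first tensor factor (dimension d0)
    of a bipartite pure state and project onto the observed eigenstate."""
    amp = psi.reshape(d0, -1)
    rho = amp @ amp.conj().T                  # reduced density matrix
    w, V = np.linalg.eigh(rho)                # eigenvalues = probabilities
    w = np.clip(w, 0.0, None)
    k = rng.choice(d0, p=w / w.sum())         # Born rule in the eigenbasis
    rest = V[:, k].conj() @ amp               # collapsed complement state
    return V[:, k], rest / np.linalg.norm(rest)

def reduction_cascade(psi, dims, rng):
    """Cascade from long to short scales: peel off one factor at a time,
    ending with an unentangled product state."""
    outcomes = []
    for d in dims[:-1]:
        outcome, psi = reduction_step(psi, d, rng)
        outcomes.append(outcome)
    outcomes.append(psi)
    return outcomes

rng = np.random.default_rng(1)
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)
parts = reduction_cascade(psi, [2, 2, 2], rng)  # three 2-state subsystems
```

Each pass reduces the entanglement between the measured factor and the rest to zero, which is the ordinary-entropy analog of the entropy reduction the cascade is supposed to achieve.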

Generalization of S-matrix

ZEO forces the generalization of the S-matrix to a triplet formed by U-matrix, M-matrix, and S-matrix. The basic vision is that quantum theory is at the mathematical level a complex square root of thermodynamics. What happens in quantum jump was already discussed.

1. The U-matrix has as its rows the M-matrices, which are matrices between the positive and negative energy parts of the zero energy state and correspond to the ordinary S-matrix. M-matrix is a product of a Hermitian square root - call it H - of the density matrix ρ and a universal S-matrix S commuting with H: [S,H] = 0. There is an infinite number of different Hermitian square roots H_i of density matrices, which are assumed to be orthogonal with respect to the inner product defined by the trace: Tr(H_iH_j) = 0 for i ≠ j. Also the columns of the U-matrix are orthogonal. One can interpret the square roots of the density matrices as a Lie algebra acting as symmetries of the S-matrix.

2. One can consider generalization of M-matrices so that they would be analogous to the elements of Kac-Moody algebra. These M-matrices would involve all powers of S.

1. The orthogonality with respect to the inner product defined by <A|B> = Tr(AB) requires the conditions Tr(H_1H_2S^n) = 0 for n ≠ 0, where the H_i are Hermitian matrices appearing as square roots of density matrices. H_1H_2 is Hermitian if the commutator [H_1,H_2] vanishes. It would be natural to assign the n:th power of S to the CD for which the scale is n times the CP2 scale.

2. The trace (possibly the quantum trace for hyper-finite factors of type II_1) is the analog of integration, and the formula would be a non-commutative analog of the identity ∫_{S^1} exp(inφ) dφ = 0, posing an additional condition on the algebra of M-matrices. Since H = H_1H_2 commutes with the S-matrix, the trace can be expressed as the sum

Tr(HS) = ∑_{i,j} h_i s_j(i) = ∑_{i,j} h_i(j) s_j

of products of corresponding eigenvalues, and the simplest condition is that one has either ∑_j s_j(i) = 0 for each i or ∑_i h_i(j) = 0 for each j.

3. It might be that one must restrict the M-matrices to a Cartan algebra for a given U-matrix, and also this choice would be a process analogous to state function reduction. Since the density matrix becomes an observable in the TGD Universe, this choice could be seen as a direct counterpart for the choice of a maximal number of commuting observables, which would now be Hermitian square roots of density matrices. Therefore ZEO gives good hopes of reducing basic quantum measurement theory to an infinite-dimensional Lie algebra.
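A finite-dimensional caricature makes the trace-orthogonality of the M-matrices concrete. Here S is taken diagonal with root-of-unity eigenvalues and the H_i are commuting diagonal matrices built from normalized Hadamard rows; all of these choices are illustrative assumptions, not part of the construction above:

```python
import numpy as np

N = 4
# S: unitary, diagonal in the chosen basis, eigenvalues N-th roots of unity
S = np.diag(np.exp(2j * np.pi * np.arange(N) / N))

# Hermitian square roots H_i of density matrices, chosen diagonal so that
# [S, H_i] = 0; normalized Hadamard rows give Tr(H_i H_j) = 0 for i != j
hadamard = np.array([[1,  1,  1,  1],
                     [1, -1,  1, -1],
                     [1,  1, -1, -1],
                     [1, -1, -1,  1]]) / 2.0
Hs = [np.diag(row) for row in hadamard]

# Each rho_i = H_i^2 is a genuine density matrix (here I/4, with trace one)
rhos = [H @ H for H in Hs]

# M-matrices M_i = H_i S are orthonormal under <A|B> = Tr(A† B)
Ms = [H @ S for H in Hs]
gram = np.array([[np.trace(Mi.conj().T @ Mj) for Mj in Ms] for Mi in Ms])
```

Since S is unitary and [S, H_i] = 0, the Gram matrix reduces to Tr(H_iH_j), so the orthogonality of the square roots carries over directly to the M-matrices.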

Unitary process and choice of the density matrix

Consider first unitary process followed by the choice of the density matrix.

1. There are two natural state bases for zero energy states. The states of these bases are prepared at the upper or the lower boundary of CD respectively and correspond to various M-matrices M_K^+ and M_L^-. The U-process is simply a change of state basis, meaning a representation of the zero energy state M_K^{+/-} in the zero energy basis M_K^{-/+}, followed by a state preparation to a zero energy state M_K^{+/-} with the state at the second end fixed, in turn followed by a reduction to M_L^{-/+}, that is to its time reverse, which is of the same type as the initial zero energy state.

The state function reduction to a given M-matrix M_K^{+/-} produces a state which is a superposition of states prepared at either the lower or the upper boundary of CD. It does not yet produce a prepared state in the ordinary sense, since it only selects the density matrix.

2. The matrix elements of the U-matrix are obtained by acting with the representation of the identity matrix in the space of zero energy states as

I = ∑_K |K+> <K+|

on the zero energy state |K-> (the action on |K+> is trivial!) and gives

U^+_{KL} = Tr(M^+_K M^+_L) .

In a similar manner one has

U^-_{KL} = (U^{+†})_{KL} = Tr(M^-_L M^-_K) = (U^+_{LK})^* .

These matrices are Hermitian conjugates of each other as matrices between states labelled by positive or negative energy states. The interpretation is that two unitary processes are possible and that they are time reversals of each other. The unitary process produces a new state only if its time arrow is different from that of the initial state. The probabilities for the transitions |K+> → |L-> are given by

p_{KL} = |Tr(M^+_K M^+_L)|^2 .

State function preparation

Consider next the counterpart of the ordinary state preparation process.

1. The ordinary state function reduction process can act either at the upper or the lower boundary of CD, and its action is thus on the positive or negative energy part of the zero energy state. At the lower boundary of CD this process selects one particular prepared state. At the upper boundary it selects one particular final state of the scattering process.

2. Restrict for definiteness the consideration to the lower boundary of CD, and denote M_K by M. At the lower boundary of CD the selection of the prepared state - that is, the preparation process - means the reduction

∑_{m+,n-} M^{+/-}_{m+,n-} |m+> |n-> → ∑_{n-} M^{+/-}_{m+,n-} |m+> |n-> .

The reduction probability is given by

p_m = ∑_{n-} |M_{m+,n-}|^2 = ρ_{m+,m+} .

For this state the lower boundary carries a prepared state with the quantum numbers of the state |m+>. For a density matrix which is the unit matrix (this option giving a pure state might not be possible) one has p_m = 1.

State function reduction process

The process which is the analog of measuring the final state of the scattering process is also needed and would mean a state function reduction at the upper end of CD - now to a state |n->.

1. It is impossible to reduce to an arbitrary state |m+> |n->, and the reduction at the upper end of CD must mean a loss of preparation at the lower end of CD, so that one would have a kind of time flip-flop!

2. The reduction probability for the process

|m+> ≡ ∑_{n-} M_{m+,n-} |m+> |n-> → |n-> ≡ ∑_{m+} M_{m+,n-} |m+> |n->

would be

p_{mn} = |M_{m+,n-}|^2 .

This is just what one would expect. The final outcome would therefore be a state of type |n-> and - this is very important - of the same type as the state from which the process began, so that the next process is also of type U^+ and one can say that a definite arrow of time prevails.

3. Both the preparation and the reduction process also involve a cascade of state function reductions leading to a choice of state basis corresponding to eigenstates of the density matrices between subsystems.
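The preparation and reduction probabilities above are simple to check numerically. The 3×3 random M-matrix below is an arbitrary illustration, normalized so that Tr(MM†) = 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy M-matrix between positive energy states (rows, index m+) and
# negative energy states (columns, index n-)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
M /= np.sqrt(np.trace(M @ M.conj().T).real)

# Preparation at the lower boundary: p_m = sum_n |M_mn|^2 = rho_mm
rho = M @ M.conj().T
p_prep = (np.abs(M) ** 2).sum(axis=1)

# Reduction at the upper boundary: p_mn = |M_mn|^2, summing to one
p_red = np.abs(M) ** 2
```

The row sums of |M_mn|^2 reproduce the diagonal of the density matrix ρ = MM†, and the full array of reduction probabilities sums to one thanks to the trace normalization.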

Can the arrow of geometric time change?

A highly interesting question is what happens if the first state preparation, leading to a state |K+>, is followed by a U-process of type U^- rather than by the state function reduction process |K+> → |L->. Does this mean that the arrow of geometric time changes? Could this change of the arrow of geometric time take place in living matter? Could processes like molecular self-assembly be entropy producing processes but with a non-standard arrow of geometric time? Or are they processes in which negentropy increases by the fusion of negentropic parts to larger ones? Could the variability relate to the sleep-wake cycle and to the fact that during dreams we are often in our childhood and youth? Old people are often said to return to their childhood. Could this have more than a metaphoric meaning? Could biological death mean a return to childhood at the level of conscious experience? I have explained the recent views about the arrow of time here.

For background see chapter Negentropy Maximization Principle.