# p-Adic Length Scale Hypothesis and Dark Matter Hierarchy

Note: Newest contributions are at the top!

## Year 2007

### Are the abundances of heavier elements determined by cold fusion in interstellar medium?

According to the standard model, only elements no heavier than Li were created in the Big Bang. Heavier elements were produced in stars by nuclear fusion, ended up in interstellar space in supernova explosions, and were gradually enriched in this process. The lithium problem forces one to take this theoretical framework with a grain of salt.

The work of Kervran [1] suggests that cold nuclear reactions occur at considerable rates, not only in living matter but also in inorganic matter. Kervran indeed proposes that the abundances of elements on Earth and the planets are to a high degree determined by nuclear transmutations, and discusses some examples. For instance, new mechanisms for the generation of O and Si would change dramatically the existing views about the evolution of planets and the prebiotic evolution of Earth.

This inspires the question whether elements heavier than Li could be produced in interstellar space by cold nuclear reactions. In the following I consider a model for this. The basic prediction is that the abundances of heavier elements should not depend on time if interstellar production dominates. The prediction is consistent with recent experimental findings that seriously challenge the standard model.

1. Are heavier nuclei produced in the interstellar space?

The TGD based model for cold fusion by plasma electrolysis using heavy water explains many other anomalies: for instance, the H1.5O anomaly of water and the lithium problem of cosmology (the amount of Li is considerably smaller than predicted by Big Bang cosmology; the explanation is that part of it transforms to dark Li with a larger value of hbar, present in water). The model allows one to understand the surprisingly detailed discoveries of Kervran about nuclear transmutations in living matter (often by bacteria) via possibly slight modifications of the mechanisms proposed by Kervran.

If this picture is correct, it would have dramatic technological implications. Cold nuclear reactions could provide not only a new energy technology but also a means of producing various elements artificially, say metals. The treatment of nuclear wastes might be carried out by inducing cold fissions of radioactive heavy nuclei to stable products by allowing them to collide with dark lithium nuclei in water, so that the Coulomb wall is absent. Amazingly, there are bacteria which can live in the extremely harsh conditions provided by a nuclear reactor, where anything biological should die. Perhaps these bacteria carry out this process in their own bodies.

The model also encourages one to consider a simple model for the generation of heavier elements in the interstellar medium: what is nice is that the basic prediction differentiating this model from the standard model is consistent with recent experimental findings. The assumptions are the following.

1. Dark nuclei X(3k,n), that is nuclear strings of the form Li(3,n), C(6,n), F(9,n), Mg(12,n), P(15,n), Ar(18,n), etc., form as fusions of Li strings. n = Z, Z+1 is the most plausible value of n. There is also 4He present, but as a noble gas it need not play an important role in the condensed matter phase (say interstellar dust). The presence of water necessitates that of Li(3,n) if one accepts the proposed model as such.

2. The resulting nuclei are in general stable against spontaneous fission by energy conservation. The binding energy of He(2,2) is however exceptionally high, so that alpha decay can occur in dark nuclear reactions between X(3k,n) nuclei, allowed by the considerable reduction of the Coulomb wall. The induced fissions X(3k,n) → X(3k-2,n-2) + He(2,2) produce nuclei with atomic number Z mod 3 = 1 such as Be(4,5), N(7,7), Ne(10,10), Al(13,14), S(16,16), K(19,20), ... Similar nuclear reactions make possible a further alpha decay of Z mod 3 = 1 nuclei to give nuclei with Z mod 3 = 2 such as B(5,6), O(8,8), Na(11,12), Si(14,14), Cl(17,18), Ca(20,20), ... so that the most stable isotopes of light nuclei could result in these fissions.

3. The dark nuclear fusions of already existing nuclei can also create nuclei heavier than Fe. Only the gradual decrease of the binding energy per nucleon for nuclei heavier than Fe poses restrictions on this process.
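The Z mod 3 bookkeeping of the proposed fission cascade can be checked with a small script (my own illustrative sketch; the neutron numbers use the simplest n = Z choice, not the text's detailed isotope assignments):

```python
# Illustrative sketch: the Z mod 3 classes of the fission cascade
# X(3k,n) -> X(3k-2,n-2) + He(2,2), applied twice to Z = 3k seed nuclei.
SYMBOLS = {3: "Li", 4: "Be", 5: "B", 6: "C", 7: "N", 8: "O", 9: "F", 10: "Ne",
           11: "Na", 12: "Mg", 13: "Al", 14: "Si", 15: "P", 16: "S", 17: "Cl",
           18: "Ar", 19: "K", 20: "Ca"}

def alpha_cascade(Z, N):
    """Follow two successive alpha emissions from a Z = 3k seed nucleus."""
    chain = [(Z, N)]
    for _ in range(2):            # each step removes He(2,2)
        Z, N = Z - 2, N - 2
        chain.append((Z, N))
    return chain

for Z0 in (9, 12, 15, 18):        # F, Mg, P, Ar seeds with n = Z (assumption)
    chain = alpha_cascade(Z0, Z0)
    # the three chain members fall into the classes Z mod 3 = 0, 1, 2
    assert [z % 3 for z, _ in chain] == [0, 1, 2]
    print(" -> ".join(f"{SYMBOLS[z]}({z},{n})" for z, n in chain))
```

The printed chains (e.g. F → N → B, Mg → Ne → O) reproduce the element sequences listed above.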

2. The abundances of nuclei in interstellar space should not depend on time

The basic prediction of the TGD inspired model is that the abundances of the nuclei in interstellar space should not depend on time if the rates are so high that an equilibrium situation is reached rapidly. The hbar increasing phase transformation of the nuclear space-time sheet determines the time scale in which equilibrium sets in. The standard model makes a different prediction: the abundances of the heavier nuclei should gradually increase as the nuclei are repeatedly re-processed in stars and blown out into interstellar space in supernova explosions.
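The claim that fast rates lead to time independent abundances can be illustrated with a toy rate equation (my own construction; the rate constants are arbitrary placeholders, not derived from the model):

```python
# Toy illustration: with fast production and destruction rates, an abundance
# fraction x relaxes to a fixed point independent of the initial composition,
# hence effectively time independent.
def relax(x0, k_prod=5.0, k_dest=2.0, dt=0.001, steps=20000):
    """Euler-integrate dx/dt = k_prod*(1 - x) - k_dest*x for abundance x."""
    x = x0
    for _ in range(steps):
        x += dt * (k_prod * (1.0 - x) - k_dest * x)
    return x

eq = 5.0 / (5.0 + 2.0)          # analytic fixed point, ~0.714
# very different initial abundances end up at the same equilibrium value
for x0 in (0.0, 0.3, 1.0):
    assert abs(relax(x0) - eq) < 1e-6
```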

Amazingly, there is empirical support for this highly non-trivial prediction [2]. Quite surprisingly, the 25 measured elemental abundances (elements up to Sn(50,70) (tin) and Pb(82,124) (lead)) of a 12 billion year old galaxy turned out to be very nearly the same as those for the Sun. For instance, the oxygen abundance was 1/3 of that estimated for the Sun. The standard model would predict that the abundances should be .01-.1 of those for the Sun, as measured for stars in our galaxy. The conjecture was that there must be some unknown law guaranteeing that the distribution of stars of various masses is time independent. The alternative conclusion would be that heavier elements are created mostly in interstellar gas and dust.

3. Could also "ordinary" nuclei consist of protons and negatively charged color bonds?

The model would strongly suggest that also ordinary stable nuclei consist of protons, with proton plus negatively charged color bond behaving effectively like a neutron. Note however that I have also considered the possibility that the neutron halo consists of protons connected by negatively charged color bonds to the main nucleus. The smaller mass of the proton would favor it as a fundamental building block of the nucleus, and negatively charged color bonds would be a natural manner to minimize Coulomb energy. The fact that the neutron does not suffer a beta decay to proton in the nuclear environment provided by stable nuclei would also find an explanation.

1. The ordinary shell model of the nucleus would make sense in length scales in which proton plus negatively charged color bond looks like a neutron.

2. The strictly nucleonic strong nuclear isospin does not vanish for ground state nuclei if all nucleons are protons. This assumption of the nuclear string model is crucial for quantum criticality, since it implies that binding energies are not changed in the scaling of hbar if the length of the color bonds is not changed. The quarks of the charged color bonds however give rise to a compensating strong isospin, and color bond plus proton behaves in a good approximation like a neutron.

3. Beta decays might pose a problem for this model. The electrons resulting in beta decays of this kind of nuclei, consisting of protons, should come from the beta decay of the d quark neutralizing the negatively charged color bond. The nuclei generated in high energy nuclear reactions would presumably contain genuine neutrons and suffer beta decays in which the d quark is a nucleonic quark. The question is how much the rates for these two kinds of beta decays differ and whether existing facts about beta decays could kill the model.

References

[1] C. L. Kervran (1972), Biological transmutations, and their applications in chemistry, physics, biology, ecology, medicine, nutrition, agriculture, geology, Swan House Publishing Co.

[2] J. Prochaska, J. C. Howk, A. M. Wolfe (2003), The elemental abundance pattern in a galaxy at z = 2.626, Nature 423, 57-59. See also Distant elements of surprise.

For details see the chapter Nuclear String Hypothesis.

### The work of Kanarev and Mizuno about cold fusion in electrolysis

The article of Kanarev and Mizuno [1] reports findings supporting the occurrence of cold fusion in NaOH and KOH electrolysis. The situation differs from standard cold fusion, where heavy water D2O is used instead of H2O.

1. One can understand the cold fusion reactions reported by Mizuno as nuclear reactions in which part of what I call dark proton string having negatively charged color bonds (essentially a zoomed up variant of ordinary nucleus with large Planck constant) suffers a phase transition to ordinary matter and experiences ordinary strong interactions with the nuclei at the cathode. In the simplest model the final state would contain only ordinary nuclear matter.

2. Negatively charged color bonds could correspond to pairs of quark and antiquark, or to pairs of color octet electron and antineutrino having mass of order 1 MeV. Also quantum superpositions of quark and lepton pairs can be considered. Note that TGD predicts that leptons can have colored excitations, and the production of neutral leptopions formed from them explains the anomalous production of electron-positron pairs associated with heavy ion collisions near the Coulomb wall.

3. The so-called H1.5O anomaly of [2] can be understood if 1/4 of the protons of water form dark lithium nuclei or heavier nuclei formed as sequences of these, just as ordinary nuclei are constructed as sequences of 4He and lighter nuclei in the nuclear string model. The results force one to consider the possibility that nuclear isotopes unstable as ordinary matter can be stable dark matter. In the formation of these sequences the negative electronic charge of hydrogen atoms goes naturally to the color bonds. The basic interaction would generate a charged quark pair (or a pair of color octet electron and antineutrino, or a quantum superposition of quark and lepton pairs) plus a color octet neutrino. By lepton number conservation each electron pair would give rise to a color singlet particle formed by two color octet neutrinos and defining the analog of a leptobaryon. The di-neutrino would leave the system unless it has a large enough mass. The neutrino mass scale .1 eV gives for the Compton time scale the estimate .1 attoseconds, which would suggest that di-neutrinos do not leak out. Recall that attosecond is the time scale in which H1.5O behavior prevails.

4. The data of Mizuno require that the protonic strings have a net charge of three units and, by em stability, neutral color bonds at the ends and negatively charged bonds in between. Dark variants of Li isotopes would be in question. The so-called lithium problem of cosmology (the observed abundance of lithium is by a factor 2.5 lower than predicted by standard cosmology [3]) can be resolved if lithium nuclei transform partially to dark lithium nuclei.

5. The biologically important ions K+, Cl-, and Ca++ appear at the cathode in plasma electrolysis and would be produced in cold nuclear reactions of the dark Li nuclei of water and Na+. This suggests that cold nuclear reactions occur also in living cells and produce metabolic energy. There exists evidence for nuclear transmutations in living matter [4]. In particular, Kervran claims that it is very difficult to understand where the Ca in egg shells comes from. The cell membrane would provide the extremely strong electric field perhaps creating the plasma needed for cold nuclear reactions, somewhat like in plasma electrolysis.

6. The model is consistent with the model for cold fusion of deuterium nuclei [5]. In this case the nuclear reactions would however occur on the "dark side". The absence of He from the reaction products can be understood if the D nuclei in the Pd target are transformed by weak interactions between D and Pd nuclei to their neutral counterparts analogous to di-neutrons. A neutral color bond could transform to a negatively charged one by the exchange of a W+ boson of a scaled version of weak interactions with the range of interaction given by the atomic length scale. Also the exchange of a charged ρ meson of a scaled-down variant of QCD could effect the same transformation. This interaction might be at work also for ordinary nuclei in condensed matter, and ordinary nuclei could contain protons and negatively charged color bonds instead of neutrons. The difference in mass would be very small since the quarks have masses of order MeV.
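The H1.5O arithmetic quoted above can be spelled out in one line (my own check of the fraction, under the assumption that dark protons are invisible to the scattering probes):

```python
# Back-of-envelope check: if a fraction f of water's protons is dark
# (invisible to the probes), water's effective formula becomes H_{2(1-f)}O.
def effective_hydrogens(f_dark):
    """Visible hydrogens per oxygen when a fraction f_dark of protons is dark."""
    return 2.0 * (1.0 - f_dark)

assert effective_hydrogens(0.25) == 1.5   # 1/4 dark protons -> H1.5O anomaly
```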

The model also leads to a new understanding of ordinary [6] and plasma electrolysis of water [7], and allows one to identify the hydrogen bond as a dark OH bond.

1. The model for plasma electrolysis relies on the observation of Kanarev that the energy of OH bonds in water is reduced from about 8 eV to a value around .5 eV, which corresponds to the fundamental metabolic energy quantum resulting in the dropping of a proton from the atomic k=137 space-time sheet, and also to a typical energy of hydrogen bond. This suggests the possibility that the hydrogen bond is actually a dark OH bond. From the 1/hbar proportionality of the perturbative contribution of the Coulomb energy to the bond one obtains that the dark bond energy scales as 1/hbar, so that a dark OH bond could be in question. In Kanarev's plasma electrolysis the temperature is between .5-1 eV, and thermal radiation could induce the production of 2H2+O2 via the splitting of the dark OH bonds. One could have hbar = 2^4 × hbar0. Also in ordinary electrolysis the OH bond energy is reduced by a factor of order 2, which suggests that in this case one has hbar = 2 × hbar0.

2. The transformation of OH bonds to their dark counterparts requires energy, and this energy would come from dark nuclear reactions. The liberated (dark) photons could kick protons from (dark) atomic space-time sheets to smaller space-time sheets, and remote metabolism would provide the energy for the transformation of the OH bond. The existence of dark hydrogen bonds with energies differing by integer scalings is predicted, and powers of 2 are favored. It is known that at least two kinds of hydrogen bonds whose energies differ by a factor 2 exist in ice [8].

3. In plasma electrolysis the increase of the input voltage implies a mysterious reduction of the electron current with a simultaneous increase of the size of the plasma region near the cathode. The electronic charge must go somewhere, and the natural place is the negatively charged color bonds connecting dark protons to dark lithium isotopes. The energy liberated in cold nuclear reactions would create plasma by ionizing hydrogen atoms, which in turn would generate more dark protons fused to dark lithium isotopes and increase the rate of energy production by dark nuclear reactions. This means a positive feedback loop analogous to that occurring in ordinary nuclear reactions.
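The 1/hbar scaling of the bond energy can be sketched as follows (my own check; E0 = 8 eV is the nominal OH bond energy quoted above):

```python
# Sketch, under the text's 1/hbar scaling assumption: a dark bond with
# hbar = r * hbar0 has its energy scaled down by 1/r, E_dark = E0 / r.
def dark_bond_energy(E0_eV, r):
    """Energy of a dark bond for Planck constant scaled by the integer r."""
    return E0_eV / r

E_OH = 8.0                                    # nominal OH bond energy in eV
assert dark_bond_energy(E_OH, 2**4) == 0.5    # Kanarev's ~.5 eV -> r = 2^4
assert dark_bond_energy(E_OH, 2) == 4.0       # ordinary electrolysis: factor ~2
```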

The model also explains the burning of salt water discovered by Kanzius [9] as a special case of plasma electrolysis, since the mechanism necessitates neither anode, cathode, nor electron current.

1. The temperature of the flame is estimated to be 1500 C. The temperature in water could be considerably higher, and 1500 C defines a very conservative estimate. Hydrolysis would be preceded by the transformation of OH bonds to hydrogen bonds, and dark nuclear reactions would provide the energy. Again a positive feedback loop should be created. Dark radio wave photons would transform to microwave photons and, together with the nuclear energy production, would keep the water at the temperature corresponding to the energy of .017 eV (for the conservative estimate, T = .17 eV in water), so that dark OH bonds would break down thermally.

2. For T = 1500 C the energy of the dark OH bond (hydrogen bond) would be very low, around .04 eV for hbar = 180 × hbar0 and the nominal 8 eV OH bond energy (this is not far from the energy assignable to the membrane resting potential), from the condition that the dark radio wave frequency 13.56 MHz corresponds to the microwave frequency needed to heat water by the rotational excitation of water molecules.

3. Visible light would result as dark protons drop from the k=165 space-time sheet to any larger space-time sheet, or from k=164 to the k=165 space-time sheet (2 eV radiation). 2 eV photons would explain the yellow color of the flame (not red as I have claimed earlier). The red light present in Kanarev's experiment can also be understood, since there is an entire series E(n) = E × (1 - 2^-n) of energies corresponding to transitions to space-time sheets with increasing p-adic length scale. For k=165, n < 6 corresponds to red or infrared light and n > 5 to yellow light.

4. There is no detectable or perceivable effect of the radio wave radiation on the hand. The explanation would be that dark hydrogen bonds in cellular water correspond to different values of Planck constant. One should of course check whether the effect is really absent.
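The series E(n) quoted above can be tabulated numerically (my own sketch; the 1240/E(eV) nm conversion is the standard photon energy to wavelength relation):

```python
# Sketch, based on the series quoted in the text: transition energies
# E(n) = E * (1 - 2^-n) with E = 2 eV, and the corresponding wavelengths.
E = 2.0                                    # eV, the k=164 -> k=165 transition

def E_n(n):
    return E * (1.0 - 2.0 ** (-n))

for n in range(1, 8):
    wavelength = 1240.0 / E_n(n)           # nm
    print(f"n={n}: E={E_n(n):.3f} eV, lambda={wavelength:.0f} nm")

assert E_n(1) == 1.0                       # first member: infrared
assert abs(E_n(6) - 1.96875) < 1e-12       # series approaches 2 eV from below
```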

For more details see the chapter Nuclear String Hypothesis.

References

[1] Ph. M. Kanarev and T. Mizuno (2002), Cold fusion by plasma electrolysis of water,
http://www.guns.connect.fi/innoplaza/energy/story/Kanarev/coldfusion/.

[2] M. Chaplin (2005), Water Structure and Behavior,
http://www.lsbu.ac.uk/water/index.html.
For 41 anomalies see http://www.lsbu.ac.uk/water/anmlies.html.
For the icosahedral clustering see http://www.lsbu.ac.uk/water/clusters.html.
J. K. Borchardt(2003), The chemical formula H2O - a misnomer, The Alchemist 8 Aug (2003).
R. A. Cowley (2004), Neutron-scattering experiments and quantum entanglement, Physica B 350 (2004) 243-245.
R. Moreh, R. C. Block, Y. Danon, and M. Neumann (2005), Search for anomalous scattering of keV neutrons from H2O-D2O mixtures, Phys. Rev. Lett. 94, 185301.

[3] C. Charbonnel and F. Primas (2005), The lithium content of the Galactic Halo stars.

[4] C. L. Kervran (1972), Biological transmutations, and their applications in chemistry, physics, biology, ecology, medicine, nutrition, agriculture, geology, Swan House Publishing Co.
P. Tompkins and C. Bird (1973), The secret life of plants, Harper and Row, New York.

[7] P. Kanarev (2002), Water is New Source of Energy, Krasnodar.

[8] J-C. Li and D. K. Ross (1993), Evidence of Two Kinds of Hydrogen Bonds in Ices, Nature 365, 327-329.

### Ultra high energy cosmic rays as super-canonical quanta?

Lubos tells about the announcement of the Pierre Auger Collaboration relating to ultrahigh energy cosmic rays. I glue below a popular summary of the findings.

Scientists of the Pierre Auger Collaboration announced today (8 Nov. 2007) that active galactic nuclei are the most likely candidate for the source of the highest-energy cosmic rays that hit Earth. Using the Pierre Auger Observatory in Argentina, the largest cosmic-ray observatory in the world, a team of scientists from 17 countries found that the sources of the highest-energy particles are not distributed uniformly across the sky. Instead, the Auger results link the origins of these mysterious particles to the locations of nearby galaxies that have active nuclei in their centers. The results appear in the Nov. 9 issue of the journal Science.

Active Galactic Nuclei (AGN) are thought to be powered by supermassive black holes that are devouring large amounts of matter. They have long been considered sites where high-energy particle production might take place. They swallow gas, dust and other matter from their host galaxies and spew out particles and energy. While most galaxies have black holes at their center, only a fraction of all galaxies have an AGN. The exact mechanism of how AGNs can accelerate particles to energies 100 million times higher than the most powerful particle accelerator on Earth is still a mystery.

About a million cosmic ray events have been recorded, and 80 of them correspond to particles with energy above the so-called GZK bound, which is .54 × 10^11 GeV. Electromagnetically interacting particles with these energies from distant galaxies should not be able to reach Earth. This is due to scattering from the photons of the microwave background. About 20 particles of this kind however come from the direction of distant active galactic nuclei, and the probability that this is an accident is about 1 per cent. Particles having only strong interactions would be in question. The problem is that this kind of particles are not predicted by the standard model (gluons are confined).

1. What does TGD say about the finding?

TGD provides an explanation for the new kind of particles.

1. The original TGD based model for the galactic nucleus is as a highly tangled cosmic string (in the TGD sense of course, see this). Much later it became clear that also the TGD based model for a black hole is as this kind of string like object near Hagedorn temperature (see this and this). Ultrahigh energy particles could result as decay products of a decaying split cosmic string in an extremely energetic galactic jet. A kind of cosmic firecracker would be in question. Originally I proposed this decay as an explanation for the gamma ray bursts. It seems that gamma ray bursts however come from thickened cosmic strings having weaker magnetic fields and much lower energy density (see this).

2. TGD predicts particles having only strong interactions (see this). I have christened these particles super-canonical quanta. These particles correspond to the vibrational degrees of freedom of the partonic 2-surface and are not visible at the quantum field theory limit, for which partonic 2-surfaces become points.

2. What are super-canonical quanta?

Super-canonical quanta are created by the elements of the super-canonical algebra, which creates quantum states besides those created by the super Kac-Moody algebra also present in the superstring model. Both algebras relate closely to the conformal invariance of light-like 3-surfaces.

1. The elements of the super-canonical algebra are in one-one correspondence with the Hamiltonians generating symplectic transformations of δM^4_+ × CP2. Note that the 3-D light-cone boundary is metrically 2-dimensional and possesses degenerate symplectic and Kähler structures, so that one can indeed speak about symplectic (canonical) transformations.

2. This algebra is the analog of a Kac-Moody algebra with the finite-dimensional Lie group replaced by the infinite-dimensional group of symplectic transformations (see this). This should give an idea of how gigantic a symmetry is in question. This is as it should be, since these symmetries act as the largest possible symmetry group for the Kähler geometry of the world of classical worlds (WCW) consisting of light-like 3-surfaces in the 8-D imbedding space for given values of zero modes (labelling the spaces in the union of infinite-dimensional symmetric spaces). This implies that for given values of zero modes all points of WCW are metrically equivalent: a generalization of the perfect cosmological principle making the theory calculable and guaranteeing that the WCW metric exists mathematically. Super-canonical generators correspond to gamma matrices of WCW and have the quantum numbers of the right handed neutrino (no electro-weak interactions). Note that a geometrization of fermionic statistics is achieved.

3. The Hamiltonians and super-Hamiltonians have only color and angular momentum quantum numbers and no electro-weak quantum numbers so that electro-weak interactions are absent. Super-canonical quanta however interact strongly.

3. Also hadrons contain super-canonical quanta

One can say that the TGD based model for the hadron is at the space-time level a kind of combination of QCD and the old-fashioned string model, forgotten when QCD came into fashion and then transformed into the highly unsuccessful but equally fashionable theory of everything.

1. At the quantum level the energy corresponding to the string tension, explaining about 70 per cent of the proton mass, corresponds to super-canonical quanta (see this). Super-canonical quanta allow one to understand hadron masses with a precision better than 1 per cent.

2. Super-canonical degrees of freedom allow also a solution of the spin puzzle of the proton: the average quark spin would be zero, since the same net angular momentum of the hadron can be obtained by coupling quarks of opposite spin to angular momentum eigenstates with different projections to the direction of the quantization axis.

3. If one considers a proton without valence quarks and gluons, one obtains a boson with mass very nearly equal to that of the proton (for the proton the super-canonical binding energy compensates the quark masses with high precision). This kind of pseudo proton might be created in high energy collisions when the space-time sheets carrying valence quarks and the super-canonical space-time sheet separate from each other. Super-canonical quanta might be produced in accelerators in this manner, and there is actually experimental support for this from HERA (see this).

4. The exotic particles could correspond to some p-adic copy of hadron physics predicted by TGD and have a very large mass, smaller however than the energy. Mersenne primes Mn = 2^n - 1 define excellent candidates for these copies. Ordinary hadrons correspond to M107. The protons of M31 hadron physics would have the mass of the proton scaled up by a factor 2^((107-31)/2) = 2^38 ≈ 2.6 × 10^11. The energy should be above 2.6 × 10^11 GeV, and is above .54 × 10^11 GeV for the particles above the GZK limit. Even super-canonical quanta associated with a proton of this kind could be in question. Note that CP2 mass corresponds roughly to about 10^14 proton masses.

5. Ideal black holes would be very long, highly tangled string like objects, scaled up hadrons, containing only super-canonical quanta. Hence it would not be surprising if they emitted super-canonical quanta. The transformation of supernovas to neutron stars and possibly black holes would involve the fusion of hadronic strings to longer strings and the eventual annihilation and evaporation of the ordinary matter, so that only super-canonical matter would remain eventually. A wide variety of intermediate states with different values of string tension would be possible, and the ultimate black hole would correspond to a highly tangled cosmic string. Dark matter would be in question in the sense that the Planck constant could be very large.
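The Mersenne mass scaling quoted in item 4 can be verified numerically (my own check, using the standard proton mass 0.938 GeV):

```python
# Numeric check of the Mersenne scaling quoted in the text: the M_31 proton
# mass is the ordinary (M_107) proton mass scaled by 2^((107-31)/2) = 2^38.
m_p = 0.938                       # proton mass in GeV (ordinary hadron physics)
factor = 2 ** ((107 - 31) // 2)   # = 2^38
m_p_31 = factor * m_p             # mass of the conjectured M_31 proton in GeV
E_GZK = 0.54e11                   # GZK bound in GeV, as quoted in the text

assert factor == 2 ** 38
assert abs(m_p_31 / 2.6e11 - 1.0) < 0.05   # ~2.6 x 10^11 GeV, as in the text
assert m_p_31 > E_GZK                      # heavier than the GZK-scale energies
```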

For more details see the chapter p-Adic Particle Massivation: New Physics.

### Does Higgs boson appear with two p-adic mass scales?

The p-adic mass scale of quarks is in the TGD Universe dynamical, and several mass scales appear already in low energy hadron mass formulas. Also neutrinos seem to correspond to several mass scales, and the large variation of the electron's effective mass in condensed matter might also be partially due to the variation of the p-adic mass scale. The values of the Higgs mass deduced from high precision electro-weak observables converge to two values differing by an order of magnitude (see this and this), and this raises the question whether also the Higgs mass scale could vary and depend on the experimental situation.

1. Higgs mass in standard model

In the standard model the Higgs and W boson masses are given by

mH^2 = 2λv^2 = μ^2 ,

mW^2 = g^2v^2/4 = [e^2/(8 sin^2(θW))] (μ^2/λ) .

This gives

λ = [π αem/(2 sin^2(θW))] (mH/mW)^2 .

In the standard model one cannot predict the value of mH.
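Assuming standard values for mW, αem and sin^2(θW), the relation above can be sketched as follows (my own illustration; the absolute normalization depends on the chosen value of sin^2(θW), but the λ ∝ mH^2 scaling does not):

```python
import math

# Sketch of the lambda(mH) relation, with assumed standard values
# mW = 80.4 GeV, alpha_em = 1/137.036, sin^2(theta_W) = 0.23.
ALPHA_EM = 1 / 137.036
SIN2_W = 0.23
M_W = 80.4  # GeV

def higgs_self_coupling(m_higgs):
    """lambda = [pi*alpha_em / (2 sin^2 theta_W)] * (mH/mW)^2."""
    return math.pi * ALPHA_EM / (2 * SIN2_W) * (m_higgs / M_W) ** 2

# lambda scales as mH^2: halving the Higgs mass quarters the coupling
ratio = higgs_self_coupling(727.3) / higgs_self_coupling(363.5)
assert abs(ratio - 4.0) < 0.02
```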

2. Higgs mass in TGD

In TGD framework one can try to understand Higgs mass from p-adic thermodynamics as resulting via the same mechanism as fermion masses so that the value of the parameter λ would follow as a prediction.

One must assume that the p-adic temperature equals Tp = 1. The natural assumption is that Higgs can be regarded as a superposition of pairs of fermion and anti-fermion at opposite throats of a wormhole contact. With these assumptions the thermal expectation of the Higgs conformal weight is just the sum of the contributions from both throats, that is two times the average of the conformal weight over quarks and leptons:

sH = 2 × <s> = 2 × [∑q sq + ∑L sL]/(Nq + NL)

= 2 ∑g=0..2 smod(g)/3 + (sL + sνL + sU + sD)/2

= 26 + (5 + 4 + 5 + 8)/2 = 37 .

1. The first term, two times the average of the genus dependent modular contribution to the conformal weight, equals 26. It comes from the modular degrees of freedom and does not depend on the charge of the fermion.

2. The contribution of p-adic thermodynamics for super-conformal generators is the same for all fermion families and depends on the em charge of the fermion. The values of the thermal conformal weights deduced earlier have been used. Note that only the value sνL = 4 (also sνL = 5 could be considered) is possible if one requires that the conformal weight is an integer. If the standard form of the canonical identification mapping p-adics to reals is used, this must be the case, since otherwise the real mass would be super-heavy.

3. To what p-adic mass scale does Higgs correspond?

The first guess would be that the p-adic length scale associated with the Higgs boson is M89. A second option is p ≈ 2^k, k=97 (restricting k to be prime). If one allows k to be non-prime (these values of k are also realized), one can also consider k = 91 = 7 × 13. By scaling from the expression for the electron mass, one obtains the estimates

mH(89) ≈ (37/5)^(1/2) × 2^19 × me ≈ 727.3 GeV ,
mH(91) ≈ (37/5)^(1/2) × 2^18 × me ≈ 363.5 GeV ,
mH(97) ≈ (37/5)^(1/2) × 2^15 × me ≈ 45.5 GeV .
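The three estimates can be checked against the scaling formula (my own sketch; me = 0.51 MeV, the Higgs conformal weight 37, the electron's 5 and the exponent (127-k)/2 are the quantities quoted in the text):

```python
# Numeric check of the p-adic scaling mH(k) = sqrt(37/5) * 2^((127-k)/2) * me,
# where 127 is the electron's p-adic prime index, 37 the Higgs conformal
# weight and 5 the electron's.
me = 0.51e-3  # electron mass in GeV

def m_higgs(k):
    return (37.0 / 5.0) ** 0.5 * 2 ** ((127 - k) / 2) * me

assert abs(m_higgs(89) - 727.3) < 1.0   # GeV
assert abs(m_higgs(91) - 363.5) < 0.5
assert abs(m_higgs(97) - 45.5) < 0.1
```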

A couple of comments are in order.

1. From the article of Giudice one learns that the latest estimates for the Higgs mass give two widely different values, namely mH = 31 (+33/-19) GeV and mH = 420 (+420/-190) GeV. Since the p-adic mass scale of both neutrinos and quarks, and possibly even the electron, can vary in the TGD framework, one cannot avoid the question whether, depending on the experimental situation, Higgs could appear in two different mass scales corresponding to k=91 and k=97.

2. The low value of mH(97) might be consistent with experimental facts, since the couplings of fermions to Higgs can in the TGD framework be weaker than in the standard model because the Higgs expectation does not contribute to fermion masses.

4. Unitarity bound and Higgs mass

The value of λ is given in the three cases by

λ(89) ≈ 4.41 ,
λ(91) ≈ 1.10 ,
λ(97) ≈ .2757 .

Unitarity would thus favor k=97 and k=91, which are also favored by the high precision data, and k=91 is just at the unitarity bound λ=1 (here I am perhaps naive!). A possible interpretation is that for M89 the Higgs mass forces λ to break the unitarity bound and that this corresponds to the emergence of the M89 copy of hadron physics.

For more details see the chapter Massless particles and particle massivation.

### Connes tensor product and perturbative expansion in terms of generalized braid diagrams

Many steps of progress have occurred in TGD lately.

1. In a given measurement resolution characterized by an inclusion of HFFs of type II1, the Connes tensor product defines an almost universal M-matrix, apart from the non-uniqueness due to the fact that one has a direct sum of hyper-finite factors of type II1 (a sum over conformal weights at least) and the fact that the included algebra defining the measurement resolution can be represented in a reducible manner. The S-matrices associated with irreducible factors would be unique in a given measurement resolution, and the non-uniqueness would make possible non-trivial density matrices and thermodynamics.

2. The Higgs vacuum expectation is proportional to the generalized position dependent eigenvalue of the modified Dirac operator, and its minima define naturally number theoretical braids as orbits for the minima of the universal Higgs potential: fusion and decay of braid strands emerge naturally. Thus the old speculation about a generalization of braid diagrams to Feynman diagram like objects, which I had already begun to think was too crazy to be true, finds a very natural realization.

In the previous posting I explained how generalized braid diagrams emerge naturally as orbits of the minima of Higgs defined as a generalized eigenvalue of the modified Dirac operator.

The association of generalized braid diagrams to the incoming and outgoing 3-D partonic legs, and possibly also to the vertices, of generalized Feynman diagrams forces one to ask whether the generalized braid diagrams could give rise to a counterpart of the perturbation theoretical formalism via the functional integral over configuration space degrees of freedom.

The question is how the functional integral over configuration space degrees of freedom relates to the generalized braid diagrams. The basic conjecture motivated also number theoretically is that radiative corrections in this sense sum up to zero for critical values of Kähler coupling strength and Kähler function codes radiative corrections to classical physics via the dependence of the scale of M4 metric on Planck constant. Cancellation occurs only for critical values of Kähler coupling strength αK: for general values of αK cancellation would require separate vanishing of each term in the sum and does not occur.

The natural guess is that finite measurement resolution in the sense of Connes tensor product can be described as a cutoff to the number of generalized braid diagrams. Suppose that the cutoff due to the finite measurement resolution can be described in terms of inclusions and M-matrix can be expressed as a Connes tensor product. Suppose that the improvement of the measurement resolution means the introduction of zero energy states and corresponding light-like 3-surfaces in shorter time scales bringing in increasingly complex 3-topologies.

This would mean the following.

1. One would not have a perturbation theory around a given maximum of Kähler function but rather a sum over increasingly complex maxima of Kähler function. Radiative corrections in the sense of a perturbative functional integral around a given maximum would vanish (so that the expansion in terms of braid topologies would not make sense around a single maximum). Radiative corrections would not vanish in the sense of a sum over 3-topologies obtained by adding radiative corrections as zero energy states in shorter time scales.

2. Connes tensor product with a given measurement resolution would correspond to a restriction on the number of maxima of Kähler function labelled by the braid diagrams. For zero energy states in a given time scale the maxima of Kähler function could be assigned to braids of minimal complexity with braid vertices interpreted in terms of an addition of radiative corrections. Hence a connection with QFT type Feynman diagram expansion would be obtained and the Connes tensor product would have a practical computational realization.

3. The cutoff in the number of topologies (maxima of Kähler function contributing in a given resolution defining Connes tensor product) would be always finite in accordance with the algebraic universality.

4. The time scale resolution defined by the temporal distance between the tips of the causal diamond defined by the future and past light-cones applies to the addition of zero energy sub-states, and one obtains a direct connection with the p-adic length scale evolution of coupling constants since the time scales in question naturally come as negative powers of two. More precisely, p-adic primes near powers of two are very natural since the coupling constant evolution comes in powers of two of the fundamental 2-adic length scale.

There are still some questions. Radiative corrections around given 3-topology vanish. Could radiative corrections sum up to zero in an ideal measurement resolution also in 2-D sense so that the initial and final partonic 2-surfaces associated with a partonic 3-surface of minimal complexity would determine the outcome completely? Could the 3-surface of minimal complexity correspond to a trivial diagram so that free theory would result in accordance with asymptotic freedom as measurement resolution becomes ideal?

The answer to these questions seems to be 'No'. In the p-adic sense the ideal limit would correspond to the limit p→ 0 and since only p→ 2 is possible in the discrete length scale evolution defined by primes, the limit is not a free theory. This conforms with the view that CP2 length scale defines the ultimate UV cutoff.

For more details see the chapter Massless Particles and Particle Massivation.

### Number theoretic braids and global view about anti-commutations of induced spinor fields

The anti-commutations of induced spinor fields are reasonably well understood locally. The basic objects are 3-dimensional light-like 3-surfaces. These surfaces can however be seen as random light-like orbits of partonic 2-surfaces, which would thus seem to take the role of fundamental dynamical objects. Conformal invariance in turn seems to reduce the 2-D partons to 1-D objects, and number theoretical braids in turn discretize the strings. It also seems that the strands of a number theoretic braid can in turn be discretized by considering the minima of the Higgs potential in 3-D sense.

Somehow these apparently contradictory views should be unifiable in a more global view about the situation allowing to understand the reduction of effective dimension of the system as one goes to short scales. The notions of measurement resolution and number theoretic braid indeed provide the needed insights in this respect.

1. Anti-commutations of the induced spinor fields and number theoretical braids

The understanding of the number theoretic braids in terms of Higgs minima and maxima allows to gain a global view about anti-commutations. The coordinate patches inside which the Higgs modulus is a monotonically increasing function define a division of the partonic 2-surfaces X2t = X3l ∩ δM4+/-,t into 2-D patches as a function of the time coordinate of X3l as the light-cone boundary is shifted in the preferred time direction defined by the quantum critical sub-manifold M2× CP2. This induces a similar division of the light-like 3-surfaces X3l into 3-D patches, and there is a close analogy with the dynamics of an ordinary 2-D landscape.

In both the 2-D and 3-D case one can ask what happens at the common boundaries of the patches. Do the induced spinor fields associated with different patches anti-commute so that they would represent independent dynamical degrees of freedom? This seems to be a natural assumption in both cases and corresponds to the idea that the basic objects are 2- resp. 3-dimensional in the resolution considered, but in a discretized sense due to the finite measurement resolution, which is coded by the patch structure of X3l. A dimensional hierarchy results, with the effective dimension of the basic objects increasing as the resolution scale increases when one proceeds from braids to the level of X3l.

If the induced spinor fields associated with different patches anti-commute, the patches indeed define independent fermionic degrees of freedom at braid points and one has effective 2-dimensionality in a discrete sense. In this picture the fundamental stringy curves for X2t correspond to the boundaries of the 2-D patches, and the anti-commutation relations for the induced spinor fields can be formulated at these curves. Formally, the conformal time evolution scales down the boundaries of these patches. If anti-commutativity holds true at the boundaries of the patches for spinor fields of neighboring patches, the patches would indeed represent independent degrees of freedom at the stringy level.

The cutoff in the transversal degrees of freedom for the induced spinor fields means a cutoff n ≤ nmax for the conformal weight assignable to the holomorphic dependence of the induced spinor field on the complex coordinate. The dropping of higher conformal weights should imply the loss of the anti-commutativity of the induced spinor field and its conjugate except at the points of the number theoretical braid. Thus the number theoretic braid should code for the value of nmax: the naive expectation is that for a given stringy curve the number of braid points equals nmax.

2. The decomposition into 3-D patches and QFT description of particle reactions at the level of number theoretic braids

What is the physical meaning of the decomposition of the 3-D light-like surface to patches? It would be very desirable to keep the picture in which the number theoretic braid connects the incoming positive/negative energy state to the partonic 2-surfaces defining reaction vertices. This is not obvious if X3l decomposes into causally independent patches. One can however argue that although each patch can define its own fermion state, it has vanishing net quantum numbers in zero energy ontology, and can be interpreted as an intermediate virtual state for the evolution of the incoming/outgoing partonic state.

Another problem - actually only an apparent problem - has been whether it is possible to have a generalization of the braid dynamics able to describe particle reactions in terms of the fusion and decay of braid strands. For some strange reason I had not realized that number theoretic braids naturally allow fusion and decay. Indeed, the cusp catastrophe is a canonical representation for the fusion process: the cusp region contains two minima (plus a maximum between them) while the complement of the cusp region contains a single minimum. The crucial control parameter of the cusp catastrophe corresponds to the time parameter of X3l. More concretely, two valleys with a mountain between them fuse to form a single valley as the two real roots of a polynomial become complex conjugate roots. The continuation of the light-like surface to a slicing of X4 by light-like 3-surfaces would give the full cusp catastrophe.

In the catastrophe theoretic setting the time parameter of X3l appears as a control variable on which the roots of the polynomial equation defining minimum of Higgs depend: the dependence would be given by a rational function with rational coefficients.

This picture means that particle reactions occur at several levels which brings in mind a kind of universal mimicry inspired by Universe as a Universal Computer hypothesis. Particle reactions in QFT sense correspond to the reactions for the number theoretic braids inside partons. This level seems to be the simplest one to describe mathematically. At parton level particle reactions correspond to generalized Feynman diagrams obtained by gluing partonic 3-surfaces along their ends at vertices. Particle reactions are realized also at the level of 4-D space-time surfaces. One might hope that this multiple realization could code the dynamics already at the simple level of single partonic 3-surface.

3. About 3-D minima of Higgs potential

The dominating contribution to the modulus of the Higgs field comes from the δM4+/- distance to the axis R+ defining the quantization axis. Hence in scales much larger than the CP2 size the geometric picture is quite simple. The orbit for the 2-D minimum of Higgs corresponds to a particle moving in the vicinity of R+, and the minimal distances from R+ would certainly give a contribution to the Dirac determinant. Of course also the motion in CP2 degrees of freedom can generate local minima, and if this motion is very complex, one expects a large number of minima with almost the same modulus of eigenvalues coding a lot of information about X3l.

It would seem that only the most essential information about surface is coded: the knowledge of minima and maxima of height function indeed provides the most important general coordinate invariant information about landscape. In the rational category where X3l can be characterized by a finite set of rational numbers, this might be enough to deduce the representation of the surface.

What if the situation is stationary in the sense that the minimum value of Higgs remains constant for some time interval? Formally the Dirac determinant would become a continuous product having an infinite value. This can be avoided by assuming that the contribution of a continuous range with fixed value of Higgs minimum is given by the contribution of its initial point: this is natural if one thinks the situation information theoretically. Physical intuition suggests that the minima remain constant for the maxima of Kähler function so that the initial partonic 2-surface would determine the entire contribution to the Dirac determinant.

For more details see the chapter Massless states and Particle Massivation.

### Fractional Quantum Hall effect in TGD framework

The generalization of the imbedding space discussed in the previous posting allows to understand the fractional quantum Hall effect (see this and this).

The formula for the quantized Hall conductance is given by

σ = ν × e2/h, ν = m/n.

Series of fractions ν = 1/3, 2/5, 3/7, 4/9, 5/11, 6/13, 7/15, ..., 2/3, 3/5, 4/7, 5/9, 6/11, 7/13, ..., 5/3, 8/5, 11/7, 14/9, ..., 4/3, 7/5, 10/7, 13/9, ..., 1/5, 2/9, 3/13, ..., 2/7, 3/11, ..., 1/7, ... with odd denominators have been observed, as have also the ν=1/2 and ν=5/2 states with an even denominator.

The model of Laughlin [Laughlin] cannot explain all aspects of FQHE. The best existing model proposed originally by Jain [Jain] is based on composite fermions resulting as bound states of electron and even number of magnetic flux quanta. Electrons remain integer charged but due to the effective magnetic field electrons appear to have fractional charges. Composite fermion picture predicts all the observed fractions and also their relative intensities and the order in which they appear as the quality of sample improves.
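The composite fermion counting can be sketched numerically. Below is a minimal Python check, assuming the standard Jain form ν = n/(2pn ± 1) for an electron bound to 2p magnetic flux quanta; the cutoffs max_n and max_p are illustrative choices, not values from the text:

```python
from fractions import Fraction

def jain_fractions(max_n=7, max_p=3):
    """Enumerate filling fractions nu = n/(2p*n +/- 1) of the
    composite-fermion model (electron bound to 2p flux quanta)."""
    nus = set()
    for p in range(1, max_p + 1):
        for n in range(1, max_n + 1):
            for sign in (+1, -1):
                denom = 2 * p * n + sign
                if denom > 0:
                    nus.add(Fraction(n, denom))
    return nus

nus = jain_fractions()

# The main observed series 1/3, 2/5, 3/7, ... and 2/3, 3/5, ... appear:
for f in [Fraction(1, 3), Fraction(2, 5), Fraction(3, 7), Fraction(2, 3),
          Fraction(3, 5), Fraction(1, 5), Fraction(2, 9), Fraction(2, 7)]:
    assert f in nus

# All Jain fractions have odd denominators, since 2pn +/- 1 is odd;
# the even-denominator states 1/2 and 5/2 fall outside this construction.
assert all(f.denominator % 2 == 1 for f in nus)
assert Fraction(1, 2) not in nus
```

The assertions make explicit the point of the paragraph: the construction reproduces the odd-denominator hierarchy automatically, while ν=1/2 and ν=5/2 need a separate explanation.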

I have considered earlier a possible TGD based model of FQHE not involving hierarchy of Planck constants. The generalization of the notion of imbedding space suggests the interpretation of these states in terms of fractionized charge and electron number.

1. The easiest manner to understand the observed fractions is by assuming that both M4 and CP2 correspond to covering spaces so that spin, electric charge, and fermion number are fractionally quantized. With this assumption the expression for the Planck constant becomes hbar/hbar0 = nb/na and the charge and spin units are equal to 1/nb and 1/na respectively. This gives ν = n×na/nb2. The values n=2,3,5,7,... are observed. Planck constant can have arbitrarily large values. There are general arguments stating that also spin is fractionized in FQHE, and for na=knb, required by the observed values of ν, charge fractionization occurs in units of k/nb and forces also spin fractionization. For the factor space option in M4 degrees of freedom one would have ν = n/nanb2.

2. The appearance of nb=2 would suggest that also Z2 appears as the homotopy group of the covering space: filling fraction 1/2 corresponds, both in the composite fermion model and experimentally, to the limit of zero effective magnetic field [Jain]. Also ν=5/2 has been observed.

3. A possible problematic aspect of the TGD based model is the experimental absence of even values of nb except nb=2. A possible explanation is that by some symmetry condition, possibly related to fermionic statistics, kn/nb must reduce to a rational with an odd denominator for nb>2. In other words, one has k ∝ 2r, where 2r is the largest power of 2 dividing nb (and smaller than nb).

4. Large values of nb emerge as B increases. This can be understood from flux quantization. One has eBS = nhbar = n(nb/na)hbar0. The interpretation is that each of the nb sheets contributes n/na units to the flux. As nb increases, also the flux increases for a fixed value of na and area S: note that the magnetic field strength remains more or less constant, so that a kind of saturation effect for the magnetic field strength would be in question. For na=knb one obtains eBS/hbar0 = n/k so that a fractionization of the magnetic flux results and each sheet contributes 1/knb units to the flux. ν=1/2 corresponds to k=1, nb=2 and to a non-vanishing magnetic flux, unlike in the composite fermion model.

5. The understanding of the thermal stability is not trivial. The original FQHE was observed at 80 mK temperature, corresponding roughly to a thermal energy of T ≈ 10-5 eV. For graphene the effect is observed at room temperature. Cyclotron energy for electron (from fe = 6×105 Hz at B = .2 Gauss) is of the order of thermal energy at room temperature in a magnetic field varying in the range 1-10 Tesla. This raises the question why the original FQHE requires so low a temperature. The magnetic energy of a flux tube of length L is by flux quantization roughly e2B2S ≈ Ec(e)meL (hbar0=c=1) and exceeds the cyclotron energy roughly by the factor L/Le, where Le is the electron Compton length, so that the thermal stability of magnetic flux quanta is not the explanation.

A possible explanation is that since FQHE involves several values of Planck constant, it is quantum critical phenomenon and is characterized by a critical temperature. The differences of the energies associated with the phase with ordinary Planck constant and phases with different Planck constant would characterize the transition temperature. Saturation of magnetic field strength would be energetically favored.
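The order-of-magnitude figures quoted above are easy to verify; a minimal sketch using textbook constants (the constant values are standard, not taken from the text):

```python
import math

k_B = 8.617e-5    # Boltzmann constant, eV/K
e   = 1.602e-19   # elementary charge, C
m_e = 9.109e-31   # electron mass, kg

# Thermal energy at the 80 mK of the original FQHE experiments:
# roughly 7e-6 eV, i.e. of order 1e-5 eV as stated in the text.
E_thermal = k_B * 0.080  # eV

# Electron cyclotron frequency f_c = eB/(2*pi*m_e) at B = 0.2 gauss:
# about 5.6e5 Hz, matching the text's figure of order 6e5 Hz.
B = 0.2e-4  # tesla
f_c = e * B / (2 * math.pi * m_e)

print(f"E_thermal ~ {E_thermal:.1e} eV")
print(f"f_c ~ {f_c:.1e} Hz")
```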

References

[Laughlin] R. B. Laughlin (1983), Phys. Rev. Lett. 50, 1395.
[Jain] J. K. Jain (1989), Phys. Rev. Lett. 63, 199.

For more details see the chapter Dark Nuclear Physics and Condensed Matter.

### Could one demonstrate the existence of large Planck constant photons using ordinary camera or even bare eyes?

If ordinary light sources generate also dark photons with the same energy but with scaled-up wavelength, this might have effects detectable with a camera and even with bare eyes. In the following I consider, in a rather light-hearted and speculative spirit, two possible effects of this kind appearing both in visual perception and in photos. For crackpotters possibly present in the audience I want to make clear that I love to play with ideas to see whether they work or not, and that I am ready to accept some convincing mundane explanation of these effects; I would be happy to hear about this kind of explanations. I was not able to find any such explanation from Wikipedia using words like camera, digital camera, lens, aberrations, ...

Why light from an intense light source seems to decompose into rays?

If one also assumes that ordinary radiation fields decompose in the TGD Universe into topological light rays ("massless extremals", MEs), even stronger predictions follow. If the Planck constant equals hbar = q×hbar0, q = na/nb, MEs should possess Zna as an exact discrete symmetry group acting as rotations along the direction of propagation for the induced gauge fields inside the ME.

The structure of MEs should somehow realize this symmetry, and one possibility is that MEs have a wheel-like structure decomposing into radial spokes with angular distance Δφ = 2π/na related by the symmetries in question. This brings strongly in mind a phenomenon which everyone can observe anytime: the light from a bright source decomposes into radial rays as if one were seeing the profile of the light rays emitted in a plane orthogonal to the line connecting the eye and the light source. The effect is especially strong if the eyes are stirred.

Could this apparent decomposition into light rays reflect directly the structure of dark MEs, and could one deduce the value of na by just counting the number of rays in a camera picture, where the phenomenon turned out to be visible as well? Note that the size of these wheel-like MEs would be macroscopic and diffractive effects do not seem to be involved. The simplest assumption is that most of the photons giving rise to the wheel-like appearance are transformed to ordinary photons before their detection.

The discussions about this led to a little experimentation with camera at the summer cottage of my friend Samppa Pentikäinen, quite a magician in technical affairs. When I mentioned the decomposition of light from an intense light source to rays at the level of visual percept and wondered whether the same occurs also in camera, Samppa decided to take photos with a digi camera directed to Sun. The effect occurred also in this case and might correspond to decomposition to MEs with various values of na but with same quantization axis so that the effect is not smoothed out.

What was interesting was the presence of some stronger almost vertical "rays" located symmetrically near the vertical axis of the camera. The shutter mechanism determining the exposure time is based on the opening of the first shutter followed by closing a second shutter after the exposure time so that every point of sensor receives input for equally long time. The area of the region determining input is bounded by a vertical line. If macroscopic MEs are involved, the contribution of vertical rays is either nothing or all unlike that of other rays and this might somehow explain why their contribution is enhanced.

Addition: I learned from Samppa that the shutter mechanism is unnecessary in digi cameras since the time for the reset of sensors is what matters. Something in the geometry of the camera or in the reset mechanism must select the vertical direction in a preferred position. For instance, the outer "aperture" of the camera had the geometry of a flattened square.

Anomalous diffraction of dark photons

Second prediction is the possibility of diffractive effects in length scales where they should not occur. A good example is the diffraction of light coming from a small aperture of radius d. The diffraction pattern is determined by the Bessel function

J1(x), x = kd sin(θ), k = 2π/λ.

There is a strong light spot in the center with light rings around it whose radii increase as the distance of the screen from the aperture increases. Dark rings correspond to the zeros of J1(x) at x=xn, and the following scaling law for the nodes holds true

sin(θn) = xnλ/(2πd).

For very small wavelengths the central spot is almost pointlike and contains most light intensity.

If photons of visible light correspond to large Planck constant hbar = q×hbar0 and are transformed to ordinary photons in the detector (say camera film or eye), their wavelength is scaled by q and one has

sin(θn)→ q× sin(θn)

The size of the diffraction pattern for visible light is scaled up by q.
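The scaling law can be turned into a small numerical sketch using the first zero x1 ≈ 3.832 of J1. The aperture radius, screen distance, and the value of q below are illustrative assumptions, not values from the text; note also that for large enough q the angle saturates, since sin(θ) cannot exceed 1:

```python
import math

X1 = 3.8317  # first zero of the Bessel function J1

def first_dark_ring_radius(wavelength, d, r, q=1):
    """Radius of the first dark ring on a screen at distance r from a
    circular aperture of radius d, using sin(theta_n) = x_n*lambda/(2*pi*d);
    q scales the effective wavelength (q = 1 for ordinary photons)."""
    sin_theta = q * X1 * wavelength / (2 * math.pi * d)
    if sin_theta >= 1.0:
        raise ValueError("diffraction angle saturates for this q")
    return r * sin_theta

# Illustrative numbers: 1 micron light, 1 mm aperture, screen at r = 2 cm.
R1 = first_dark_ring_radius(1e-6, 1e-3, 2e-2)          # ordinary photons
R_dark = first_dark_ring_radius(1e-6, 1e-3, 2e-2, q=2**10)

# The pattern scales linearly with q, as stated in the text.
assert abs(R_dark / R1 - 2**10) < 1e-6
```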

This effect might make it possible to detect dark photons with energies of visible photons and possibly present in the ordinary light.

1. What is needed is an intense light source, and Sun is an excellent candidate in this respect. A dark photon beam is also needed, and n dark photons with a given visible wavelength λ could result when a dark photon with hbar = n×q×hbar0 decays to n dark photons with the same wavelength but smaller Planck constant hbar = q×hbar0. If this beam enters the camera or eye, one has a beam of n dark photons which forms a diffraction pattern producing the camera picture in the decoherence to ordinary photons.

2. In the case of an aperture with a geometry of a circular hole, the first dark ring for ordinary visible photons would be at sin(θ) ≈ (π/36)λ/d. For a distance of r = 2 cm between the sensor plane ("film") and the effective circular hole this would mean a radius of R ≈ r×sin(θ) ≈ 1.7 micrometers for micron wavelength. The actual size of the spots is of order R ≈ 1 mm, so that the value of q would be around 1000: q=210 and q=211 belong to the favored values for q.

3. One can imagine also an alternative situation. If the photons responsible for the spot arrive along a single ME, the transversal thickness R of the ME is smaller than the radius of the hole, say of the order of the wavelength; the ME itself then effectively defines the hole with radius R, and the value of sin(θn) does not depend on the value of d for d>R. Even ordinary photons arriving along MEs of this kind could give rise to an anomalous diffraction pattern. Note that the transversal thickness of an ME need not be fixed, however; it seems that the MEs are now macroscopic.

4. A similar effect results as one looks at an intense light source: bright spots appear in the visual field as one closes the eyes. If there is some more mundane explanation (I do not doubt this!), it must apply in both cases and explain also why the spots have precisely defined color rather than being white.

5. The only mention I could find of diffractive aberration effects concerns colored rings around, say, a disk-like object, analogous to the colors around the shadow of such an object. The radii of these diffraction rings scale like the wavelength and the distance from the object.

The experimentation of Samppa using a digi camera demonstrated the appearance of colored spots in the pictures. If I have understood correctly, the sensors defining the pixels of the picture are in the focal plane, and diffraction for large Planck constant might explain the phenomenon. Since I did not have the idea about the diffractive mechanism in mind, I did not check whether fainter colored rings might surround the bright spot. In any case, the readily testable prediction is that zooming to a bright light source by reducing the size of the aperture should increase the size and number of the colored spots. As a matter of fact, experimentation demonstrated that focusing brought in a large number of these spots, but we did not check whether the size was increased.

For details see the chapter Dark Nuclear Physics and Condensed Matter.

### Burning salt water with radio waves and large Planck constant

This morning my friend Samuli Penttinen sent an email telling about a strange discovery by engineer John Kanzius: salt water in a test tube irradiated by radiowaves at harmonics of a frequency f=13.56 MHz burns. Temperatures of about 1500 K, which correspond to .15 eV energy, have been reported. You can irradiate also your hand but nothing happens. The original discovery of Kanzius was the finding that radio waves could be used to cure cancer by destroying the cancer cells. The proposal is that this effect might provide a new energy source by liberating chemical energy in an exceptionally effective manner. The power is about 200 W, so that the power used could explain the effect if it is absorbed in a resonance-like manner by the salt water.

The energies of photons involved are very small, multiples of 5.6× 10-8 eV and their effect should be very small since it is difficult to imagine what resonant molecular transition could cause the effect. This leads to the question whether the radio wave beam could contain a considerable fraction of dark photons for which Planck constant is larger so that the energy of photons is much larger. The underlying mechanism would be phase transition of dark photons with large Planck constant to ordinary photons with shorter wavelength coupling resonantly to some molecular degrees of freedom and inducing the heating. Microwave oven of course comes in mind immediately.

1. The fact that the effects occur at harmonics of the fundamental frequency suggests that rotational states of molecules are in question as in microwave heating. Since the presence of salt is essential, the first candidate for the molecule in question is NaCl but also HCl can be considered. The basic formula for the rotational energies is

E(l) = E0×l(l+1), E0 = hbar2/2μR2, μ = m1m2/(m1+m2).

Here R is molecular radius which by definition is deduced from the rotational energy spectrum. The energy inducing transition l→l+1 is ΔE(l)= 2E0×(l+1).

2. By going to Wikipedia, one can find molecular radii of heteronuclear di-atomic molecules such as NaCl and homonuclear di-atomic molecules such as H2. Using E0(H2)=8.0×10-3 eV one obtains by scaling

E0(NaCl) = E0(H2) × (μ(H2)/μ(NaCl)) × (R(H2)/R(NaCl))2.

The atomic weights are A(H)=1, A(Na)=23, A(Cl)=35.

3. A little calculation gives f(NaCl) = 2E0/h = 14.08 GHz. The ratio to the radiowave frequency is f(NaCl)/f = 1.0386×103, to be compared with hbar/hbar0 = 210 = 1.024×103. The discrepancy is about 1 per cent.
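The scaling estimate above can be reproduced numerically. The bond lengths below are standard tabulated values inserted here as assumptions, so the result lands at 13-14 GHz depending on the radii used, within a few per cent of 2^10 × 13.56 MHz:

```python
# Lowest rotational quantum of NaCl scaled from that of H2 using
# E0 = hbar^2/(2*mu*R^2), i.e.
# E0(NaCl) = E0(H2) * (mu(H2)/mu(NaCl)) * (R(H2)/R(NaCl))^2.
h_eV = 4.1357e-15  # Planck constant, eV*s

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

E0_H2 = 8.0e-3                      # eV, value used in the text
mu_H2 = reduced_mass(1, 1)          # reduced masses in atomic mass units
mu_NaCl = reduced_mass(23, 35)
R_H2, R_NaCl = 0.741, 2.361         # bond lengths in Angstrom (assumed values)

E0_NaCl = E0_H2 * (mu_H2 / mu_NaCl) * (R_H2 / R_NaCl) ** 2
f_NaCl = 2 * E0_NaCl / h_eV         # l=0 -> l=1 transition frequency, Hz
ratio = f_NaCl / 13.56e6            # compare with the radiowave frequency

print(f"f(NaCl) ~ {f_NaCl / 1e9:.1f} GHz, ratio ~ {ratio:.0f} (2^10 = 1024)")
```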

Thus dark radiowave photons could induce a rotational microwave heating of the sample and the effect could be seen as an additional dramatic support for the hierarchy of Planck constants. There are several questions to be answered.

1. Does this effect occur also for solutions of other molecules and for other solvents than water? This can be tested since the rotational spectra are readily calculable from data which can be found on the net.

2. Are the radiowave photons dark, or does water - which is a very special kind of liquid - induce the transformation of ordinary radiowave photons to dark photons by fusing 210 radiowave massless extremals (MEs) to a single ME? Does this transformation occur for all frequencies? This kind of transformation might play a key role in transforming ordinary EEG photons to dark photons and partially explain the special role of water in living systems.

3. Why does the radiation not induce spontaneous combustion of living matter, which contains salt? And why do cancer cells seem to burn: is the salt concentration higher inside them? As a matter of fact, there are reports about spontaneous human combustion. One might hope that there is a mechanism inhibiting this, since otherwise the military would soon be developing new horror weapons, unless it is doing this already now. Is it that most of the salt is ionized to Na+ and Cl- ions so that spontaneous combustion can be avoided? And how does this relate to the sensation of spontaneous burning - a very painful sensation that some part of the body is burning?

4. Is the heating solely due to rotational excitations? It might be that also a "dropping" of ions to larger space-time sheets is induced by the process and liberates zero point kinetic energy. The dropping of a proton from the k=137 (k=139) atomic space-time sheet liberates about .5 eV (0.125 eV). The measured temperature corresponds to the energy .15 eV. This dropping is an essential element of remote metabolism and provides universal metabolic energy quanta. It is also involved with TGD based models of "free energy" phenomena. No perpetuum mobile is predicted since there must be a mechanism driving the dropped ions back to the original space-time sheets.

Recall that one of the empirical motivations for the hierarchy of Planck constants came from the observed quantum-like effects of ELF em fields at EEG frequencies on vertebrate brain, and also from the correlation of EEG with brain function and contents of consciousness, difficult to understand since the energies of EEG photons are ridiculously small and should be masked by thermal noise.

In TGD based model of EEG (actually fractal hierarchy of EEGs) the values hbar/hbar0 =2k11, k=1,2,3,..., of Planck constant are in a preferred role. More generally, powers of two of a given value of Planck constant are preferred, which is also in accordance with p-adic length scale hypothesis.

For details see the chapter Dark Nuclear Physics and Condensed Matter.

### Blackhole production at LHC and replacement of ordinary blackholes with super-canonical blackholes

Tommaso Dorigo has an interesting posting about blackhole production at LHC. I have never taken this idea seriously but in a well-defined sense TGD predicts blackholes associated with super-canonical gravitons with strong gravitational constant defined by the hadronic string tension. The proposal is that super-canonical blackholes have been already seen in Hera, RHIC, and the strange cosmic ray events (see the previous posting). Ordinary blackholes are naturally replaced with super-canonical blackholes in TGD framework, which would mean a profound difference between TGD and string models.

Super-canonical black-holes are dark matter in the sense that they have no electro-weak interactions, and they could have a Planck constant larger than the ordinary one so that the value of αsK=1/4 is reduced. The condition that αK has the same value for the super-canonical phase as it has for ordinary gauge boson space-time sheets gives hbar=26×hbar0. With this assumption the size of the baryonic super-canonical blackholes would be 46 fm, the size of a big nucleus, and would define the fundamental length scale of nuclear physics.

1. RHIC and super-canonical blackholes

In high energy collisions of nuclei at RHIC the formation of super-canonical blackholes via the fusion of nucleonic space-time sheets would give rise to what has been christened a color glass condensate. Baryonic super-canonical blackholes of M107 hadron physics would have mass 934.2 MeV, very near to proton mass. The mass of their M89 counterparts would be 512 times higher, about 478 GeV. The "ionization energy" for Pomeron, the structure formed by valence quarks connected by color bonds separating from the space-time sheet of super-canonical blackhole in the production process, corresponds to the total quark mass and is about 170 MeV for ordinary proton and 87 GeV for M89 proton. This kind of picture about blackhole formation expected to occur in LHC differs from the stringy picture since a fusion of the hadronic mini blackholes to a larger blackhole is in question.

An interesting question is whether the ultrahigh energy cosmic rays having energies larger than the GZK cut-off (see the previous posting) are baryons which have lost their valence quarks in a collision with a hadron and therefore have no interactions with the microwave background, so that they are able to propagate over long distances.

2. Ordinary blackholes as super-canonical blackholes

In neutron stars the hadronic space-time sheets could form a gigantic super-canonical blackhole, and ordinary blackholes would be naturally replaced with super-canonical blackholes in the TGD framework (only a small part of the blackhole interior metric is representable as an induced metric).

1. Hawking-Bekenstein blackhole entropy would be replaced with its p-adic counterpart given by

S_p = (M/m(CP_2))^2 × log(p),

where m(CP_2) is the CP_2 mass, which is roughly 10^{-4} times Planck mass. M corresponds to the contribution of p-adic thermodynamics to the mass. This contribution is extremely small for gauge bosons but for fermions and super-canonical particles it gives the entire mass.

2. If the p-adic length scale hypothesis p ≈ 2^k holds true, one obtains

S_p = k log(2) × (M/m(CP_2))^2,

where m(CP_2) = hbar/R, R the "radius" of CP_2, corresponds to the standard value of hbar_0 for all values of hbar.

3. The Hawking-Bekenstein area law gives in the case of the Schwarzschild blackhole

S = hbar×A/4G = hbar×πGM^2.

For the p-adic variant of the law, Planck mass is replaced with CP_2 mass and k log(2) ≈ log(p) appears as an additional factor. The area law is obtained in the case of elementary particles if k is prime and wormhole throats have an M^4 radius given by the p-adic length scale L_k = k^{1/2} R_{CP_2}, which is exponentially smaller than L_p.

For macroscopic super-canonical black-holes a modified area law results if the radius of the large wormhole throat equals the Schwarzschild radius. The Schwarzschild radius is indeed natural: I have shown that a simple deformation of the Schwarzschild exterior metric to a metric representing a rotating star transforms the Schwarzschild horizon to a light-like 3-surface at which the signature of the induced metric changes from Minkowskian to Euclidean (see this).

4. The formula for the gravitational Planck constant appearing in the Bohr quantization of planetary orbits and characterizing the gravitational field body mediating the gravitational interaction between masses M and m (see this) reads as

hbar_gr/hbar_0 = GMm/v_0,

where v_0 = 2^{-11} is the preferred value. One could argue that the value of the gravitational Planck constant is such that the Compton length hbar_gr/M of the black-hole equals its Schwarzschild radius. This would give

hbar_gr/hbar_0 = GM^2/v_0, v_0 = 1/2.

This is a natural generalization of Nottale's formula to gravitational self-interactions. The requirement that hbar_gr is a ratio of ruler-and-compass integers, expressible as a product of distinct Fermat primes (only five of them are known) and a power of 2, would quantize the mass spectrum of the black hole. Even without this constraint M^2 is integer valued using the p-adic mass squared unit, and if the p-adic length scale hypothesis holds true, this unit is in an excellent approximation a power of two.

5. The gravitational collapse of a star would correspond to a process in which the initial value of v_0, say v_0 = 2^{-11}, increases in a stepwise manner to some value v_0 ≤ 1/2. For a supernova with solar mass and radius of 9 km the final value would be v_0 = 1/6. The star could have an onion-like structure with the largest values of v_0 at the core. Powers of two would be favored values of v_0. If the formula holds true also for the Sun, one obtains 1/v_0 = 3×17×2^{13} with 10 per cent error.

6. Blackhole evaporation could be seen as a means for the super-canonical blackhole to get rid of its electro-weak charges and fermion numbers (except right-handed neutrino number) as the antiparticles of the emitted particles annihilate with the particles inside the super-canonical blackhole. This kind of minimally interacting state is a natural final state of a star. An ideal super-canonical blackhole would have only angular momentum and right-handed neutrino number.

7. In TGD light-like partonic 3-surfaces are the fundamental objects and the space-time interior defines only the classical correlates of quantum physics. The space-time sheet containing the highly entangled cosmic string might be separated from the environment by a wormhole contact with the size of the black-hole horizon. This looks like the most plausible option, but one can of course ask whether the large partonic 3-surface defining the horizon of the black-hole actually contains all super-canonical particles, so that the super-canonical black-hole would be a single gigantic super-canonical parton. The interior of the super-canonical blackhole would be a space-like region of space-time, perhaps resulting as a large deformation of a CP_2 type vacuum extremal. A blackhole-sized wormhole contact would define a gauge boson like variant of the blackhole connecting two space-time sheets and getting its mass through the Higgs mechanism. A good guess is that these states are extremely light.
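The v_0 estimates in points 4 and 5 above can be reproduced with standard constants. This is a sketch under the stated conventions: in units c = 1 the combination GM has the dimension of length, and equating the Compton length hbar_gr/M = GM/v_0 with a stellar radius R gives v_0 = GM/R, which reduces to v_0 = 1/2 when R equals the Schwarzschild radius 2GM.

```python
G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.998e8              # m/s
M_sun = 1.989e30         # kg

GM = G * M_sun / c**2    # "geometric" solar mass, ~1.48 km
# 9 km supernova remnant: v_0 = GM/R
v0_sn = GM / 9e3
print(round(1 / v0_sn))  # ~6, i.e. v_0 ~ 1/6 as quoted in point 5

# Sun: 1/v_0 = R_sun/GM, to be compared with 3*17*2^13
R_sun = 6.96e8           # m
print(round(R_sun / GM)) # ~4.7e5
print(3 * 17 * 2 ** 13)  # 417792, agreeing at the ~10 per cent level
```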

### Pomeron, valence quarks, and super-canonical dark matter

The recent developments in the understanding of the hadron mass spectrum involve the realization that the hadronic k=107 space-time sheet is a carrier of super-canonical bosons (and possibly their super-counterparts with quantum numbers of the right-handed neutrino) (see this). The model leads to amazingly simple and accurate mass formulas for hadrons. Most of the baryonic momentum is carried by super-canonical quanta: in the proton, valence quarks correspond to a relatively small fraction of the total mass, about 170 MeV. The counterparts of string excitations correspond to super-canonical many-particle states, and the additivity of conformal weight, proportional to mass squared, implies the stringy mass formula and a generalization of the Regge trajectory picture. The hadronic string tension is predicted correctly. The model also provides a solution to the proton spin puzzle.

In this framework valence quarks would correspond to a color singlet state formed by space-time sheets connected by color flux tubes, having no Regge trajectories and carrying a relatively small fraction of the baryonic momentum. This kind of structure, known as the Pomeron, was the anomalous part of the hadronic string model. Valence quarks would thus correspond to the Pomeron.

1. Experimental evidence for Pomeron

The Pomeron was originally introduced to describe hadronic diffractive scattering as the exchange of the Pomeron Regge trajectory [1]. No hadrons belonging to the Pomeron trajectory were however found, and with the advent of QCD the Pomeron was almost forgotten. It has recently experienced a reincarnation [2,3,4]. In HERA e-p collisions the proton scatters essentially elastically, whereas jets in the direction of the incoming virtual photon emitted by the electron are observed. These events can be understood by assuming that the proton emits a color singlet particle carrying a small fraction of the proton's momentum. This particle in turn collides with the virtual photon, whereas the proton scatters essentially elastically.

The identification of the color singlet particle as the Pomeron looks natural, since Pomeron emission describes nicely the diffractive scattering of hadrons. Analogous hard diffractive scattering events in pX scattering with X = anti-p [3] or X = p [4] have also been observed. What happens is that the proton scatters essentially elastically and the emitted Pomeron collides with X and suffers a hard scattering, so that large rapidity gap jets in the direction of X are observed. These results suggest that the Pomeron is real and consists of ordinary partons.

2. Pomeron as the color bonded structure formed by valence quarks

In the TGD framework the natural identification of the Pomeron is as the color bonded structure formed by the valence quarks. The lightness and electro-weak neutrality of the Pomeron support the view that the photon strips the valence quarks from the Pomeron, which continues its flight more or less unperturbed. Instead of an actual topological evaporation, the bonds connecting the valence quarks to the hadronic space-time sheet could be stretched during the collision with the photon.

The large value α_K = 1/4 for super-canonical matter suggests that the criterion for a phase transition increasing the value of Planck constant (this) and leading to a phase, where α_K ∝ 1/hbar is reduced, could be satisfied. For α_K to remain invariant, hbar_0 → 26×hbar_0 would be required. In this case the size of the hadronic space-time sheet, the "color field body" of the hadron, would be 26×L(107) = 46 fm, roughly the size of the heaviest nuclei. Note that the sizes of the electromagnetic field bodies of current quarks u and d, with masses of order a few MeV, are not much smaller than the Compton length of the electron. This would mean that super-canonical bosons would represent dark matter in a well-defined sense, and Pomeron exchange would represent a temporary separation of ordinary and dark matter.

Note however that the fact that super-canonical bosons have no electro-weak interactions implies their dark matter character even for the ordinary value of Planck constant: this could be taken as an objection against the dark matter hierarchy. My own interpretation is that super-canonical matter is dark matter in the strongest sense of the word, whereas ordinary matter in the large hbar phase is only apparently dark matter because standard interactions do not reveal themselves in the expected manner.

3. Astrophysical counterpart of Pomeron events

Pomeron events have a direct analogy in astrophysical length scales. I have commented on this already earlier. In the collision of two galaxies, the dark and visible matter parts of the colliding galaxies have been found by the Chandra X-ray Observatory to separate.

Imagine a collision between two galaxies. The ordinary matter in them collides and gets interlocked due to the mutual gravitational attraction. Dark matter, however, just keeps its momentum and keeps going, leaving behind the colliding galaxies. This kind of event has been detected by the Chandra X-ray Observatory using an ingenious method to detect dark matter: collisions of ordinary matter produce a lot of X-rays, and the dark matter outside the galaxies acts as a gravitational lens.

4. Super-canonical bosons and anomalies of hadron physics

Super-canonical bosons suggest a solution to several other anomalies related to hadron physics. Spin puzzle of proton has been already discussed in previous postings.

The events observed a couple of years ago at RHIC (see this) suggest the creation of a black-hole like state in the collision of heavy nuclei and inspire the notion of a color glass condensate of gluons, whose natural identification in the TGD framework would be in terms of a fusion of hadronic space-time sheets containing super-canonical matter, materialized also from the collision energy. The black-hole states would be black-holes of strong gravitation with the gravitational constant determined by the hadronic string tension and gravitons identifiable as J=2 super-canonical bosons. The topological condensation of mesonic and baryonic Pomerons created from the collision energy on the condensate would be analogous to the sucking of ordinary matter by a real black-hole. Note that also real black holes would be dense enough for the formation of a condensate of super-canonical bosons, but probably with a much larger value of Planck constant. Neutron stars could contain a hadronic super-canonical condensate.

In the collision, valence quarks connected together by color bonds to form separate units would evaporate from their hadronic space-time sheets just as in the collisions producing Pomerons. The strange features of the events related to the collisions of high energy cosmic rays with hadrons of the atmosphere (the particles in question are hadron-like, but the penetration length is anomalously long and the rate for the production of hadrons increases as one approaches the surface of Earth) could also be understood in terms of the same general mechanism.

5. Fashions and physics

The story of the Pomeron is a good example of the destructive effect of reductionism, fashions, and careerism in present-day theoretical physics.

More than thirty years ago we had the hadronic string model providing a satisfactory qualitative view of the non-perturbative aspects of hadron physics. The Pomeron was the anomaly. Then came QCD; both the hadronic string model and the Pomeron were forgotten, and low energy hadron physics became the anomaly. No one asked whether valence quarks might relate to the Pomeron and whether stringy aspects could represent something which does not reduce to QCD.

To have some use for strings it was decided that superstring model describes not only gravitation but actually everything, and now we are in a situation in which people are wasting their time with AdS/CFT duality based models in which N=4 super-symmetric theory is assumed to describe hadrons. This theory does not even contain quarks, only superpartners of gluons, and conclusions are based on a study of the limit in which one has an infinite number of quark colors. The science historians of the future will certainly identify the last thirty years as the weirdest period in theoretical physics.

References

[1] N. M. Queen, G. Violini (1974), Dispersion Theory in High Energy Physics, The Macmillan Press Limited.

[2] M. Derrick et al (1993), Phys. Lett. B 315, p. 481.

[3] A. Brandt et al (1992), Phys. Lett. B 297, p. 417.

[4] A. M. Smith et al(1985), Phys. Lett. B 163, p. 267.

### Does the spin of hadron correlate with its super-canonical boson content?

The revision of the hadronic mass calculations is still producing pleasant surprises. The explicit comparison of the super-canonical conformal weights associated with spin 0 and spin 1 states on one hand and spin 1/2 and spin 3/2 states on the other hand (see this) demonstrates that the difference between these states could be understood in terms of the super-canonical particle contents of the states by introducing only a single additional negative conformal weight s_c describing color Coulombic binding. s_c is constant for baryons (s_c = -4) and in the case of mesons non-vanishing only for pions (s_c = -5) and kaons (s_c = -12). This leads to an excellent prediction for the masses also in the meson sector, since pseudoscalar mesons heavier than the kaon are not Goldstone boson like states in this model. Deviations between predicted and actual masses are typically below one per cent, and second order contributions can explain the discrepancy. There is also consistency with the bounds coming from the top quark mass.

The correlation of the spin of the quark system with the particle content of the super-canonical sector increases dramatically the predictive power of the model if the allowed conformal weights of super-canonical bosons are assumed to be identical with those of U type quarks and thus given by (5, 6, 58) for the three generations. One can even consider the possibility that also exotic hadrons with a different super-canonical particle content exist: this means a natural generalization of the notion of Regge trajectories. The next task would be to predict the correlation of hadron spin with the super-canonical particle content in the case of long-lived hadrons.

The progress in the understanding of the Kähler coupling strength led to a considerable increase in the understanding of hadronic masses. I list those points which are of special importance for the revised model.

1. Higgs contribution to fermion masses is negligible

There are good reasons to believe that the Higgs expectation for the fermionic space-time sheets is vanishing although fermions couple to Higgs. Thus p-adic thermodynamics would explain fermion masses completely. This, together with the fact that the prediction of the model for the top quark mass is consistent with the most recent limits on it, fixes the CP_2 mass scale with a high accuracy to the maximal one obtained if the second order contribution to electron's p-adic mass squared vanishes. This is a very strong constraint on the model.

2. The p-adic length scale of quark is dynamical

The assumption about the presence of scaled up variants of light quarks in light hadrons is not new. It leads to a surprisingly successful model for pseudoscalar meson masses using only quark masses and the assumption that mass squared is additive for quarks with the same p-adic length scale, whereas mass is additive for quarks labelled by different primes p. This conforms with the idea that pseudoscalar mesons are Goldstone bosons in the sense that color Coulombic and magnetic contributions to the mass cancel each other. Also the mass differences between hadrons containing different numbers of strange and heavy quarks can be understood if s, b and c quarks appear as several scaled up versions.
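The two additivity rules can be written out explicitly. The sketch below uses placeholder quark masses and p-adic labels purely for illustration, not the fitted TGD values:

```python
import math

def meson_mass(m1, k1, m2, k2):
    """Meson mass from the two additivity rules stated above:
    mass squared is additive for quarks with the same p-adic prime (k1 == k2),
    mass itself is additive for quarks labelled by different primes."""
    if k1 == k2:
        return math.sqrt(m1 ** 2 + m2 ** 2)
    return m1 + m2

# Illustrative placeholder masses in GeV:
print(round(meson_mass(0.1, 107, 0.1, 107), 3))  # same prime: sqrt(2)*0.1 ~ 0.141
print(meson_mass(0.1, 107, 0.4, 111))            # different primes: 0.5
```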

This hypothesis yields a surprisingly good fit for meson masses, but for some mesons the predicted mass is slightly too high. Reducing the CP_2 mass scale to cure the situation is not possible, since the top quark mass would become too low. In the case of diagonal mesons, for which the quarks correspond to the same p-adic prime, the quark contribution to the mass squared can be reduced by ordinary color interactions, and in the case of non-diagonal mesons one can require that the quark contribution is not larger than the meson mass.

3. Super-canonical bosons at hadronic space-time sheet can explain the constant contribution to baryonic masses

Quarks explain only a small fraction of the baryon mass: there is an additional contribution which in a good approximation does not depend on the baryon. This contribution should correspond to the non-perturbative aspects of QCD.

A possible identification of this contribution is in terms of super-canonical gluons predicted by TGD. The baryonic space-time sheet with k=107 would contain a many-particle state of super-canonical gluons with a net conformal weight of 16 units. This leads to a model of baryon masses in which the masses are predicted with an accuracy better than 1 per cent. Super-canonical gluons also provide a possible solution to the spin puzzle of the proton. One also ends up with the prediction α_s = α_K = 1/4 at the hadronic space-time sheet.

Hadronic string model provides a phenomenological description of non-perturbative aspects of QCD, and a connection with the hadronic string model indeed emerges. The hadronic string tension is predicted correctly from the additivity of mass squared for bound states of super-canonical quanta. If the topological mixing for super-canonical bosons is equal to that for U type quarks, then a 3-particle state formed by two super-canonical quanta of the first generation and one of the second generation would define the baryonic ground state with 16 units of conformal weight.

In the case of mesons, the pion could contain a super-canonical boson of the first generation preventing the large negative contribution of the color magnetic spin-spin interaction from making the pion a tachyon. For heavier mesons a super-canonical boson need not be assumed. The preferred role of the pion would relate to the fact that its mass scale is below the QCD Λ.

4. Description of color magnetic spin-spin splitting in terms of conformal weight

What remains to be understood are the contributions of color Coulombic and magnetic interactions to the mass squared. There are contributions coming from both ordinary gluons and super-canonical gluons, and the latter is expected to dominate due to the large value of the color coupling strength.

Conformal weight replaces energy as the basic variable, but the group theoretical structure of the color magnetic contribution to the conformal weight associated with the hadronic space-time sheet (k=107) is the same as in the case of energy. The predictions for the masses of mesons are not as good as for baryons, and one might criticize the application of the formalism of perturbative QCD in an essentially non-perturbative situation.

The comparison of the super-canonical conformal weights associated with spin 0 and spin 1 states and spin 1/2 and spin 3/2 states shows that the different masses of these states could be understood in terms of the super-canonical particle contents of the states correlating with the total quark spin. The resulting model gives excellent predictions also for the meson masses and implies that only pion and kaon can be regarded as Goldstone boson like states. The model based on spin-spin splittings is consistent with this picture.

To sum up, the model provides an excellent understanding of baryon and meson masses. This success is highly non-trivial since the fit involves only the integers characterizing the p-adic length scales of quarks and the integers characterizing color magnetic spin-spin splitting plus p-adic thermodynamics and topological mixing for super-canonical gluons. The next challenge would be to predict the correlation of hadron spin with super-canonical particle content in the case of long-lived hadrons.

### A connection with hadronic string model

In the previous posting I described the realization that the so called super-canonical degrees of freedom (the super Kac-Moody algebra associated with the symplectic (canonical) transformations of M^4_±×CP_2, the light-cone boundary in a loose terminology) are responsible for the non-perturbative aspects of hadron physics. One can say that the notion of the hadronic space-time sheet, characterized by the Mersenne prime M_107 and responsible for the non-perturbative aspects of hadron physics, finds a precise quantitative theoretical articulation in terms of super-canonical symmetry. Note that besides bosonic generators also the super counterparts of the bosonic generators carrying quantum numbers of the right-handed neutrino are present and could give rise to super-counterparts of hadrons. It might not be easy to distinguish them from ordinary hadrons.

1. Quantitative support for the role of super-canonical algebra

Quantitative calculations for hadron masses (still in progress) support this picture, and one can predict correctly the previously unidentified large contribution to the masses of spin 1/2 baryons in terms of a bound state of g=1 (genus) super-canonical gluons with a color binding conformal weight of 2 units, reducing the net conformal weight of the 2-gluon state from 18 to 16. An alternative picture is that super-canonical gluons suffer the same topological mixing as U type quarks, so that the conformal weights are (5, 6, 58). In this case the ground state could contain two super-canonical gluons of the first generation and one of the second generation (5+5+6=16).

I first thought that in the case of mesons this contribution might not be present. There could however be a single super-canonical boson present inside the pion and rho meson with conformal weight 5 (!), and it would prevent the color magnetic binding conformal weight from making the pion a tachyon. The special role of the π-ρ system would be due to the fact that the pion mass is below the QCD Λ. If no mixing occurs, g=0 gluons would define the analog of the gluonic component of the parton sea, bringing in an additional color interaction besides the one mediated by ordinary gluons and having the very strong color coupling strength α_s = α_K = 1/4. This contribution is compensated by the color magnetic spin-spin splitting and color Coulombic energy in the case of pseudoscalars, in accordance with the idea that pseudoscalars are Goldstone bosons apart from the contribution of quarks to the mass of the meson.

Quite generally, one can say that the super-canonical sector adds to the theory the non-perturbative aspects of hadron physics which become important at low energies. This contribution is something which QCD cannot yield under any circumstances, since the color group has a geometric meaning in TGD, being represented as color rotations of CP_2.

2. Hadronic strings and super-canonical algebra

Hadronic string model provides a phenomenological description of the non-perturbative aspects of hadron physics, and TGD was born both as a resolution of the energy problem of general relativity and as a generalization of the hadronic string model. Hence one can ask whether something resembling the hadronic string model might emerge from the super-canonical sector. TGD allows string like objects, but the fundamental string tension is gigantic, roughly a factor 10^{-8} of that defined by the Planck constant. An extremely rich spectrum of vacuum extremals is predicted, and the expectation motivated by the p-adic length scale hypothesis is that vacuum extremals deformed to non-vacuum extremals give rise to a hierarchy of string like objects with string tension T ∝ 1/L_p^2, L_p the p-adic length scale. The p-adic length scale hypothesis states that primes p ≈ 2^k are preferred. Also a hierarchy of QCD like physics is predicted.
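The tension hierarchy implied by T ∝ 1/L_p^2 with p ≈ 2^k can be made concrete with a one-line sketch (the comparison of k = 89 and k = 107 is my illustration, using the two hadron physics discussed elsewhere on this page):

```python
# L_p is proportional to p^(1/2) ~ 2^(k/2), so T ~ 1/L_p^2 ~ 2^(-k):
# the k=89 string tension exceeds the k=107 one by 2^(107-89).
def tension_ratio(k1, k2):
    """Ratio T(k1)/T(k2) of string tensions for two p-adic length scales."""
    return 2 ** (k2 - k1)

print(tension_ratio(89, 107))   # 262144 = 2^18
```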

The challenge has been the identification of quantum counterpart of this picture and p-adic physics leads naturally to it.

1. The fundamental mass formula of the string model relates the mass squared and angular momentum of the stringy state. It has the form

M^2 = M_0^2 J,

M_0^2 ≈ 0.9 GeV^2.

A more general formula is M^2 = kn.

2. This kind of formula results from the additivity of the conformal weight (and thus mass squared) for systems characterized by the same value of the p-adic prime if one constructs a many-particle state from g=1 super-canonical bosons with a thermal mass squared M^2 = M_0^2 n, M_0^2 = n_0 m_107^2. The angular momentum of the building blocks has some spectrum fixed by Virasoro conditions. If the basic building block has angular momentum J_0 and mass squared M_0^2, one obtains M^2 = M_0^2 J, k = M_0^2, J = nJ_0. The values of n are even in the old fashioned string model for a Regge trajectory with a fixed parity. J_0 = 2 implies the same result, so that the basic unit might be called a "strong graviton". Of course, also J=0 states with the same mass are expected to be there and are actually required by the explanation of the spin puzzle of the proton.
3. The g=1 super-canonical gluon has mass squared

M_0^2 = 9 m_107^2.

The bound states of super-canonical bosons with net mass squared M_0^2 = 16 m_107^2 are responsible for the ground state mass of baryons in the model predicting baryon masses with a few per cent accuracy. The value of M_0^2 is 0.88 GeV^2, to be compared with its nominal value 0.9 GeV^2, so that also the hadronic string tension is predicted correctly!
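The string tension value can be recomputed from the p-adic mass unit m(107) ≈ 233.55 MeV quoted elsewhere on this page; a sketch (the exact second decimal depends on the rounding of m(107)):

```python
# Ground-state conformal weight 16 in units of m_107^2:
m107 = 0.23355                  # GeV, p-adic mass scale for k=107
M0_sq = 16 * m107 ** 2
print(round(M0_sq, 2))          # ~0.87 GeV^2, close to the quoted 0.88
                                # and the nominal 0.9 GeV^2
```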

This picture also allows one to consider a possible mechanism explaining the spin puzzle of the proton; I have already earlier considered an explanation in terms of super-canonical spin (see this), assuming that the state is a superposition of the ordinary (J=0, J_q=1/2) state and a (J=2, J_q=3/2) state in which the super-canonical bound state has spin 2.

To sum up, combining these results with earlier ones, one can say that besides elementary particle masses all basic parameters of hadron physics are predicted correctly from the p-adic length scale hypothesis plus simple number theoretical considerations involving only integer arithmetic. This is quite an impressive result. In my humble opinion, it would be high time for the string people and other colleagues to realize that they have already missed the boat badly, and the situation worsens if they refuse to meet the reality described so elegantly by TGD. There is an enormous amount of work to be carried out, and the early bird gets the worm;-).

### Progress in the understanding of baryon masses

In the previous posting I explained the progress made in the understanding of mesonic masses, basically due to the realization of how the Chern-Simons coupling k determines the Kähler coupling strength and p-adic temperature, discussed in a still earlier posting.

Today I took a more precise look at the baryonic masses. In the case of scalar mesons quarks give the dominating contribution to the meson mass. This is not true for spin 1/2 baryons, and the dominating contribution must have some other origin. The identification of this contribution has remained a challenge for years.

A realization of a simple numerical coincidence related to the p-adic mass squared unit led to an identification of this contribution in terms of states created by purely bosonic generators of the super-canonical algebra and having as a space-time correlate CP_2 type vacuum extremals topologically condensed at the k=107 space-time sheet (or having this space-time sheet as their field body). Proton and neutron masses are predicted with 0.5 per cent accuracy and the Δ-N mass splitting with 0.2 per cent accuracy. A further outcome is a possible solution to the spin puzzle of the proton.

1. Does k=107 hadronic space-time sheet give the large contribution to baryon mass?

In the sigma model for baryons the dominating contribution to the mass of baryon results as a vacuum expectation value of scalar field and mesons are analogous to Goldstone bosons whose masses are basically due to the masses of light quarks.

This would suggest that the k=107 gluonic/hadronic space-time sheet gives a large contribution to the mass squared of the baryon. p-Adic thermodynamics allows one to expect that the contribution to the mass squared is in a good approximation of the form

Δm^2 = n m^2(107),

where m^2(107) is the minimum possible p-adic mass squared and n a positive integer. One has m(107) = 2^{10} m(127) = 2^{10} m_e/5^{1/2} = 233.55 MeV for Y_e = 0, favored by the top quark mass.

1. n=11 predicts (m(n), m(p)) = (944.5, 939.3) MeV; the actual masses are (m(n), m(p)) = (939.6, 938.3) MeV. Coulombic repulsion between the u quarks could reduce the p-n difference to a realistic value.

2. The Λ-n mass splitting would be 184.7 MeV for k(s)=111, to be compared with the real difference of 176.0 MeV. Note however that color magnetic spin-spin splitting requires that the ground state mass squared is larger than 11 m_0^2(107).
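The mass unit m(107) quoted above can be reproduced from the electron mass; a sketch, assuming the relation m_e ≈ 5^{1/2} m(127) for Y_e = 0 from the p-adic mass calculations (the small deviation from 233.55 MeV reflects the precise input value of m_e and second order corrections):

```python
import math

m_e = 0.511                      # MeV, electron mass
m127 = m_e / math.sqrt(5)        # electron corresponds to k=127, m_e ~ 5^(1/2) m(127)
m107 = 2 ** 10 * m127            # scaling from k=127 to k=107
print(round(m107, 1))            # ~234.0 MeV, vs the quoted 233.55 MeV
```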

2. What is responsible for the large ground state mass of the baryon?

The observations made above do not leave much room for alternative models. The basic problem is the identification of the large contribution to the mass squared coming from the hadronic space-time sheet with k=107. This contribution could have the energy of color fields as a space-time correlate.

1. The assignment of the energy to the vacuum expectation value of the sigma boson does not look very promising, since the very existence of the sigma boson is questionable and it does not relate naturally to classical color gauge fields. More generally, since no gauge symmetry breaking is involved, the counterpart of the Higgs mechanism as a development of a coherent state of scalar bosons does not look like a plausible idea.

2. One can however consider the possibility of a Bose-Einstein condensate or of a more general many-particle state of massive bosons possibly carrying color quantum numbers. A many-boson state of exotic bosons at the k=107 space-time sheet having net mass squared

m^2 = n m_0^2(107), n = ∑_i n_i,

could explain the baryonic ground state mass. Note that the possible values of n_i are predicted by p-adic thermodynamics with T_p = 1.

3. Glueballs cannot be in question

Glueballs (see this and this) define the first candidate for the exotic boson in question. There are however several objections against this idea.

1. QCD predicts that the lightest glueballs, consisting of two gluons, have J^{PC} = 0^{++} and 2^{++} and mass about 1650 MeV. If one takes QCD seriously, one must exclude this option. One can also argue that light glueballs should have been observed long ago, and wonder why their Bose-Einstein condensate is not associated with mesons.

2. There are also theoretical objections in TGD framework.

• Can one really apply p-adic thermodynamics to the bound states of gluons? Even if this is possible, can one assume the p-adic temperature T_p = 1 for them if it is quite generally T_p = 1/26 for gauge bosons consisting of fermion-antifermion pairs (see this)?

• Baryons are fermions, and one can argue that they must correspond to a single space-time sheet rather than the pair of positive and negative energy space-time sheets required by the glueball Bose-Einstein condensate realized as wormhole contacts connecting these space-time sheets.

4. Do exotic colored bosons give rise to the ground state mass of baryon?

The objections listed above lead to an identification of bosons responsible for the ground state mass, which looks much more promising.

1. TGD predicts exotic bosons, which can be regarded as super-conformal partners of fermions created by the purely bosonic part of super-canonical algebra, whose generators belong to the representations of the color group and 3-D rotation group but have vanishing electro-weak quantum numbers. Their spin is analogous to orbital angular momentum whereas the spin of ordinary gauge bosons reduces to fermionic spin. Thus an additional bonus is a possible solution to the spin puzzle of proton.

2. Exotic bosons are single-sheeted structures meaning that they correspond to a single wormhole throat associated with a CP2 type vacuum extremal and would thus be absent in the meson sector as required. Tp=1 would characterize these bosons by super-conformal symmetry. The only contribution to the mass would come from the genus and g=0 state would be massless so that these bosons cannot condense on the ground state unless they suffer topological mixing with higher genera and become massive in this manner. g=1 glueball would have mass squared 9m02(k) which is smaller than 11m02. For a ground state containing two g=1 exotic bosons, one would have ground state mass squared 18m02 corresponding to (m(n),m(p))=(1160.8,1155.6) MeV. Electromagnetic Coulomb interaction energy can reduce the p-n mass splitting to a realistic value.

3. Color magnetic spin-spin splitting for baryons gives a test for this hypothesis. The splitting of the conformal weight is by group theoretic arguments of the same general form as that of the color magnetic energy and is given by (m2(N),m2(Δ)) = (18m02-X, 18m02+X) in the absence of topological mixing. n=11 for the nucleon mass implies X=7 and m(Δ) = 5m0(107) = 1338 MeV, to be compared with the actual mass m(Δ) = 1232 MeV. The prediction is too large by about 8.6 per cent. If one allows topological mixing one can have m2=8m02 instead of 9m02. This gives m(Δ) = 1240 MeV so that the error is only .6 per cent. The mass of the topologically mixed exotic boson would be 660.6 MeV and equals m02(104). Amusingly, k=104 happens to correspond to the inverse of αK for gauge bosons.

4. In the simplest situation a two-particle state of these exotic bosons could be responsible for the ground state mass of baryon. Also the baryonic spin puzzle caused by the fact that quarks give only a small contribution to the spin of baryons, could find a natural solution since these bosons could give to the spin of baryon an angular momentum like contribution having nothing to do with the angular momentum of quarks.

5. The large value of the Kähler coupling strength αK=1/4 would characterize the hadronic space-time sheet as opposed to αK=1/104 assignable to the gauge boson space-time sheets. This would make the color gauge coupling characterizing their interactions strong. This would be a precise articulation for what the generation of the hadronic space-time sheet in the phase transition to a non-perturbative phase of QCD really means.

6. The identification would also lead to a physical interpretation of super(-conformal) symmetries. It must be emphasized that the super-canonical generators do not create ordinary fermions so that ordinary gauge bosons need not have super-conformal partners. One can of course imagine that also ordinary gauge bosons could have super-partners obtained by assuming that one wormhole throat (or both of them) is purely bosonic. If both wormhole throats are purely bosonic, the Higgs mechanism would leave the state essentially massless unless p-adic thermal stability allows Tp=1. Color confinement could be responsible for the stability. For super-partners carrying fermion number the Higgs mechanism would make the state massive unless the quantum numbers are those of a right-handed neutrino.

7. The importance of the result is that it becomes possible to derive general mass formulas also for the baryons of scaled-up copies of QCD possibly associated with various Mersenne primes and Gaussian Mersennes. In particular, the mass formulas for "electro-baryons" and "muon-baryons" can be deduced (see this).
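The error percentages quoted in point 3 of the list above are easy to verify; the following Python sketch uses only the mass values given in the text.

```python
# Check of the Delta mass predictions quoted above.
# All masses in MeV; the values are those given in the text.
m_delta_exp = 1232.0      # measured Delta(1232) mass

# Without topological mixing: m(Delta) = 5*m0(107) = 1338 MeV.
err_unmixed = (1338.0 - m_delta_exp) / m_delta_exp * 100

# With topological mixing (m^2 = 8*m0^2 instead of 9*m0^2): 1240 MeV.
err_mixed = (1240.0 - m_delta_exp) / m_delta_exp * 100

print(f"unmixed: {err_unmixed:.1f} per cent")   # 8.6 per cent, as stated
print(f"mixed:   {err_mixed:.1f} per cent")     # 0.6 per cent, as stated
```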

For more details about p-adic mass calculations of elementary particle masses see the chapter Massless particles and particle massivation. The chapter p-Adic mass calculations: hadron masses describes the model for hadronic masses. The chapter p-Adic mass calculations: New Physics explains the new view about Kähler coupling strength.

### The model for hadron masses revisited

The blog of Tommaso Dorigo contains two postings which served as a partial stimulus to reconsider the model of hadron masses. The first posting, The top quark mass measured from its production rate, tells about a new high precision determination of the top quark mass reducing its value to the most probable value 169.1 GeV in the allowed interval 164.7-175.5 GeV. The second posting, Rumsfeld hadrons, tells about the "crackpottish" finding that the mass of the Bc meson is in an excellent approximation the average of the masses of the Ψ and Υ mesons. The TGD based model for hadron masses makes it possible to understand this finding.

1. Motivations

There were several motivations for looking again at the p-adic mass calculations for quarks and hadrons.

1. If one takes seriously the prediction that the p-adic temperature is Tp=1 for fermions and Tp=1/26 for gauge bosons, as suggested by the considerations of the blog posting (see also this), and accepts the picture of fermions as topologically condensed CP2 type vacuum extremals with a single light-like wormhole throat, and of gauge bosons and Higgs boson as wormhole contacts with two light-like wormhole throats connecting space-time sheets with opposite time orientation and energy, one is led to the conclusion that although fermions can couple to Higgs, the Higgs vacuum expectation value must vanish for fermions. One must check whether it is indeed possible to understand the fermion masses from p-adic thermodynamics without the Higgs contribution. This turns out to be the case. This also means that the coupling of fermions to Higgs can be arbitrarily small, which could explain why Higgs has not been detected.

2. There have been some problems in understanding the top quark mass in the TGD framework. Depending on the selection of the p-adic prime p≈ 2k characterizing the top quark, the mass is too high or too low by about 15-20 per cent. This problem had a trivial resolution: it was due to a calculational error, namely the inclusion of only the topological contribution depending on the genus of the partonic 2-surface. The positive surprise was that the maximal value for CP2 mass corresponding to the vanishing of the second order correction to the electron mass and the maximal value of the second order contribution to the top mass predicts exactly the recent best value 169.1 GeV of the top mass. This in turn makes it possible to clean up uncertainties in the model of hadron masses.

2. The model for hadron masses

The basic assumptions in the model of hadron masses are the following.

1. Quarks are characterized by two kinds of masses: current quark masses assignable to free quarks and constituent quark masses assignable to bound state quarks (see this). This can be understood if the integer kq characterizing the p-adic length scale of the quark is different for free quarks and bound quarks so that bound state quarks are much heavier than free quarks. A further generalization is that the value of k can depend on the hadron. This leads to an elegant model explaining meson and baryon masses within a few per cent. The model becomes more precise once the CP2 mass scale is fixed from the top mass (note that the top quark is always free since toponium does not exist). This predicts several copies of various quarks, and there is evidence for three copies of top corresponding to the values kt=95, 94, 93. Also the current quarks u and d can correspond to several values of k.

2. The mass formula for the lowest mesons is extremely simple. If the quarks are characterized by the same p-adic prime, their conformal weights and thus mass squared values are additive:

m2B = m2q1+ m2q2.

If the p-adic primes labelling the quarks are different, the masses are additive: mB = mq1 + mq2.

This formula generalizes in an obvious manner to the case of baryons.

Thus apart from effects like color magnetic spin-spin splitting describable p-adically for diagonal mesons and in terms of color magnetic interaction energy in case of nondiagonal mesons, basic effect of binding is modification of the integer k labelling the quark.

3. The formula produces the masses of mesons and also baryons with few per cent accuracy. There are however some exceptions.

1. The mass of the η' meson becomes slightly too large. In the case of η' a negative color binding conformal weight can reduce the mass. Also mixing with a two-gluon gluonium state could save the situation.

2. Some light non-diagonal mesons such as K mesons also have a slightly too large mass. In this case a negative color binding energy can save the situation.
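The combination rule stated in point 2 above (mass squared additive for the same p-adic prime, mass additive for different primes) can be sketched as a small function. The quark masses and k-values below are placeholders for illustration, not fitted values from the model.

```python
import math

def meson_mass(m1, k1, m2, k2):
    """Combine two quark masses according to the p-adic rule above:
    mass squared is additive if the p-adic primes (labelled by k) agree,
    the masses themselves are additive otherwise."""
    if k1 == k2:
        return math.sqrt(m1**2 + m2**2)
    return m1 + m2

# Same k: diagonal meson, masses add in squares.
print(meson_mass(300.0, 107, 400.0, 107))   # 500.0
# Different k: non-diagonal meson, masses add linearly.
print(meson_mass(300.0, 107, 400.0, 113))   # 700.0
```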

3. An example of how the mesonic mass formulas work

The mass formulas make it possible to understand why the "crackpottish" mass formula for Bc holds true.

The mass of the Bc meson (a bound state of a b quark and a c antiquark) has been measured with high precision by CDF (see the blog posting by Tommaso Dorigo) and is found to be M(Bc)=6276.5+/- 4.8 MeV. Dorigo notices that there is a strange "crackpottian" coincidence involved. Take the masses of the fundamental mesons made of c anti-c (Ψ) and b anti-b (Υ), add them, and divide by two. The resulting value, 6278.6 MeV, is less than one part per mille away from the Bc mass!

The general p-adic mass formulas and the dependence of kq on the hadron explain the coincidence. The mass of Bc is given as

m(Bc)= m(c,kc(Bc))+ m(b,kb(Bc)),

whereas the masses of Ψ and Υ are given by

m(Ψ) = √2 m(c,kΨ)

and

m(Υ) = √2 m(b,kΥ)

Assuming kc(Bc) = kc(Ψ) and kb(Bc) = kb(Υ) would give m(Bc) = [m(Ψ)+m(Υ)]/√2, which is a factor √2 higher than the prediction of the "crackpot" formula. kc(Bc) = kc(Ψ)+1 and kb(Bc) = kb(Υ)+1 however gives the correct result.

As such the formula makes sense but the one part per mille accuracy must be an accident in TGD framework.

1. The predictions for Ψ and Υ masses are too small by 2 resp. 5 per cent in the model assuming no effective scaling down of CP2 mass.

2. The formula makes sense if the quarks are effectively free inside hadrons and the only effect of the binding is the change of the mass scale of the quark. This makes sense if the contribution of the color interactions, in particular color magnetic spin-spin splitting, to the heavy meson masses is small enough. Ψ and ηc have spins 1 and 0 and their masses differ by 3.7 per cent (m(ηc)=2980 MeV and m(Ψ)=3096.9 MeV) so that color magnetic spin-spin splitting is measured using per cent as the natural unit.
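The arithmetic behind the "crackpot" formula is easy to check numerically. The sketch below uses the Ψ mass quoted above together with the standard Υ(1S) mass of 9460.3 MeV, which is an input not given in the text.

```python
# m ~ 2^(-k/2), so shifting both k-values by one unit scales each quark
# mass by 1/sqrt(2): m(Bc) = [m(Psi) + m(Upsilon)]/2.
m_psi = 3096.9       # J/Psi mass, MeV (quoted above)
m_upsilon = 9460.3   # Upsilon(1S) mass, MeV (standard value, an input here)
m_bc_exp = 6276.5    # CDF measurement quoted above

m_bc_pred = (m_psi + m_upsilon) / 2
print(f"{m_bc_pred:.1f} MeV")            # 6278.6 MeV, as in the posting
rel_err = abs(m_bc_pred - m_bc_exp) / m_bc_exp
print(f"relative error {rel_err:.1e}")   # within one part per mille
```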

For more details about p-adic mass calculations of elementary particle masses see the chapter Massless particles and particle massivation. The chapter p-Adic mass calculations: hadron masses describes the model for hadronic masses.

### Does the quantization of Kähler coupling strength reduce to the quantization of Chern-Simons coupling at partonic level?

Kähler coupling strength associated with Kähler action (Maxwell action for the induced Kähler form) is the only coupling constant parameter in quantum TGD, and its value (or values) is in principle fixed by the condition of quantum criticality since Kähler coupling strength is completely analogous to critical temperature. The quantum TGD at parton level reduces to almost topological QFT for light-like 3-surfaces. This almost TQFT involves Abelian Chern-Simons action for the induced Kähler form.

This raises the question whether the integer valued quantization of the Chern-Simons coupling k could predict the values of the Kähler coupling strength. I considered this kind of possibility already more than 15 years ago, but only the reading of the introduction of the recent paper by Witten about his new approach to 3-D quantum gravity led to the discovery of a childishly simple argument that the inverse of the Kähler coupling strength could indeed be proportional to the integer valued Chern-Simons coupling k: 1/αK=4k if all factors are correct. k=26 is forced by comparison with some physical input. Also the p-adic temperature could be identified as Tp=1/k.

1. Quantization of Chern-Simons coupling strength

For Chern-Simons action the quantization of the coupling constant guaranteeing so-called holomorphic factorization is implied by the integer valuedness of the Chern-Simons coupling strength k. As Witten explains, this follows from the quantization of the first Chern-Simons class for closed 4-manifolds plus the requirement that the phase defined by the Chern-Simons action equals 1 for a boundaryless 4-manifold obtained by gluing together two 4-manifolds along their boundaries. As explained by Witten in his paper, one can consider also an "anyonic" situation in which k has spectrum Z/n2 for an n-fold covering of the gauge group, and in the dark matter sector one can consider this kind of quantization.

2. Formula for Kähler coupling strength

The quantization argument for k seems to generalize to the case of TGD. What is clear is that this quantization should closely relate to the quantization of the Kähler coupling strength appearing in the 4-D Kähler action defining the Kähler function for the world of classical worlds and conjectured to result as a Dirac determinant. The conjecture has been that gK2 has only a single value. With some physical input one can make educated guesses about this value. The connection with the quantization of the Chern-Simons coupling would however suggest a spectrum of values. This spectrum is easy to guess.

1. The U(1) counterpart of the Chern-Simons action is obtained as the analog of the "instanton" density obtained from the Maxwell action by replacing J∧*J with J∧J. This looks natural since for the self-dual J associated with CP2 extremals the Maxwell action reduces to the instanton density and therefore to a Chern-Simons term. Also the interpretation as the Chern-Simons action associated with the classical SU(3) color gauge field defined by the Killing vector fields of CP2 and having Abelian holonomy is possible. Note however that the instanton density is multiplied by the imaginary unit in the action exponential of the path integral. One should find a justification for this "Wick rotation" not changing the value of the coupling strength, and later this kind of justification will be proposed.

2. The Wick rotation argument suggests the correspondence k/4π = 1/(4gK2) between the Chern-Simons coupling strength and the Kähler coupling strength gK appearing in the 4-D Kähler action. This would give

gK2=π/k .

The spectrum of 1/αK would be integer valued

1/αK=4k.

The result is very nice from the point of view of the number theoretic vision since the powers of αK appearing in perturbative expansions would be rational numbers (ironically, radiative corrections might vanish, but this might happen only for these rational values of αK!).

3. It is interesting to compare the prediction with the experimental constraints on the value of αK. The basic empirical input is that the electroweak U(1) coupling strength reduces to the Kähler coupling at the electron length scale (see this). This gives 1/αK = 1/αU(1)(M127) ≈ 104.1867, which corresponds to k=26.0467. k=26 would give 1/αK = 104: the deviation would be only .2 per cent and one would obtain an exact prediction for αU(1)(M127)! This would explain why the inverse of the fine structure constant is so near to 137 but not quite. Amusingly, k=26 is the critical space-time dimension of the bosonic string model. Also the conjectured formula for the gravitational constant in terms of αK and the p-adic prime p involves all primes smaller than 26 (see this).

4. Note however that if k is allowed to have values in Z/n2, the strongest possible coupling strength is scaled to αK=n2/4 if hbar is not scaled: already for n=2 the resulting perturbative expansion might fail to converge. In the scalings of hbar associated with M4 degrees of freedom hbar however scales as 1/n2 so that the spectrum of αK would remain invariant.

3. Justification for Wick rotation

It is not too difficult to believe in the formula 1/αK = qk, with q some rational number. q=4 however requires a justification for the Wick rotation bringing the imaginary unit to the Chern-Simons action exponential, which is lacking from the Kähler function exponential.

In this kind of situation one might hope that an additional symmetry might come to the rescue. The guess is that the number theoretic vision could justify this symmetry.

1. To see what this symmetry might be, consider the generalization of the Montonen-Olive duality obtained by combining the theta angle and gauge coupling into a single complex number via the formula

τ= θ/2π+i4π/g2.

What this means in the present case is that for CP2 type vacuum extremals (see this) Kähler action and instanton term reduce by self duality to the Kähler action obtained by replacing 1/g2 with -iτ/4π. The first duality τ→τ+1 corresponds to the periodicity of the theta angle. The second duality τ→-1/τ corresponds to the generalization of the Montonen-Olive duality α→ 1/α. These dualities are definitely not symmetries of the theory in the present case.

2. Despite the failure of dualities, it is interesting to write the formula for τ in the case of Chern-Simons theory assuming gK2=π/k with k>0 holding true for Kac-Moody representations. What one obtains is

τ= 4k(1-i).

The allowed values of τ are integer spaced along a line whose direction angle corresponds to the phase exp(i2π/n), n=4. The transformations τ→ τ+ 4(1-i) generate a dynamical symmetry and as Lorentz transformations define a subgroup of the group E2 leaving invariant a light-like momentum (this brings in mind quantum criticality!). One should understand why this line is so special.

3. This formula conforms with the number theoretic vision suggesting that the allowed values of τ belong to an integer spaced lattice. Indeed, if one requires that the phase angles are proportional to vectors with rational components, then only phase angles associated with orthogonal triangles with short sides having integer valued lengths m and n are possible. The additional condition is that the phase angles correspond to roots of unity. This leaves only m=n and m=-n>0 for consideration so that one would have τ = n(1-i) from k>0.

4. Notice that the theta angle is a multiple of 8kπ so that a trivial strong CP breaking results and no QCD axion is needed (this if one takes seriously the equivalence of the Kähler action to the classical color YM action).

4. Is p-adicization needed and possible only in 3-D sense?

The action of a CP2 type extremal is given as S = π/8αK = kπ/2. Therefore the exponent of Kähler action appearing in the vacuum functional would be exp(kπ), a power of Gelfond's constant exp(π) and hence transcendental, as are its powers. If one wants to p-adicize also in the 4-D sense, this raises a problem.

Before considering this problem, consider first the 4-D p-adicization more generally.

1. The definition of the Kähler action and Kähler function in the p-adic case can be obtained only by algebraic continuation from the real case since no satisfactory definition of the p-adic definite integral exists. These difficulties are even more serious at the level of the configuration space unless algebraic continuation allows everything to be reduced to the real context. If TGD is an integrable theory in the sense that the functional integral over 3-surfaces reduces to calculable functional integrals around the maxima of the Kähler function, one might dream of achieving the algebraic continuation of the real formulas. Note however that for lightlike 3-surfaces the restriction to a category of algebraic surfaces is essential for the re-interpretation of the real equations of the 3-surface as p-adic equations. It is far from clear whether also the preferred extremals of Kähler action have this property.

2. Is 4-D p-adicization really needed? The extension of light-like partonic 3-surfaces to 4-D space-time surfaces brings in the classical dynamical variables necessary for quantum measurement theory. p-Adic physics defines correlates for cognition and intentionality. One can argue that these are not quantum measured in the conventional sense so that 4-D p-adic space-time sheets would not be needed at all. The p-adic variant of the exponent of Chern-Simons action can make sense using a finite-dimensional algebraic extension defined by q=exp(i2π/n) and restricting the allowed lightlike partonic 3-surfaces so that the exponent of the Chern-Simons form belongs to this extension of p-adic numbers. This restriction is very natural from the point of view of the dark matter hierarchy involving extensions of p-adics by the quantum phase q.

If one remains optimistic and wants to p-adicize also in 4-D sense, the transcendental value of the vacuum functional for CP2 type vacuum extremals poses a problem (not the only one since the p-adic norm of the exponent of Kähler action can become completely unpredictable).

1. One can also consider extending p-adic numbers by introducing exp(π) and its powers and possibly also π. This would make the extension of p-adics infinite-dimensional, which does not conform with the basic ideas about cognition. Note that e^p is not a p-adic transcendental so that the extension of p-adics by powers of e is finite-dimensional, and if p-adics are first extended by powers of π then a further extension by exp(π) is p-dimensional.

2. A more tricky manner to overcome the problem posed by the CP2 extremals is to notice that CP2 type extremals are necessarily deformed and contain a hole corresponding to the lightlike 3-surface, or several of them. This would reduce the value of the Kähler action, and one could argue that the allowed p-adic deformations are such that the exponent of Kähler action is a p-adic number in a finite extension of p-adics. This option does not look promising.

5. Is the p-adic temperature proportional to the Kähler coupling strength?

Kähler coupling strength would have the same spectrum as p-adic temperature Tp apart from a multiplicative factor. The identification Tp=1/k is indeed very natural since also gK2 is a temperature like parameter. The simplest guess is

Tp= 1/k.

Also gauge coupling strengths are expected to be proportional to gK2 and thus to 1/k apart from a factor characterizing the p-adic coupling constant evolution. That all basic parameters of the theory would have simple expressions in terms of k would be very nice from the point of view of quantum classical correspondence.

If the U(1) coupling constant strength at the electron length scale equals αK=1/104, this would give Tp = 1/26. This means that photon, graviton, and gluons would be massless in an excellent approximation for, say, p=M89, which characterizes the electroweak gauge bosons receiving their masses from their coupling to the Higgs boson. For fermions one has Tp=1 so that fermionic lightlike wormhole throats would correspond to the strongest possible coupling strength αK=1/4, whereas gauge bosons identified as pairs of light-like wormhole throats associated with wormhole contacts would correspond to αK=1/104. Perhaps Tp=1/26 is the highest p-adic temperature at which gauge boson wormhole contacts are stable against splitting to a fermion-antifermion pair. Fermions and possible exotic bosons created by the bosonic generators of the super-canonical algebra would correspond to a single wormhole throat and could also naturally correspond to the maximal value of the p-adic temperature since there is nothing to which they can decay.

A fascinating problem is whether k=26 defines an internally consistent conformal field theory and whether there is something very special in it. Also the thermal stability argument for gauge bosons should be checked.

What could go wrong with this picture? The different value for the fermionic and bosonic αK makes sense only if the 4-D space-time sheets associated with fermions and bosons can be regarded as disjoint space-time regions. Gauge bosons correspond to wormhole contacts connecting (deformed pieces of CP2 type extremal) positive and negative energy space-time sheets whereas fermions would correspond to deformed CP2 type extremal glued to single space-time sheet having either positive or negative energy. These space-time sheets should make contact only in interaction vertices of the generalized Feynman diagrams, where partonic 3-surfaces are glued together along their ends. If this gluing together occurs only in these vertices, fermionic and bosonic space-time sheets are disjoint. For stringy diagrams this picture would fail.

To sum up, the resulting overall vision seems to be internally consistent, is consistent with generalized Feynman graphics, predicts exactly the spectrum of αK, allows the identification of the inverse of the p-adic temperature with k, makes it possible to understand the differences between fermionic and bosonic massivation, and reduces the Wick rotation to a number theoretic symmetry. One might hope that the additional objections (to be found sooner or later!) could make it possible to develop a more detailed picture.

For more details see the chapter p-Adic mass calculations: New Physics.

### Dark matter hierarchy corresponds to a hierarchy of quantum critical systems in modular degrees of freedom

Dark matter hierarchy corresponds to a hierarchy of conformal symmetries Zn of partonic 2-surfaces with genus g≥ 1 such that the factors of n define subgroups of the conformal symmetries Zn. By the decomposition Zn=∏p|n Zp, where p|n tells that p divides n, this hierarchy corresponds to a hierarchy of increasingly quantum critical systems in modular degrees of freedom. For a given prime p one has a sub-hierarchy Zp, Zp2=Zp× Zp, etc. such that the moduli at the (n+1)th level are contained by those at the nth level. In a similar manner the moduli of Zn are sub-moduli for each prime factor of n. This mapping of integers to quantum critical systems conforms nicely with the general vision that biological evolution corresponds to the increase of quantum criticality as Planck constant increases.
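The bookkeeping of the decomposition Zn = ∏p|n Zp can be illustrated with a short factorization routine; this is purely an illustration of the hierarchy of levels associated with the prime factors of n, with no physics input.

```python
def prime_power_levels(n):
    """Return {p: [p, p^2, ..., p^a]} for n = prod p^a: the sub-hierarchy
    of levels associated with each prime factor of n."""
    levels, p = {}, 2
    while p * p <= n:
        if n % p == 0:
            tower, q = [], 1
            while n % p == 0:
                n //= p
                q *= p
                tower.append(q)
            levels[p] = tower
        p += 1
    if n > 1:
        levels[n] = [n]
    return levels

# n = 12 = 2^2 * 3: a two-level sub-hierarchy for p = 2, one level for p = 3.
print(prime_power_levels(12))   # {2: [2, 4], 3: [3]}
```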

The group of conformal symmetries could also be a non-commutative discrete group having Zn as a subgroup. This inspired a very short-lived conjecture that only the discrete subgroups of SU(2) allowed by Jones inclusions are possible as conformal symmetries of Riemann surfaces having g≥ 1. Besides Zn one could have the tetrahedral and icosahedral groups plus the cyclic groups Z2n with reflection added, but not Z2n+1 nor the symmetry group of the cube. The conjecture is wrong. Consider the orbit of a point of the standard sphere of E3 under a discrete subgroup of the rotation group, put a handle at one point of the orbit such that it is invariant under rotations around the axis going through the point, and apply the elements of the subgroup. One obtains a Riemann surface having the subgroup as its isometries. Hence all discrete subgroups of SU(2) can act as conformal symmetries.

The number theoretically simple ruler-and-compass integers, having as factors only first powers of Fermat primes and a power of 2, would define a physically preferred sub-hierarchy of quantum criticality for which the subsequent levels would correspond to powers of 2: a connection with the p-adic length scale hypothesis suggests itself.
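The ruler-and-compass integers mentioned above coincide with the classical condition for constructible polygons: a power of 2 times a product of distinct Fermat primes. A small membership test as an illustration:

```python
FERMAT_PRIMES = [3, 5, 17, 257, 65537]   # all known Fermat primes

def is_ruler_and_compass(n):
    """True if n = 2^m * (product of distinct Fermat primes)."""
    while n % 2 == 0:            # strip the power of 2
        n //= 2
    for p in FERMAT_PRIMES:      # each Fermat prime allowed at most once
        if n % p == 0:
            n //= p
    return n == 1

print([n for n in range(3, 41) if is_ruler_and_compass(n)])
# [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40]
```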

Spherical topology is exceptional since in this case the space of conformal moduli is trivial and conformal symmetries correspond to the entire SL(2,C). This would suggest that only the fermions of lowest generation corresponding to the spherical topology are maximally quantum critical. This brings in mind Jones inclusions for which the defining subgroup equals to SU(2) and Jones index equals to M/N =4. In this case all discrete subgroups of SU(2) label the inclusions. These inclusions would correspond to fiber space CP2→ CP2/U(2) consisting of geodesic spheres of CP2. In this case the discrete subgroup might correspond to a selection of a subgroup of SU(2)subset SU(3) acting non-trivially on the geodesic sphere. Cosmic strings X2× Y2 subset M4×CP2 having geodesic spheres of CP2 as their ends could correspond to this phase dominating the very early cosmology.

For more details see the chapter Construction of Elementary Particle Vacuum Functionals.

### Elementary particle vacuum functionals for dark matter and why fermions can have only three families

One of the open questions is how the dark matter hierarchy reflects itself in the properties of elementary particles. The basic questions are how the quantum phase q=exp(i2π/n) makes itself visible in the solution spectrum of the modified Dirac operator D and how elementary particle vacuum functionals depend on q. Considerable understanding of these questions emerged recently. One can generalize modular invariance to fractional modular invariance for Riemann surfaces possessing Zn symmetry and perform a similar generalization for theta functions and elementary particle vacuum functionals.

In particular, without any further assumptions n=2 dark fermions have only three families. The existence of a space-time correlate for fermionic 2-valuedness suggests that fermions quite generally correspond to even values of n, so that this result would hold quite generally. Elementary bosons (actually exotic particles) would correspond to n=1, and more generally odd values of n, and could also have higher families.

For more details see the chapter Construction of Elementary Particle Vacuum Functionals.

### Cold fusion - in news again

Cold fusion, whose history begins from the announcement of Fleischmann and Pons in 1989, is gradually making its way through the thick walls of arrogant dogmatism and prejudices and - expressing it less diplomatically - of collective academic stupidity. The name of Frank Gordon is associated with the breakthrough experiment. Congratulations to the pioneers.

There are popular articles in Nature and New Scientist. Unfortunately these articles are not accessible to everyone, including me. The article Cold Fusion - Extraordinary Evidence, Cold fusion is real should however be available to anyone.

A few weeks ago I revised the earlier model of cold fusion. The model explains nicely the selection rules of cold fusion and also the observed transmutations in terms of exotic states of nuclei for which the color bonds connecting A≤4 nuclei to the nuclear string can also be charged. This makes possible a neutral variant of the deuteron nucleus, which can overcome the Coulomb wall.

It seems that the emission of highly energetic charged particles, which cannot be due to chemical reactions and could emerge from cold fusion, has been demonstrated beyond doubt by Frank Gordon's team using coin-sized detectors known as CR-39 plastics, used earlier in hot fusion research. The method is both cheap and simple. The idea is that travelling charged particles shatter the bonds of the plastic's polymers, leaving pits or tracks in the plastic. Under the conditions claimed to make cold fusion possible (1 deuterium per 1 Pd nucleus, making possible in the TGD based model the phase transition of D to its neutral variant by the emission of an exotic dark W boson with interaction range of the order of atomic radius) tracks and pits appear in the detector during a short period of time.

For details see the new chapter Nuclear String Hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy". The older model is discussed in the chapter TGD and Nuclear Physics.

### De-coherence and the differential topology of nuclear reactions

I have already described the basic ideas of the nuclear string model in the previous summaries. The nuclear string model allows a topological description of nuclear decays in terms of closed string diagrams, and it is interesting to look at what characteristic predictions follow without going into detailed quantitative modelling of stringy collisions, possibly using some variant of string models.

In the de-coherence process explaining giant resonances eye-glass type singularities of the closed nuclear string appear and make possible nuclear decays as decays of closed string to closed strings.

1. At the level of 4He sub-strings the simplest singularities correspond to 4→ 3+1 and 4→ 2+2 eye-glass singularities. The first one corresponds to the low energy GR and the second to one of the higher energy GRs. They can naturally lead to decays in which a nucleon or deuteron is emitted in the decay process. The singularities 4→ 2+1+1 resp. 4→ 1+1+1+1 correspond to eye-glasses with 3 resp. 4 lenses and mean the decay of 4He to a deuteron and two nucleons resp. 4 nucleons. The prediction is that the emission of a deuteron requires a considerably larger excitation energy than the emission of a single nucleon. For GR at the level of A=3 nuclei analogous considerations apply. Taking into account the possible tunnelling of the nuclear strings from the nuclear space-time sheet modifies this simple picture.

2. For GR in the scale of entire nuclei the corresponding singular configurations typically make possible the emission of an alpha particle. Considerably smaller collision energies should be able to induce the emission of alpha particles than the emission of nucleons if only stringy excitations matter. The excitation energy needed for the emission of an alpha particle is predicted to increase with A since the number n of 4He nuclei increases with A. For instance, for Z=N=2n nuclei n→ n-1 +1 would require the excitation energy (2n-1)Ec=(A/2-1)Ec, Ec≈ .2 MeV. The tunnelling of the alpha particle from the nuclear space-time sheet can modify the situation.

The decay process allows a differential topological description. Quite generally, in the de-coherence process n→ (n-k) +k the color magnetic flux through the closed string must be reduced to n-k units through the first closed string and to k units through the second one. The reduction of the color magnetic fluxes means the reduction of the total color binding energy from n2Ec to ((n-k)2+k2)Ec and the kinetic energy of the colliding nucleons should provide this energy.
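The n2 dependence of the total color binding energy fixes the energetics of the de-coherence n → (n-k)+k completely. A minimal numerical sketch (the helper name decoherence_cost is mine; Ec ≈ .2 MeV is the value quoted above):

```python
E_C = 0.2  # MeV; color binding energy unit quoted in the text

def decoherence_cost(n, k, e_c=E_C):
    """Binding energy that collision kinetic energy must supply in the
    de-coherence n -> (n-k) + k, using the n^2 rule of the text."""
    return (n**2 - ((n - k)**2 + k**2)) * e_c  # simplifies to 2*k*(n-k)*e_c

print(round(decoherence_cost(4, 1), 3))  # 1.2 MeV for 4 -> 3+1
print(round(decoherence_cost(4, 2), 3))  # 1.6 MeV for 4 -> 2+2
```

Since the cost equals 2k(n-k)Ec, the symmetric split k=n/2 is always the most expensive one, in accordance with the ordering of the giant resonance energies discussed above.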

Faraday's law, which is essentially a differential topological statement, requires the presence of a time dependent color electric field making possible the reduction of the color magnetic fluxes. The holonomy group of the classical color gauge field GAαβ is always Abelian in the TGD framework, being proportional to HAJαβ, where HA are color Hamiltonians and Jαβ is the induced Kähler form. Hence it should be possible to treat the situation in terms of the induced Kähler field alone. Obviously, the change of the Kähler (color) electric flux in the reaction corresponds to the change of the Kähler (color) magnetic flux. The change of color electric flux occurs naturally in a collision situation involving changing induced gauge fields.

For more details see the chapter Nuclear String Hypothesis.

### Strong force as scaled and dark electro-weak force?

The fiddling with the nuclear string model has led to the following conclusions.

1. Strong isospin dependent nuclear force, which does not reduce to color force, is necessary in order to eliminate polyneutron and polyproton states (see this). This force contributes practically nothing to the energies of bound states. This can be understood as being due to the cancellation of isospin scalar and vector parts of this force for them. Only strong isospin singlets and their composites with isospin doublet (n,p) are allowed for A≤4 nuclei serving as building bricks of the nuclear strings. Only effective polyneutron states are allowed and they are strong isospin singlets or doublets containing charged color bonds.

2. The force could act in the length scale of nuclear space-time sheets: the k=113 nuclear p-adic length scale is a good candidate for this length scale. One must however be cautious: the contribution to the energy of nuclei is so small that the length scale could be much longer and perhaps the same as in the case of exotic color bonds. Color bonds connecting nuclei correspond to a much longer p-adic length scale and appear in three p-adically scaled up variants corresponding to A<4 nuclei, A=4 nuclei and A>4 nuclei.

3. The prediction of exotic deuterons with vanishing nuclear em charge leads to a simplification of the earlier model of cold fusion explaining its basic selection rules elegantly but requires a scaled variant of electro-weak force in the length scale of atom (see this and this).

What then is this mysterious strong force? And how abundant are these copies of the color and electro-weak forces actually? Is there some unifying principle telling which of them are realized?

From the foregoing, plus the TGD inspired model of quantum biology involving also dark and scaled variants of the electro-weak and color forces, it is becoming more and more obvious that scaled up variants of both QCD and electro-weak physics appear at various space-time sheets of the TGD Universe. This raises the following questions.

1. Could the isospin dependent strong force between nucleons be nothing but a p-adically scaled up (with respect to length scale) version of the electro-weak interactions in the p-adic length scale defined by Mersenne prime M89, with the new length scale assigned with gluons and characterized by Mersenne prime M107?! Strong force would be electro-weak force but in the length scale of the hadron! Or possibly in the length scale of the nucleus (keff=107+6=113) if a dark variant of the strong force with h= nh0=2^3h0 is in question!

2. Why shouldn't there be a scaled up variant of electro-weak force also in the p-adic length scale of the nuclear color flux tubes?

3. Could it be that all Mersenne primes and also other preferred p-adic primes correspond to entire standard model physics including also gravitation? Could there be a kind of natural selection which selects the p-adic survivors, as proposed a long time ago?

Positive answers to the last questions would clean the air and have quite a strong unifying power in the rather speculative and very-many-sheeted TGD Universe.
1. The prediction for new QCD type physics at M89 would get additional support. Perhaps also LHC provides it within the next half decade.

2. Electro-weak physics for Mersenne prime M127, assigned to electron and exotic quarks and color excited leptons, would be predicted. This would predict the exotic quarks appearing in the nuclear string model and conform with the 15 year old leptohadron hypothesis (leptohadrons result as bound states of colored excitations of leptons: see this and this). M127 dark weak physics would also make possible the phase transition transforming ordinary deuterium in the Pd target to exotic deuterium with vanishing nuclear charge.

The most obvious objection against this unifying vision is that hadrons decay only according to the electro-weak physics corresponding to M89. If they decayed according to M107 weak physics, the decay rates would be much faster since the mass scale of the electro-weak bosons would be reduced by a factor 2^-9 (this would give an increase of decay rates by a factor 2^36 from the propagator of the weak boson). This is however not a problem if the strong force is a dark variant with, say, n=2^3 giving a scale corresponding to the nuclear length scale. This crazy conjecture might work if one accepts the dark Bohr rules!
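The factor in the parenthesis follows from the 1/mW4 behaviour of low-energy weak decay rates (two powers of the W propagator 1/mW2 in the squared amplitude); a one-line sanity check:

```python
# At energies far below the W mass the decay rate scales as 1/m_W^4.
# Lowering the W mass scale by a factor 2^-9 thus raises rates by (2^9)^4.
mass_scale_factor = 2**9
rate_enhancement = mass_scale_factor**4
print(rate_enhancement)  # 68719476736, i.e. 2^36
```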

For more details see the chapter TGD and Nuclear Physics and the new chapter Nuclear String Hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

### MiniBooNE and LSND are consistent with each other in TGD Universe

The MiniBooNE group has published its first findings concerning neutrino oscillations in the mass range studied in the LSND experiments. For the results see the press release, the guest posting of Dr. Heather Ray in Cosmic Variance, and the more technical article A Search for Electron Neutrino Appearance at the Δm2=1 eV2 scale by the MiniBooNE group.

1. The motivation for MiniBooNE

Neutrino oscillations are not well-understood. Three experiments LSND, atmospheric neutrinos, and solar neutrinos show oscillations but in widely different mass regions (1 eV2 , 3× 10-3 eV2, and 8× 10-5 eV2). This is the problem.

In the TGD framework the explanation would be that neutrinos can appear in several p-adically scaled up variants with different mass scales and therefore different scales for the mass squared differences Δm2, so that one should not try to explain the results of these experiments using a single neutrino mass scale. TGD is however not mainstream physics so that colleagues stubbornly try to put all feet in the same shoe (Dear feet, I am sorry for this: I can assure that I have done my best to tell the colleagues but they do not want to listen;-)).

One can of course understand the stubbornness of colleagues. In the single-sheeted space-time where colleagues still prefer to live it is very difficult to imagine that the neutrino mass scale would depend on neutrino energy (on the space-time sheet at which topological condensation occurs, using TGD language) since neutrinos interact so extremely weakly with matter. The best known attempt to assign a single mass scale to all neutrinos has been based on the use of so called sterile neutrinos which do not have electro-weak couplings. This approach is an ad hoc trick and rather ugly mathematically.

2. The result of MiniBooNE experiment

The purpose of the MiniBooNE experiment was to check whether the LSND result Δm2=1 eV2 is genuine. The group used a muon neutrino beam and looked at whether transformations of muon neutrinos to electron neutrinos occur in the mass squared region considered. No such transitions were found but there was evidence for transformations at low neutrino energies.

What at first looks like an over-diplomatic formulation of the result was

MiniBooNE researchers showed conclusively that the LSND results could not be due to simple neutrino oscillation, a phenomenon in which one type of neutrino transforms into another type and back again.

rather than a direct refutation of the LSND results.

3. LSND and MiniBooNE are consistent in TGD Universe

The inhabitant of the many-sheeted space-time would not regard the previous statement as mere diplomatic use of language. It is quite possible that the neutrinos studied in MiniBooNE have suffered topological condensation at a different space-time sheet than those in LSND if they are in a different energy range. To see whether this is the case let us look more carefully at the experimental arrangements.

1. In the LSND experiment an 800 MeV proton beam entered a water target and the muon neutrinos resulted from the decay of the produced pions. The muon neutrinos had energies in the 60-200 MeV range. This one can learn from the article Evidence for νμ → νe oscillations from LSND.

2. In the MiniBooNE experiment an 8 GeV proton beam entered a Beryllium target and muon neutrinos resulted from the decay of the resulting pions and kaons. The resulting muon neutrinos had energies in the range 300-1500 MeV, to be compared with 60-200 MeV! This is it! This one can learn from the article A Search for Electron Neutrino Appearance at the Δm2=1 eV2 scale by the MiniBooNE group.

Let us try to make this more explicit.
1. Neutrino energy ranges are quite different so that the experiments need not be directly comparable. The mixing obeys the analog of the Schrödinger equation for a free particle with energy replaced with Δm2/E, where E is the neutrino energy. The mixing probability as a function of distance L from the source of muon neutrinos is in the 2-component model given by

P= sin2(2θ)sin2(1.27Δm2L/E).

The characteristic length scale for mixing is L= E/Δm2. If L is sufficiently small, the mixing is fifty-fifty already before the muon neutrinos enter the system, where the measurement is carried out and no energy dependent mixing is detected in the length scale resolution used. If L is considerably longer than the size of the measuring system, no mixing is observed either. Therefore the result can be understood if Δm2 is much larger or much smaller than E/L, where L is the size of the measuring system and E is the typical neutrino energy.

2. The MiniBooNE experiment found evidence for the appearance of electron neutrinos at low neutrino energies (below 500 MeV), which means direct support for the LSND findings and for the dependence of the neutrino mass scale on its energy relative to the rest system defined by the space-time sheet of the laboratory.

3. Uncertainty Principle inspires the guess Lp ∝ 1/E implying mp ∝ E. Here E is the energy of the neutrino with respect to the rest system defined by the space-time sheet of the laboratory. Solar neutrinos indeed have the lowest energy (below 20 MeV) and the lowest value of Δm2. However, atmospheric neutrinos have energies starting from a few hundred MeV and Δm2 is higher by a factor of order 10. This suggests that the growth of Δm2 with E2 is slower than linear. It is perhaps not the energy alone which matters but the space-time sheet at which neutrinos topologically condense. MiniBooNE neutrinos above 500 MeV could topologically condense at space-time sheets for which the p-adic mass scale is higher than in the LSND experiments and one would have Δm2 >> 1 eV2 implying maximal mixing in a length scale much shorter than the size of the experimental apparatus.

4. One could also argue that topological condensation occurs in condensed matter and that no topological condensation occurs for high enough neutrino energies so that neutrinos remain massless. One can even consider the possibility that the p-adic length scale Lp is proportional to E/m02, where m0 is proportional to the mass scale associated with non-relativistic neutrinos. The p-adic mass scale would obey mp ∝ m02/E so that the characteristic mixing length would be longer by a factor of order 100 in the MiniBooNE experiment than in LSND.
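The two-flavour formula and the scaling of the characteristic mixing length implied by the last option (mp ∝ m02/E, hence Δm2 ∝ 1/E2 and L = E/Δm2 ∝ E3) can be put into a few lines. This is an illustrative sketch; the function names and the sample numbers are my choices, not the author's:

```python
import math

def mixing_probability(dm2_ev2, L_m, E_mev, sin2_2theta=1.0):
    """Two-flavour oscillation probability P = sin^2(2theta) sin^2(1.27 dm2 L/E).
    The constant 1.27 absorbs hbar and c when dm2 is in eV^2, L in m, E in MeV."""
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_m / E_mev) ** 2

def osc_length_m(dm2_ev2, E_mev):
    """Distance over which the oscillation phase grows to pi."""
    return math.pi * E_mev / (1.27 * dm2_ev2)

def mixing_length_ratio(E_high_mev, E_low_mev):
    """If dm2 scales as 1/E^2, the mixing length L = E/dm2 scales as E^3."""
    return (E_high_mev / E_low_mev) ** 3

# For dm2 = 1 eV^2 and E = 100 MeV the oscillation length is about 247 m.
print(osc_length_m(1.0, 100.0))

# An energy ratio of about 5 between MiniBooNE and LSND neutrinos would
# then lengthen the characteristic mixing length by a factor of order 100.
print(mixing_length_ratio(5.0, 1.0))  # 125.0
```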

To sum up, in TGD Universe LSND and MiniBooNE are consistent and provide additional support for the dependence of neutrino mass scale on neutrino energy.

For more details see the chapter Massless particles and particle massivation.

### About the phase transition transforming ordinary deuterium to exotic deuterium in cold fusion

I have already told about a model of cold fusion based on the nuclear string model predicting that ordinary nuclei have exotic charge states. In particular, the deuterium nucleus possesses a neutral exotic state, which would make it possible to overcome the Coulomb wall and make cold fusion possible.

1. The phase transition

The exotic deuterium at the surface of Pd target seems to form patches (for a detailed summary see TGD and Nuclear Physics). This suggests that a condensed matter phase transition involving also nuclei is involved. A possible mechanism giving rise to this kind of phase would be a local phase transition in the Pd target involving both D and Pd. In the above reference it was suggested that deuterium nuclei transform in this phase transition to "ordinary" di-neutrons connected by a charged color bond to Pd nuclei. In the recent case di-neutron could be replaced by neutral D.

The phase transition transforming a neutral color bond to a negatively charged one would certainly involve the emission of a W+ boson, which must be exotic in the sense that its Compton length is of the order of atomic size, so that it can be treated as a massless particle and the rate for the process would be of the same order of magnitude as for electromagnetic processes. One can imagine two options.

1. Exotic W+ boson emission generates a positively charged color bond between Pd nucleus and exotic deuteron as in the previous model.

2. The exchange of exotic W+ bosons between ordinary D nuclei and Pd induces the transformation Z→Z+1, an alchemic phase transition Pd→Ag. The most abundant Pd isotopes with A=105 and 106 would transform to states with the same mass but chemically equivalent to the two lightest long-lived Ag isotopes. 106Ag is unstable against β+ decay to Pd and 105Ag transforms to Pd via electron capture. For 106Ag (105Ag) the rest energy is 4 MeV (2.2 MeV) higher than for 106Pd (105Pd), which suggests that the resulting silver cannot be genuine.

This phase transition need not be favored energetically since the energy loaded into the electrolyte could induce it. The energies should (and in the recent scenario could) correspond to energies typical for condensed matter physics. The densities of Ag and Pd are 10.49 g/cm3 and 12.023 g/cm3, so that the phase transition would expand the volume by a factor ≈ 1.146, corresponding to a linear expansion by a factor 1.0465. The porous character of Pd would allow this. The needed critical packing fraction for Pd would guarantee one D nucleus per one Pd nucleus with sufficient accuracy.
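The expansion factors can be checked directly from the bulk densities; the figure 1.0465 is the cube root of the density ratio, i.e. the linear expansion factor, while the volume itself expands by roughly 15 per cent (variable names are mine):

```python
rho_ag = 10.49   # g/cm^3, bulk density of silver
rho_pd = 12.023  # g/cm^3, bulk density of palladium

volume_ratio = rho_pd / rho_ag          # volume expansion in Pd -> Ag
linear_ratio = volume_ratio ** (1 / 3)  # corresponding linear expansion

print(round(volume_ratio, 4))  # 1.1461
print(round(linear_ratio, 4))  # 1.0465
```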

2. Exotic weak bosons seem to be necessary

The proposed phase transition cannot proceed via the exchange of the ordinary W bosons. Rather, W bosons having Compton length of the order of atomic size are needed. These W bosons could correspond to a scaled up variant of ordinary W bosons having smaller mass, perhaps even of the order of electron mass. They could also be dark in the sense that the Planck constant for them would have the value h= nh0, implying a scaling up of their Compton size by n. For n≈ 2^48 the Compton length of the ordinary W boson would be of the order of atomic size so that for interactions below this length scale weak bosons would be effectively massless. A p-adically scaled up copy of weak physics with a large value of Planck constant could be in question. For instance, W bosons could correspond to the nuclear p-adic length scale L(k=113) and n=2^11.

For more details see the chapter TGD and Nuclear Physics and the new chapter Nuclear String Hypothesis.

### Nuclear strings and cold fusion

The option assuming that a strong isospin dependent force acts on the nuclear space-time sheet and binds pn pairs to singlets, such that the strong binding energy is very nearly zero in the singlet state by the cancellation of scalar and vector contributions, is the most promising variant of the nuclear string model. It predicts the existence of exotic di-, tri-, and tetra-neutron like particles and even negatively charged exotics obtained from 2H, 3H, 3He, and 4He by adding a negatively charged color bond. For instance, 3H extends to a multiplet with em charges 4,3,2,1,0,-1,-2. Heavy nuclei with apparent neutron excess could actually be such nuclei.

The exotic states are stable under beta decay for m(π)<me. The simplest neutral exotic nucleus corresponds to exotic deuteron with single negatively charged color bond. Using this as target it would be possible to achieve cold fusion since Coulomb wall would be absent. The empirical evidence for cold fusion thus supports the prediction of exotic charged states.

1. Signatures of cold fusion

In the following the consideration is restricted to cold fusion in which two deuterium nuclei react strongly since this is the basic reaction type studied.

In hot fusion there are three reaction types:

1. D+D → 4He + γ (23.8 MeV)

2. D+D → 3He+ n

3. D+D → 3H + p.

The rate for the process 1) predicted by standard nuclear physics is less than 10-3 times the rate for the processes 2) and 3). The reason is that the emission of the gamma ray involves the relatively weak electromagnetic interaction whereas the latter two processes are strong.
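The quoted 23.8 MeV, and the energy releases of the other two channels, follow from standard nuclear rest masses. A back-of-envelope check (mass values are standard figures in MeV, rounded to keV; this is bookkeeping, not part of the model):

```python
# Nuclear rest masses in MeV (standard values).
M = {"D": 1875.613, "He4": 3727.379, "He3": 2808.391,
     "H3": 2808.921, "n": 939.565, "p": 938.272}

q_gamma = 2 * M["D"] - M["He4"]             # D+D -> 4He + gamma
q_he3n  = 2 * M["D"] - (M["He3"] + M["n"])  # D+D -> 3He + n
q_h3p   = 2 * M["D"] - (M["H3"] + M["p"])   # D+D -> 3H + p

print(round(q_gamma, 1), round(q_he3n, 2), round(q_h3p, 2))  # 23.8 3.27 4.03
```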

The most obvious objection against cold fusion is that the Coulomb wall between the nuclei makes the mentioned processes extremely improbable at room temperature. Of course, this alone implies that one should not apply the rules of hot fusion to cold fusion. Cold fusion indeed differs from hot fusion in several other aspects.

1. No gamma rays are seen.

2. The flux of energetic neutrons is much lower than expected on the basis of the heat production rate and by extrapolating hot fusion physics to the recent case.

These signatures can also be (and have been!) used to claim that no real fusion process occurs.

Cold fusion has also other features, which serve as valuable constraints for the model building.

1. Cold fusion is not a bulk phenomenon. It seems that fusion occurs most effectively in nano-particles of Pd and the development of the required nano-technology has made it possible to produce fusion energy in a controlled manner. Concerning applications this is good news since there is no fear that the process could run out of control.

2. The ratio x of D atoms to Pd atoms in the Pd particle must lie in the critical range [.85,.90] for the production of 4He to occur. This explains the poor repeatability of the earlier experiments and also the fact that fusion occurred sporadically.

3. Also the transmutations of Pd nuclei are observed.

Below is a list of questions that any theory of cold fusion should be able to answer.

1. Why is cold fusion not a bulk phenomenon?

2. Why does cold fusion of light nuclei seem to occur only above the critical value x ≈ .85 of D concentration?

3. How are the fusing nuclei able to effectively circumvent the Coulomb wall?

4. How is the energy transferred from nuclear degrees of freedom to the much longer condensed matter length scales?

5. Why are gamma rays not produced, why is the flux of high energy neutrons so low, and why does the production of 4He dominate (also some tritium is produced)?

6. How are nuclear transmutations possible?

2. Could exotic deuterium make cold fusion possible?

One model of cold fusion has already been discussed in the TGD framework. The basic idea is that only the neutrons of the incoming and target nuclei can interact strongly, that is, their space-time sheets can fuse. One might hope that neutral deuterium having a single negatively charged color bond could make it possible to realize this mechanism.

1. Suppose that part of the target deuterium in the Pd catalyst corresponds to exotic deuterium with neutral nuclei, so that cold fusion would occur between neutral D in the target and charged incoming D, and the Coulomb wall in the nuclear scale would be absent. A possible mechanism giving rise to this kind of phase would be a local phase transition in the Pd target, possibly involving dark matter hierarchy.

2. The exotic variant of the ordinary D + D reaction yields final states in which 4He, 3He and 3H are replaced with their exotic counterparts with charge lowered by one unit. In particular, exotic 3H is neutral and there is no Coulomb wall hindering its fusion with Pd nuclei so that nuclear transmutations can occur.
That the neutron and gamma fluxes are low might be understood if for some reason only exotic 3H is produced, that is, the production of charged final state nuclei is suppressed. The explanation relies on the Coulomb wall at the nucleon level.

1. The initial state contains one charged and one neutral color bond, and the final state three (A=3) or four (A=4) color bonds. Additional neutral color bonds must be created in the reaction (one for the production of A=3 final states and two for the A=4 final state). The process involves the creation of neutral fermion pairs. The emission of one exotic gluon per bond decaying to a neutral pair is necessary to achieve this. This requires that the nucleon space-time sheets fuse together. Exotic D certainly belongs to the final state nucleus since the charged color bond is not expected to be split in the process.

2. The process necessarily involves a temporary fusion of nucleon space-time sheets. One can understand the selection rules if only neutron space-time sheets can fuse appreciably so that only 3H would be produced. Here Coulomb wall at nucleon level should enter into the game.

3. Protonic space-time sheets always have the same positive sign of charge so that there is a Coulomb wall between them. This explains why the reactions producing exotic 4He do not occur appreciably. If the quark/antiquark at the neutron end of the color bond of ordinary D has positive charge, there is Coulomb attraction between the proton and the corresponding negatively charged quark. Thus energy minimization implies that the neutron space-time sheet of ordinary D has positive net charge and Coulomb repulsion prevents it from fusing with the proton space-time sheet of target D. The desired selection rules would thus be due to the Coulomb wall at the nucleon level.

For more details see the chapter TGD and Nuclear Physics and the new chapter Nuclear String Hypothesis.

### Why di-neutron does not exist?

As the previous postings (see this and this) should make clear, the nuclear string model works amazingly well. There is however an objection against the model. This is the experimental absence of a stable n-n bound state analogous to the deuteron, favored as it would be by the lacking Coulomb repulsion and by the attractive electromagnetic spin-spin interaction in the spin 1 state. The same applies to tri-neutron states and possibly also to the tetra-neutron state. There has however been speculation about the existence of di-neutron and poly-neutron states.

One can consider a simple explanation for the absence of genuine poly-neutrons.

1. The formation of negatively charged bonds with neutrons replaced by protons would minimize both nuclear mass and Coulomb energy although binding energy per nucleon would be reduced and the increase of neutron number in heavy nuclei would be only apparent. As found, this could also explain why heavy nuclei become unstable.

2. The strongest hypothesis is that mass minimization forces protons and negatively charged color bonds to serve as the basic building bricks of all nuclei. If this were the case, the deuteron would be a di-proton having a negatively charged color bond. The total binding energy would be only 2.222 - 1.293 = 0.929 MeV. Di-neutron would be impossible for this option since only one color bond can be present in this state.

3. The small mass difference m(3He)-m(3H)=.018 MeV would have a natural interpretation as Coulomb interaction energy. Tri-neutron would be allowed. The alpha particle would consist of four protons and two negatively charged color bonds and the actual binding energy per nucleon would be smaller by (mn-mp)/2 than believed. Tetra-neutron would also consist of four protons and the binding energy per nucleon would be smaller by mn-mp than what obtains from the previous estimate. Beta decays would be basically beta decays of the exotic quarks associated with color bonds.
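The arithmetic behind this option is easy to check (the inputs are the standard neutron-proton mass difference and the deuteron binding energy used in the text; variable names are mine):

```python
m_n_minus_m_p = 1.293  # MeV, neutron-proton mass difference
be_deuteron   = 2.222  # MeV, deuteron binding energy (value used in the text)

# Deuteron as a di-proton plus one negatively charged color bond:
# the apparent binding energy is reduced by one n-p mass difference.
be_exotic = be_deuteron - m_n_minus_m_p
print(round(be_exotic, 3))  # 0.929

# Alpha particle as 4 protons + 2 negative bonds: binding energy per
# nucleon smaller by (m_n - m_p)/2 than the conventional value.
print(round(m_n_minus_m_p / 2, 4))  # 0.6465
```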

Does this model work? I performed the calculations for the binding energies by assuming that ordinary nuclei have protons and neutral and negatively charged color bonds as building bricks.
1. The resulting picture is not satisfactory. The model with ordinary neutrons and protons and color bonds works excellently if one assumes that standard isospin dependent strong interaction is present at nuclear space-time sheets besides the color interaction mediated by much longer color magnetic flux tubes. This fits nicely with the visualization of nucleus as a kind of plant such that nuclear space-time sheet serves as a "seed" from which the long color flux tubes emanate from nucleons and return back.

2. For pn states, which are singlets with respect to strong isospin, this contribution to energy turns out to be surprisingly small, of order .1 MeV: this explains why the fit without this contribution was so good. One can obtain a complete fit for A≤4 nuclei by simple fractal scaling arguments from that for A>4 nuclei by adding this contribution.

3. If the isospin dependent strong contribution is much larger in non-singlet states (expressible in terms of isospin Casimirs) one can understand the experimental absence of poly-neutrons in the standard sense of the word. Since color bonds can carry em charges (0,1,-1), exotic nuclear states are however predicted. For instance, 3H with 3 color bonds in principle extends to a multiplet with charges running from +4 to -2. This seems to be an unavoidable prediction of TGD.

For more details see the chapter TGD and Nuclear Physics and the new chapter Nuclear String Hypothesis.

### Still about nuclear string hypothesis

The nuclear string model has evolved dramatically during the last week or two and now allows one to understand the nuclear binding energies of both A>4 and A≤4 nuclei in terms of three fractal variants of QCD. The model also explains giant resonances and so called pygmy resonances in terms of de-coherence of Bose-Einstein condensates of exotic pion like color bosons to sub-condensates. In its simplicity the model is comparable to the Bohr model of the atom, and I cannot avoid the impression that the tragedy of theoretical nuclear physics was that it was born long before anyone knew about the notion of fractality. For these reasons a second posting about these ideas, involving some repetition, is in order.

1. Background

Nuclear string hypothesis is one of the most dramatic almost-predictions of TGD. The hypothesis in its original form assumes that nucleons inside the nucleus organize to closed nuclear strings, with neighboring nucleons of the string connected by exotic meson bonds consisting of a color magnetic flux tube with quark and anti-quark at its ends. The lengths of the flux tubes correspond to the p-adic length scale of the electron and therefore the mass scale of the exotic mesons is around 1 MeV, in accordance with the general scale of nuclear binding energies. The long lengths of the em flux tubes increase the distance between nucleons and reduce Coulomb repulsion.

A fractally scaled up variant of ordinary QCD with respect to p-adic length scale would be in question and the usual wisdom about ordinary pions and other mesons as the origin of nuclear force would be simply wrong in TGD framework as the large mass scale of ordinary pion indeed suggests. The presence of exotic light mesons in nuclei has been proposed also by Chris Illert based on evidence for charge fractionization effects in nuclear decays.

2. A>4 nuclei as nuclear strings consisting of A< 4 nuclei

During last weeks a more refined version of nuclear string hypothesis has evolved.

1. The first refinement of the hypothesis is that 4He nuclei and A<4 nuclei and possibly also nucleons appear as basic building blocks of nuclear strings instead of nucleons; these building blocks can in turn be regarded as strings of nucleons. The large number of stable lightest isotopes of the form A=4n supports the hypothesis that the number of 4He nuclei is maximal. Even the weak decay characteristics might be reduced to those for A<4 nuclei using this hypothesis.

2. One can understand the behavior of nuclear binding energies surprisingly well from the assumptions that the total strong binding energy associated with A≤4 building blocks is additive for nuclear strings and that the addition of neutrons tends to reduce the Coulombic energy per string length by increasing the length of the nuclear string, implying increased binding energy and stabilization of the nucleus.

3. In the TGD framework tetra-neutron is interpreted as a variant of the alpha particle obtained by replacing two meson-like stringy bonds connecting neighboring nucleons of the nuclear string with their negatively charged variants. For heavier nuclei tetra-neutron is needed as an additional building brick and the local maxima of the binding energy EB per nucleon as a function of neutron number are consistent with the presence of tetra-neutrons. The additivity of magic numbers 2, 8, 20, 28, 50, 82, 126 predicted by nuclear string hypothesis is also consistent with experimental facts and new magic numbers are predicted.

3. Bose-Einstein condensation of color bonds as a mechanism of nuclear binding

The attempt to understand the variation of the nuclear binding energy and its maximum for Fe leads to a quantitative model of nuclei lighter than Fe as color bound Bose-Einstein condensates of 4He nuclei or rather, of pion like colored states associated with color flux tubes connecting 4He nuclei.

1. The crucial element of the model is that color contribution to the binding energy is proportional to n2 where n is the number of color bonds. Fermi statistics explains the reduction of EB for the nuclei heavier than Fe. Detailed estimate favors harmonic oscillator model over free nucleon model with oscillator strength having interpretation in terms of string tension.

2. A fractal scaling argument allows one to understand 4He and lighter nuclei as strings formed from nucleons bound together by color bonds. Three fractally scaled variants of QCD corresponding to A>4, A=4 and A<4 nuclei are thus involved. The binding energies of the lighter nuclei are also predicted surprisingly accurately by applying simple p-adic scaling to the parameters of the model for the electromagnetic and color binding energies in heavier nuclei.

4. Giant dipole resonance as de-coherence of Bose-Einstein condensate of color bonds

Giant (dipole) resonances and the so-called pygmy resonances, interpreted in terms of the de-coherence of the Bose-Einstein condensates associated with A≤4 nuclei and with the nuclear string formed from A≤4 nuclei, provide a unique test for the model. The key observation is that splitting the Bose-Einstein condensate into pieces costs a precisely defined energy due to the n² dependence of the total binding energy.
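Before listing the predictions, the bookkeeping implied by the n² dependence can be sketched as follows (E0 and the example partitions are illustrative placeholders of my own, not values given in the model):

```python
def splitting_cost(n, pieces, E0=1.0):
    """Energy needed to split a condensate of n color bonds into the given
    pieces, assuming the total binding energy scales as E(n) = E0 * n**2."""
    assert sum(pieces) == n, "pieces must account for all bonds"
    return E0 * (n**2 - sum(m**2 for m in pieces))

# Each decay channel has its own sharply defined cost, e.g. for n = 4:
print(splitting_cost(4, [3, 1]))     # 6.0 units
print(splitting_cost(4, [2, 2]))     # 8.0 units
print(splitting_cost(4, [2, 1, 1]))  # 10.0 units
```

The point is simply that the quadratic dependence makes each way of splitting the condensate cost a distinct, precisely computable energy, which is what turns the resonance spectrum into a sharp test.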

1. For 4He de-coherence the model predicts a singlet line at 12.74 MeV and a triplet (25.48, 27.30, 29.12) MeV centered at ≈ 27 MeV and spanning a 4 MeV wide range, which is of the same order as the width of the giant dipole resonance for nuclei with full shells.

2. The de-coherence at the level of the nuclear string predicts 1 MeV wide bands 1.4 MeV above the basic lines. The bands decompose into lines with precisely predicted energies, and also these contribute to the width. The predictions are in surprisingly good agreement with the experimental values. The so-called pygmy resonance appearing in neutron rich nuclei can be understood as de-coherence for A=3 nuclei: a doublet (7.520, 8.460) MeV at ≈ 8 MeV is predicted. At least the prediction for the position is correct.
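As a quick arithmetic check of the relations between the quoted lines (this only verifies the stated numbers against each other; it is not a derivation):

```python
singlet = 12.74
triplet = (25.48, 27.30, 29.12)

print(triplet[0] / singlet)                # 2.0: first triplet line doubles the singlet
spacings = [round(b - a, 2) for a, b in zip(triplet, triplet[1:])]
print(spacings)                            # [1.82, 1.82]: equally spaced triplet
print(round(sum(triplet) / 3, 2))          # 27.3: centered near 27 MeV
print(round(triplet[-1] - triplet[0], 2))  # 3.64: roughly 4 MeV span

doublet = (7.520, 8.460)
print(round(sum(doublet) / 2, 2))          # 7.99: pygmy doublet sits near 8 MeV
```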

I am grateful to Elio Conte for discussions which stimulated a more detailed consideration of the nuclear string model.

For more details see the chapter TGD and Nuclear Physics and the new chapter Nuclear String Hypothesis.

### Experimental evidence for colored muons

One of the basic deviations of TGD from the standard model is the prediction of colored excitations of quarks and leptons. The reason is that color is not a spin-like quantum number but a partial wave in CP2 degrees of freedom, and thus angular-momentum-like. Accordingly, new scaled variants of QCD are predicted. As a matter of fact, the dark matter hierarchy and the p-adic length scale hierarchy populate the many-sheeted Universe with fractal variants of standard model physics.

In the blog of Lubos there were comments about a new particle. The finding has been published (Phys. Rev. D74 and Phys. Rev. Lett. 98). The mass of the new particle, which is either a scalar or a pseudoscalar, is 214.4 MeV, whereas the muon mass is 105.6 MeV. The mass is about 1.5 per cent higher than two times the muon mass. The proposed interpretation is as a light Higgs. I do not immediately resonate with this interpretation, although p-adically scaled up variants of also Higgs bosons live happily in the fractal Universe of TGD.

Decades ago an anomalous production of electron-positron pairs in heavy ion nuclear collisions just above the Coulomb wall was discovered, with the mass of the pseudoscalar resonance slightly above 2me. All this has of course been forgotten, since it is just boring low energy phenomenology to which brave brane theorists do not waste their precious time;-). This should however set bells ringing.

The TGD explanation is in terms of exotic pions consisting of colored variants of ordinary electrons predicted by TGD. I of course predicted that also muon and tau would give rise to a scaled variant of QCD type theory. The Karmen anomaly gave indications that the muonic variant of this QCD is there.

Just now I am working with the nuclear string model, in which a scaled variant of QCD for exotic quarks in the p-adic length scale of the electron is responsible for the binding of 4He nuclei to nuclear strings. One cannot exclude the possibility that the fermion and antifermion at the ends of the color flux tubes connecting nucleons are actually colored leptons, although the working hypothesis is that they are an exotic quark and antiquark. One can of course also turn the argument around: could it be that lepto-pions are "leptonuclei", that is bound states of ordinary leptons bound by color flux tubes for a QCD in a length scale considerably shorter than the p-adic length scale of the lepton?

This QCD binds 4He nuclei to tangled nuclear strings, and two other scaled variants of QCD bind nucleons to 4He and lighter nuclei. The model is extremely simple and quantitatively amazingly successful. For instance, the latest discovery is that the energies of giant dipole resonances can be predicted, and a first inspection shows that they come out correctly.

For more details about the lepto-hadron hypothesis see the chapter The Recent Status of Lepto-Hadron Hypothesis. For the recent state of nuclear string model see the new chapter Further progress in Nuclear String Hypothesis.

### Further progress related to nuclear string hypothesis

Nuclear string hypothesis is one of the most dramatic almost-predictions of TGD. The hypothesis states that nucleons inside the nucleus organize to closed nuclear strings, with neighboring nucleons of the string connected by exotic meson bonds consisting of a color magnetic flux tube with a quark and anti-quark at its ends. The lengths of the flux tubes correspond to the p-adic length scale of the electron, and therefore the mass scale of the exotic mesons is around 1 MeV, in accordance with the general scale of nuclear binding energies. The long lengths of the em flux tubes increase the distance between nucleons and reduce the Coulomb repulsion. A fractally scaled up variant of ordinary QCD with respect to the p-adic length scale would be in question, and the usual wisdom about ordinary pions and other mesons as the origin of the nuclear force would simply be wrong in the TGD framework, as the large mass scale of the ordinary pion indeed suggests. The presence of exotic light mesons in nuclei has been proposed also by Chris Illert based on evidence for charge fractionization effects in nuclear decays.

Nuclear string hypothesis leads to rather detailed predictions and allows one to understand the behavior of nuclear binding energies surprisingly well from the assumptions that the total strong binding energy is additive for nuclear strings and that the addition of neutrons tends to reduce the Coulombic energy per string length by increasing the length of the nuclear string, implying increased binding energy and stabilization of the nucleus. Perhaps also weak decay characteristics could be understood in a simple manner by assuming that stable nuclei lighter than Ca contain a maximum number of alpha particles plus a minimum number of lighter isotopes. The large number of lightest stable isotopes of the form A=4n supports this hypothesis.
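The A=4n pattern can be inspected directly against standard isotope tables. A small illustration (the isotope data below is hard-coded from standard tables and is my own addition, not data from the text):

```python
# Lightest stable isotope (mass number A) for some light elements,
# hard-coded from standard isotope tables.
lightest_stable = {
    "He": 3, "C": 12, "N": 14, "O": 16, "Ne": 20, "Mg": 24,
    "Si": 28, "S": 32, "Ar": 36, "Ca": 40,
}

# Elements whose lightest stable isotope has A = 4n, i.e. an integer
# number of alpha-particle building blocks.
alpha_like = [el for el, A in lightest_stable.items() if A % 4 == 0]
print(alpha_like)  # ['C', 'O', 'Ne', 'Mg', 'Si', 'S', 'Ar', 'Ca']
```

For the even-Z elements from C up to Ca the lightest stable isotope is indeed of the form A=4n, which is the pattern the hypothesis appeals to.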

In the TGD framework the tetra-neutron is interpreted as a variant of the alpha particle obtained by replacing two of the meson-like stringy bonds connecting neighboring nucleons of the nuclear string with their negatively charged variants (see this). For heavier nuclei the tetra-neutron is needed as an additional building block, and the local maxima of the binding energy E_B per nucleon as a function of neutron number are consistent with the presence of tetra-neutrons. The additivity of the magic numbers 2, 8, 20, 28, 50, 82, 126 predicted by the nuclear string hypothesis is also consistent with experimental facts; new magic numbers are predicted, and there is evidence for them.

The attempt to understand the variation of the nuclear binding energy and its maximum for Fe leads to a quantitative model of nuclei lighter than Fe as color bound Bose-Einstein condensates of 4He nuclei, or rather of the color flux tubes defining meson-like structures connecting them. Fermi statistics explains the reduction of EB for nuclei heavier than Fe. A detailed estimate favors the harmonic oscillator model over the free nucleon model, with the oscillator strength having an interpretation in terms of string tension.

A fractal scaling argument allows one to understand 4He and lighter nuclei as analogous states formed from nucleons, and the binding energies are predicted quite satisfactorily. Giant dipole resonance, interpreted as a de-coherence of the Bose-Einstein condensate into pieces, provides a unique test for the model, and precise predictions for binding energies follow.

I am grateful to Elio Conte for discussions which stimulated a more detailed consideration of the nuclear string model.

For more details see the chapter TGD and Nuclear Physics and the new chapter Further Progress in Nuclear String Hypothesis.

### Could also gauge bosons correspond to wormhole contacts?

The developments in the formulation of quantum TGD which have taken place during the period 2005-2007 (see this, this, and this) suggest dramatic simplifications of the general picture about the elementary particle spectrum. p-Adic mass calculations (see this, this, and this) leave a lot of freedom concerning the detailed identification of elementary particles. The basic open question is whether the theory is free at the parton level, as suggested by the recent view about the construction of the S-matrix and by the almost topological QFT property of quantum TGD at the parton level (see this and this). Or more concretely: do partonic 2-surfaces carry only free many-fermion states, or can they also carry bound states of fermions and anti-fermions identifiable as bosons?

What is known is that the Higgs boson corresponds naturally to a wormhole contact (see this). The wormhole contact connects two space-time sheets with induced metric having Minkowski signature. The wormhole contact itself has a Euclidean metric signature, so that there are two wormhole throats which are light-like 3-surfaces and would carry fermion and anti-fermion number in the case of Higgs. Irrespective of the identification of the remaining elementary particles, MEs (massless extremals, topological light rays) would serve as space-time correlates for elementary bosons. Higgs type wormhole contacts would connect MEs to the larger space-time sheet, and the coherent state of neutral Higgs would generate the gauge boson mass and could contribute also to the fermion mass.

The basic question is whether this identification applies also to gauge bosons (certainly not to the graviton). This identification would imply quite a dramatic simplification, since the theory would be free at the single parton level and the only stable parton states would be fermions and anti-fermions. As will be found, this identification allows one to understand the dramatic difference between the graviton and other gauge bosons and the weakness of the gravitational coupling, gives a connection with the string picture of gravitons, and predicts that stringy states are directly relevant for nuclear and condensed matter physics, as has been proposed already earlier (see this, this, and this).

1. Option I: Only Higgs as a wormhole contact

The only possibility considered hitherto has been that elementary bosons correspond to partonic 2-surfaces carrying a fermion-anti-fermion pair such that either the fermion or the anti-fermion has a non-physical polarization. For this option CP2 type extremals condensed on MEs and travelling with light velocity would serve as a model for both fermions and bosons. MEs are not absolutely necessary for this option. The couplings of fermions and gauge bosons to Higgs would be topologically very similar. Consider now the counter arguments.

1. This option fails if the theory at the partonic level is a free field theory, so that elementary bosons cannot be identified as bound states of a fermion and an anti-fermion with either of them having a non-physical polarization.

2. A mathematically oriented mind could also argue that the asymmetry between Higgs and the elementary gauge bosons is not plausible, whereas an asymmetry between fermions and gauge bosons is. The mathematician could continue by arguing that if wormhole contacts with the net quantum numbers of the Higgs boson are possible, also those with gauge boson quantum numbers are unavoidable.

3. A physics oriented thinker could argue that since gauge bosons do not exhibit the family replication phenomenon (which has a topological explanation in the TGD framework), there must be a profound difference between fermions and bosons.

2. Option II: All elementary bosons as wormhole contacts

The hypothesis that quantum TGD reduces to a free field theory at parton level is consistent with the almost topological QFT character of the theory at this level. Hence there are good motivations for studying explicitly the consequences of this hypothesis.

2.1 Elementary bosons must correspond to wormhole contacts if the theory is free at parton level

Also gauge bosons could correspond to wormhole contacts connecting MEs (see this) to a larger space-time sheet and propagating with light velocity. For this option there would be no need to assume the presence of a non-physical fermion or anti-fermion polarization, since the fermion and anti-fermion would reside at different wormhole throats. Only the definition of what it is to be non-physical would be different on the light-like 3-surfaces defining the throats.

The difference would naturally relate to the different time orientations of the wormhole throats and make itself manifest via the definition of the light-like operator o = x^kγ_k appearing in the generalized eigenvalue equation for the modified Dirac operator (see this and this). For the first throat o^k would correspond to a light-like tangent vector t^k of the partonic 3-surface and for the second throat to its M4 dual t_d^k in a preferred rest system in M4 (implied by the basic construction of quantum TGD). What is nice is that this picture does away with the question of whether t^k or t_d^k should appear in the modified Dirac operator.

Rather satisfactorily, MEs (massless extremals, topological light rays) would be necessary for the propagation of wormhole contacts, so that they would naturally emerge as classical correlates of bosons. The simplest model for fermions would be as CP2 type extremals topologically condensed on MEs, and for bosons as pieces of CP2 type extremals connecting the ME to the larger space-time sheet. For fermions topological condensation is possible on either space-time sheet.

2.2 Phase conjugate states and matter-antimatter asymmetry

By fermion number conservation fermion-boson and boson-boson couplings must involve the fusion of partonic 3-surfaces along their ends identified as wormhole throats. Bosonic couplings would differ from fermionic couplings only in that the process would be 2→ 4 rather than 1→ 3 at the level of throats.

The decay of a boson to an ordinary fermion pair with the fermion and anti-fermion at the same space-time sheet would take place via the basic vertex at which the 2-dimensional ends of light-like 3-surfaces are identified. The sign of the boson energy would tell whether the boson is an ordinary boson or its phase conjugate (say a phase conjugate photon of laser light) and also dictate the sign of the time orientation of the fermion and anti-fermion resulting in the decay.

Also a candidate for a new kind of interaction vertex emerges. The splitting of a bosonic wormhole contact would generate a fermion and a time-reversed anti-fermion having an interpretation as a phase conjugate fermion. This process cannot correspond to a decay of a boson to an ordinary fermion pair. The splitting process could generate matter-antimatter asymmetry in the sense that fermionic antimatter would consist dominantly of negative energy anti-fermions at space-time sheets having negative time orientation (see this and this).

This vertex would define the fundamental interaction between matter and phase conjugate matter. Phase conjugate photons play a key role in the TGD based quantum model of living matter. This involves a model for memory as communications in the time reversed direction, a mechanism of intentional action involving signalling to the geometric past, and a mechanism of remote metabolism involving the sending of negative energy photons to an energy reservoir (see this). The splitting of wormhole contacts has been considered as a candidate for a mechanism realizing Boolean cognition in terms of "cognitive neutrino pairs" resulting in the splitting of wormhole contacts with the net quantum numbers of the Z0 boson (see this).

3. Graviton and other stringy states

A fermion and anti-fermion can give rise to only a single unit of spin, since it is impossible to assign angular momentum to the relative motion of the wormhole throats. Hence the identification of the graviton as a single wormhole contact is not possible. The only conclusion is that the graviton must be a superposition of fermion-anti-fermion pairs and boson-anti-boson pairs, with coefficients determined by the coupling of the parton to the graviton. Graviton-graviton pairs might emerge in higher orders. The fermion and anti-fermion would reside at the same space-time sheet and would have a non-vanishing relative angular momentum. Also bosons could have non-vanishing relative angular momentum, and Higgs bosons must indeed possess it.

Gravitons are stable if the throats of the wormhole contacts carry non-vanishing gauge fluxes, so that the throats of the wormhole contacts are connected by flux tubes carrying the gauge flux. The mechanism producing gravitons would be the splitting of partonic 2-surfaces via the basic vertex. A connection with the string picture emerges, with the counterpart of the string identified as the flux tube connecting the wormhole throats. The gravitational constant would relate directly to the value of the string tension.

The TGD view about coupling constant evolution (see this) predicts G ∝ Lp², where Lp is the p-adic length scale, and that the physical graviton corresponds to p = M127 = 2^127 − 1. Thus the graviton would have a geometric size of the order of the Compton length of the electron, which is something totally new from the point of view of the usual Planck length scale dogmatism. In principle an entire p-adic hierarchy of gravitational forces is possible with increasing values of G.

The explanation for the small value of the gravitational coupling strength serves as a test for the proposed picture. The exchange of an ordinary gauge boson involves the exchange of a single CP2 type extremal giving the exponent of the Kähler action compensated by state normalization. In the case of graviton exchange two wormhole contacts are exchanged, and this gives a second power of the exponent of the Kähler action which is not compensated. It would be this additional exponent that gives rise to the huge reduction of the gravitational coupling strength from the naive estimate G ≈ Lp².

Gravitons are obviously not the only stringy states. For instance, one obtains spin 1 states when the ends of the string correspond to a gauge boson and Higgs. Also non-vanishing electro-weak and color quantum numbers are possible, and in this case the stringy states couple to elementary partons via standard couplings. The TGD based model for nuclei as nuclear strings having length of order L(127) (see this) suggests that strings with a light M127 quark and anti-quark at their ends, identifiable as companions of the ordinary graviton, are responsible for the strong nuclear force instead of exchanges of ordinary mesons or color van der Waals forces.

Also the TGD based model of high Tc super-conductivity involves stringy states connecting the space-time sheets associated with the electrons of the exotic Cooper pair (see this and this). Thus stringy states would play a key role in nuclear and condensed matter physics, which means a profound departure from stringy wisdom and a breakdown of the standard reductionistic picture.

4. Spectrum of non-stringy states

The 1-throat character of fermions is consistent with the generation-genus correspondence. The 2-throat character of bosons predicts that bosons are characterized by the genera (g1,g2) of the wormhole throats. Note that the interpretation of fundamental fermions as wormhole contacts with second throat identified as a Fock vacuum is excluded.

The general bosonic wave-function would be expressible as a matrix Mg1,g2, and ordinary gauge bosons would correspond to a diagonal matrix Mg1,g2 ∝ δg1,g2, as required by the absence of neutral flavor changing currents (say gluons transforming quark genera to each other). 8 new gauge bosons are predicted if one allows all 3×3 matrices with complex entries orthonormalized with respect to the trace, meaning an additional dynamical SU(3) symmetry. Ordinary gauge bosons would be SU(3) singlets in this sense. The existing bounds on flavor changing neutral currents give bounds on the masses of the boson octet. The 2-throat character of bosons should relate to the low value T = 1/n << 1 of the p-adic temperature of gauge bosons, as contrasted to T = 1 for fermions.

If one forgets the complications due to the stringy states (including the graviton), the spectrum of elementary fermions and bosons is amazingly simple and almost reduces to the spectrum of the standard model. In the fermionic sector one would have the fermions of the standard model. By simple counting a leptonic wormhole throat could carry 2³ = 8 states corresponding to 2 polarization states, 2 charge states, and the sign of the lepton number, giving 8+8 = 16 states altogether. Taking into account phase conjugates gives 16+16 = 32 states.

In the non-stringy boson sector one would have bound states of fermions and phase conjugate fermions. Since only two polarization states are allowed for massless states, one obtains (2+1)×(3+1) = 12 states plus phase conjugates, giving 12+12 = 24 states. The addition of color singlet states for quarks gives 48 gauge bosons with vanishing fermion number and color quantum numbers. Besides the 12 electro-weak bosons and their 12 phase conjugates there are 12 exotic bosons and their 12 phase conjugates. For the exotic bosons the couplings to quarks and leptons are determined by the orthogonality of the coupling matrices of the ordinary and exotic boson states. For the exotic counterparts of W bosons and Higgs the sign of the coupling to quarks is opposite. For the photon and Z0 also the relative magnitudes of the couplings to quarks must change. Altogether this makes 48+16+16 = 80 states. Gluons would result as color octet states. Family replication would extend each elementary boson state into an SU(3) octet and singlet and the elementary fermion states into SU(3) triplets.
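The totals quoted above follow from simple combinatorics. A sketch transcribing the counting (the groupings are my reading of the text, not an independent derivation):

```python
# Fermionic throat: 2 polarizations x 2 charge states x 2 signs of
# lepton (or quark) number = 8 states each for leptons and quarks.
lepton_states = 2 * 2 * 2                                # 8
fermion_states = lepton_states + lepton_states           # 8 + 8 = 16
with_conjugates = 2 * fermion_states                     # 16 + 16 = 32
print(fermion_states, with_conjugates)                   # 16 32

# Non-stringy bosons: (2+1) x (3+1) = 12 states plus phase conjugates,
# doubled again by adding the color-singlet quark pairs.
boson_states = (2 + 1) * (3 + 1)                         # 12
gauge_boson_states = 2 * (boson_states + boson_states)   # 48
total = gauge_boson_states + fermion_states + fermion_states
print(boson_states, gauge_boson_states, total)           # 12 48 80
```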

5. Higgs mechanism

Consider next the generation of mass as a vacuum expectation value of Higgs when also gauge bosons correspond to wormhole contacts. The presence of Higgs condensate should make the simple rectilinear ME curved so that the average propagation of fields would occur with a velocity less than light velocity. Field equations allow MEs of this kind as solutions (see this).

The finite range of the interaction, characterized by the gauge boson mass, should correlate with the finite range for the free propagation of the wormhole contacts representing bosons along the corresponding ME. The finite range would result from the emission of Higgs like wormhole contacts from the gauge boson like wormhole contact, leading to the generation of coherent states of neutral Higgs particles. The emission would also induce a non-rectilinearity of the ME as a correlate for the recoil in the emission of the Higgs.

For more details see either the chapter Construction of Elementary Particle Vacuum Functionals or the chapter Massless states and Particle Massivation.

### Can one deduce the Yukawa couplings of Higgs from the anomalous ratio H/Z0(b pair):H/Z0(tau pair)?

As I told in the previous posting, there have been cautious claims (see the New Scientist article, the postings in the blog of Tommaso Dorigo, and the postings of John Conway in Cosmic Variance) about the possible detection of the first Higgs events.

According to a simple argument of John Conway based on the branching ratios of Z0 and the standard model Higgs to τ-τbar and b-bbar, the Z0→ τ-τbar excess predicts that the ratio of Higgs events to Z0 events for Z0→ b-bbar is related by a scaling factor

[B(H→ b-bbar)/B(H→ τ-τbar)]:[B(Z0→ b-bbar)/B(Z0→ τ-τbar)] ≈ 10/5.6=1.8

to that in the Z0→ τ-τbar case. The prediction seems to be too high, which raises doubts about the identification of the excess in terms of Higgs.

In a shamelessly optimistic mood and forgetting that mere statistical fluctuations might be in question, one might ask whether the inconsistency of τ-τbar and b-bbar excesses could be understood in TGD framework.

1. The couplings of Higgs to fermions need not scale as mass in the TGD framework. Rather, the simplest guess is that the Yukawa couplings scale like the p-adic mass scale m(k) = 1/L(k), where L(k) is the p-adic length scale of the fermion. Fermionic masses can be written as m(F) = x(F)/L(k), where the numerical factor x(F) > 1 depends on the electro-weak quantum numbers and is different for quarks and leptons. If the leading contribution to the fermion mass comes from p-adic thermodynamics, the Yukawa couplings in the TGD framework can be written as h(F) = ε(F) m(F)/x(F), ε << 1. The parameter ε should be the same for all quarks resp. all leptons, but need not be the same for leptons and quarks, so that one can write ε(quark) = εQ and ε(lepton) = εL. This is obviously an important feature distinguishing between Higgs decays in TGD and in the standard model.

2. The dominating contribution to the mass of the highest generation fermion, which in the absence of topological mixing corresponds to a genus g=2 partonic 2-surface, comes from the modular degrees of freedom; it is the same for quarks and leptons and does not depend on the electro-weak quantum numbers at all (only the p-adic length scale matters). Topological mixing inducing CKM mixing affects x(F) and tends to reduce x(τ), x(b), and x(t).

3. In the TGD framework the details of the dynamics leading to the final states involving Z0 bosons and Higgs bosons are different, since one expects fermion-Higgs vertices to be suppressed to the degree that weak-boson-Higgs vertices could dominate in the production of Higgs. Since these details should not be relevant for the experimental determination of the Z0→ τ-τbar and Z0→ b-bbar distributions, the above argument can be modified in a straightforward manner by looking at how the branching ratio R(b-bbar)/R(τ-τbar) is affected by the modification of the Yukawa couplings for b and τ. What happens is the following:

B(H→ b-bbar)/B(H→ τ-τbar) = mb²/mτ² → [B(H→ b-bbar)/B(H→ τ-τbar)] × X ,

X = (εQ²/εL²) × (xτ²/xb²) .

Generalizing the simple argument of Conway one therefore has

(H/Z0)(b-bbar) = 1.8 × (εQ²/εL²) × (xτ²/xb²) × (H/Z0)(τ-τbar).

Since the topological mixing of both the charged leptons and the quarks of genus 2 with lower genera is predicted to be very small (see this), xτ/xb ≈ 1 is expected to hold true. Hence the situation is not improved unless one has εQ/εL < 1, meaning that the coupling of Higgs to the p-adic mass scale would be weaker for quarks than for leptons.

Can one then guess the value of r = εQ/εL, and perhaps even the Yukawa coupling, from general arguments?

1. The actual value of r should relate to electro-weak physics at a very fundamental level. The ratio r = 1/3 of the Kähler couplings of quarks and leptons is certainly this kind of number. This would reduce the prediction for (H/Z0)(b-bbar) by a factor of 1/9. To the best of my understanding, this improves the situation considerably (see for yourself).

2. The Kähler charge QK equals the electro-weak U(1) charge QU(1). Furthermore, the Kähler coupling strength, which is an RG invariant, equals the U(1) coupling strength at the p-adic length scale of the electron but not generally (see this). This observation encourages the guess that, apart from a numerical factor of order unity, ε² itself is given either by αK QK², and is thus RG invariant, or by αU(1) QU(1)². The contribution of the Higgs vacuum expectation to the fermionic mass would then be roughly a fraction 10^-2 - 10^-3 of the fermion mass, in consistency with the p-adic mass calculations.
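The effect of the guess r = εQ/εL = 1/3 (item 1 above) on Conway's scaling factor is simple arithmetic, assuming xτ/xb ≈ 1 as argued earlier:

```python
conway_factor = 10 / 5.6   # standard-model double ratio, ~1.8
r = 1 / 3                  # guessed ratio of quark to lepton couplings
x_ratio = 1.0              # x_tau / x_b, ~1 for small topological mixing

# TGD-modified scaling factor between the b-bbar and tau-taubar excesses.
tgd_factor = conway_factor * r**2 * x_ratio**2
print(round(conway_factor, 2), round(tgd_factor, 2))  # 1.79 0.2
```

So instead of the b-bbar excess being 1.8 times the tau-taubar excess, it would be only about 0.2 times it, which is the direction needed to reconcile the two signals.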

Of course, it might turn out that a fake Higgs is in question. What is however important is that the deviation of the Yukawa couplings allowed by TGD for Higgs from those predicted by the standard model could manifest itself in the ratio of the Z0→ b-bbar and Z0→ τ-τbar excesses.

For more details see the chapter Massless states and Particle Massivation.

### Indications for Higgs with mass of 160 GeV

There have been cautious claims (see New Scientist article, the postings in the blog of Tommaso Dorigo, and the postings of John Conway in Cosmic Variance) about the possible detection of first Higgs events.

This inspires more precise considerations of the experimental signatures of the TGD counterpart of Higgs. This kind of theorizing is of course speculative and remains at a general qualitative level, since no calculational formalism exists and one must assume that gauge field theory provides an approximate description of the situation.

Has Higgs been detected?

The indications for Higgs come from two sources. In both cases the Higgs would have been produced as gluons decay to two b-bbar pairs and a virtual b-bbar pair fuses to a Higgs, which then decays either to a tau-lepton pair or a b-quark pair.

John Conway, the leader of the CDF team analyzing data from the Tevatron, has reported in the blog Cosmic Variance a slight indication for Higgs with mass mH = 160 GeV as a small excess of events in the large bump produced by the decays of Z0 bosons with mass mZ ≈ 94 GeV to tau-taubar pairs. These events have a 2σ significance level, meaning that the probability that they are a statistical fluctuation is about 2 per cent.

The interpretation suggested by Conway is as a Higgs of the minimal super-symmetric extension of the standard model (MSSM). In MSSM there are two complex Higgs doublets, and this predicts three neutral Higgs particles denoted by h, H, and A. If A is light, then the rate for the production of Higgs bosons is governed by the parameter tan(β), defined as the ratio of the vacuum expectation values of the two doublets. The rate for Higgs production is by a factor tan(β)² higher than in the standard model, and this has been taken as a justification for the identification as the MSSM Higgs (the proposed value is tan(β) ≈ 50). If the identification is correct, about 100 recorded Higgs candidates should already exist, so that this interpretation can be checked.

Also Tommaso Dorigo, a blogging member of the second team analyzing CDF results, has reported at his blog site slight evidence for an excess of b-bbar pairs in Z0→ b-bbar decays at the same mass mH = 160 GeV. The confidence level is around 2 sigma. The excess could result from the decays of Higgs to b-bbar pairs associated with b-bbar production.

What forces one to take these reports with some seriousness is that the value of mH is the same in both cases. John Conway has however noticed that if both signals correspond to Higgs, then it is possible to deduce an estimate for the number of excess events in the Z0→ b-bbar peak from the excess in the tau-taubar peak. The predicted excess is considerably larger than the observed one. Therefore a statistical fluke could be in question, or, staying in an optimistic mood, there is some new particle there but it is not Higgs.

mH = 160 GeV is not consistent with the estimate by the D0 collaboration for the standard model Higgs boson mass, based on high precision measurements of the electro-weak parameters sin(θW), α, αs, mt and mZ, which depend on log(mH) via the radiative corrections. The best fit is in the range 96-117 GeV. The upper bound from the same analysis for the Higgs mass is 251 GeV with 95 per cent confidence level. The estimate mt = 178.0 ± 4.3 GeV for the mass of the top quark is used. The range for the best estimate is not consistent with the lower bound of 114 GeV on mH coming from the consistency conditions on the renormalization group evolution of the effective potential V(H) for Higgs (see the illustration here). Here one must of course remember that the estimates vary considerably.

Since TGD cannot yet be coded into precise Feynman rules, the comparison of TGD to the standard model is not possible without some additional assumptions. It is assumed that p-adic coupling constant evolution reduces in a reasonable approximation to the coupling constant evolution predicted by a gauge theory, so that one can apply at a qualitative level the basic wisdom about the effects of the various couplings of Higgs on the coupling constant evolution of the self coupling λ of Higgs, which gives upper and lower bounds for the Higgs mass. This also makes it possible to judge the determinations of the Higgs mass from high precision measurements of electro-weak parameters in the TGD framework.

In TGD framework the Yukawa coupling of Higgs to fermions can be much weaker than in standard model. This has several implications.

1. The rate for the production of Higgs via channels involving fermions is much lower. This could explain why Higgs has not been observed even if it had a mass around 100 GeV.

2. The radiative corrections to electro-weak parameters coming from fermion-Higgs vertices are much smaller than in the standard model and cannot be used to deduce the Higgs mass from the high precision measurements of electro-weak parameters. Hence one can no longer localize the Higgs mass to the range 96-117 GeV.

3. In the standard model the large Yukawa coupling of Higgs to top, call it h, tends to reduce the quartic self-coupling constant λ of Higgs in the ultraviolet. The condition that the minimum of the Higgs potential is not transformed into a maximum gives a lower bound on the initial value of λ and thus on the value of mH. In TGD framework the weakness of the fermionic couplings implies that there is no lower bound on the Higgs mass.

4. The weakness of the Yukawa couplings means that the self-coupling of Higgs tends to increase λ faster than in the standard model. Note also that when the Yukawa coupling ht of the top quark is small (ht^2 < λ, see arXiv:hep-ph/9409458), its contribution tends to increase the value of βλ. Thus the upper bound from perturbative unitarity on the scalar coupling λ (and mH) is reduced. This would force the value of the Higgs mass to be even lower than in the standard model.
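The interplay in points 3 and 4 can be illustrated with a toy one-loop renormalization group integration for λ. The sketch below keeps only the λ and top-Yukawa terms of the standard one-loop beta function (gauge couplings and the running of ht itself are ignored, and ht is held fixed, so this is purely qualitative):

```python
import math

def run_lambda(lam0, ht, t_max=30.0, dt=0.01):
    """Euler-integrate the truncated one-loop beta function
    dlambda/dt = (24*lam^2 + 12*lam*ht^2 - 6*ht^4) / (16*pi^2),
    with t = log(mu/mu0). Returns (final lambda, final t); stops
    early if lambda turns negative (vacuum stability lost)."""
    lam, t = lam0, 0.0
    while t < t_max:
        beta = (24 * lam**2 + 12 * lam * ht**2 - 6 * ht**4) / (16 * math.pi**2)
        lam += beta * dt
        t += dt
        if lam < 0:
            return lam, t
    return lam, t

# Standard-model-like top Yukawa (ht ~ 1): the -6*ht^4 term drives
# lambda negative, which is the origin of the lower bound on m_H.
lam_sm, t_sm = run_lambda(lam0=0.13, ht=1.0)

# Vanishing Yukawa (the TGD-motivated limit of the text): lambda only
# grows, so there is no stability lower bound, only the unitarity
# upper bound, which is reached faster.
lam_tgd, t_tgd = run_lambda(lam0=0.13, ht=0.0)
print(lam_sm, t_sm, lam_tgd, t_tgd)
```

In this crude picture a large ht destabilizes the potential at some scale (lower bound on mH), while removing the Yukawa term makes λ monotonically increasing, exactly the qualitative behaviour invoked above.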

In TGD framework new physics can however emerge in the length scales corresponding to Mersenne primes Mn = 2^n - 1. Ordinary QCD corresponds to M107 and one cannot exclude even an M89 copy of QCD. M61 would define the next candidate. The quarks of M89 QCD would give a negative contribution to the beta function βλ, tending to reduce the value of λ so that the unitarity bound would not be violated. If this new physics is accepted, mH=160 GeV can be considered.

Can one then identify the Higgs candidate with mH=160 GeV with the TGD variant of the standard model Higgs? This is far from clear.

1. Even in the standard model the rate for the production of Higgs is low. In TGD the rate for the production of the counterpart of the standard model Higgs is further reduced, since the coupling of quarks to Higgs is expected to be much smaller than in the standard model. This might exclude the interpretation as Higgs.

2. The slow rate for the production of Higgs could also allow the presence of Higgs at a much lower mass and explain why Higgs has not been detected in the mass range mH < 114 GeV. Interestingly, around 1990 a 2σ evidence for a Higgs with mass about 100 GeV was reported, and one might wonder whether there might be a genuine Higgs there after all.

3. In TGD framework one can also consider other interpretations of the excess events at 160 GeV (taking seriously the findings of both Dorigo's and Conway's groups and the fact that they do not seem to be consistent). One possibility is p-adically scaled-up variants of ordinary quarks, which might have something to do with the bumpy nature of the top quark mass distribution.

M89 hadron physics might be required in TGD framework by the requirement of perturbative unitarity. Thus the mesons of M89 hadron physics might be involved. By a very naive scaling by the factor 2^((107-89)/2) = 2^9 = 512, the mass of the pion of M89 physics would be about 70 GeV. This estimate is not reliable, since the color spin-spin splittings distinguishing between the pion and ρ masses do not scale naively. For M89 mesons this splitting should be very small since the color magnetic moments are very small. The mass of the pion in the absence of splitting would be around 297 MeV, and 512-fold scaling gives M(π89) ≈ 152 GeV, which is not too far from 160 GeV. Could the decays of this exotic pion give rise to the excess of fermion pairs? This interpretation might also allow one to understand why the b-pair and tau-pair excesses are not consistent. Monochromatic photon pairs with photon energy around 76 GeV would probably be an easily testable experimental signature of this option.
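The naive scaling arithmetic above can be spelled out numerically; the 70 GeV figure uses the physical charged pion mass (≈ 140 MeV) and the 152 GeV figure the 297 MeV splitting-free mass quoted in the text:

```python
# p-adic mass scales go like 2^(n/2), so stepping from M107 to M89
# hadron physics multiplies masses by 2^((107-89)/2) = 2^9 = 512.
scale = 2 ** ((107 - 89) // 2)

m_pi_physical = 0.13957  # GeV, ordinary charged pion mass
m_pi_no_split = 0.297    # GeV, pion mass without color spin-spin splitting

print(f"scaling factor: {scale}")
print(f"naive M89 pion mass: {scale * m_pi_physical:.1f} GeV")
print(f"splitting-free M89 pion mass: {scale * m_pi_no_split:.1f} GeV")
print(f"photon energy in pi -> 2 gamma: {scale * m_pi_no_split / 2:.1f} GeV")
```

This reproduces the ~70 GeV and ~152 GeV estimates of the text, and the 76 GeV photon energy of the proposed two-photon signature.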

For more details see the chapter Massless particles and particle massivation.