What's new in

HYPER-FINITE FACTORS, P-ADIC LENGTH SCALE HYPOTHESIS, AND DARK MATTER HIERARCHY

Note: Newest contributions are at the top!



Year 2017

Misbehaving Ruthenium atoms

The understanding of dark matter in the TGD sense has been evolving rapidly recently. Dark matter at magnetic flux tubes appears to be part of ordinary chemistry and even more a part of organic chemistry. Non-equilibrium thermodynamics has also popped up as a natural application, as tensor networks formed from flux tubes carrying dark matter perform quantum phase transitions. The ideas about how to generate systems with life-like properties are getting rather precise. Dark matter and flux tubes are suddenly everywhere! Also this piece of text relates to this revolution.

In FB I received a link to a highly interesting article. The title of the article was "Breakthrough could launch organic electronics beyond cell phone screens" and is tailored to catch the attention of the techno-oriented reader. My attention was however caught for different reasons. The proposed technology would rely on the observation that Ruthenium atoms do not behave as they are expected to behave.

Ru atoms appear as dimers of two Ru atoms in the system considered. Free Ru atoms with one valence electron are however needed: they would become ions by giving up their valence electrons, and these electrons would serve as current carriers making the organic material in question a semiconductor. Irradiation by UV light was found to split Ruthenium dimers into single Ru atoms. If the total energy of the Ru dimer is smaller than that of two Ru atoms, thermodynamics predicts that the Ru atoms recombine to dimers after the irradiation ceases. This did not however happen!

Can one understand the mystery in TGD framework?

  1. Ruthenium atoms have one outer s-electron in the 5th shell. One would expect that the Ru dimer has a valence bond with shared 5s electrons. I recently learned about the mysteriously disappearing valence electrons of rare Earth metals caused by heating. This gives strong support for the idea that valence electrons of free atoms can become dark in the TGD sense: that is, their Planck constant increases and the orbitals become large. The analogy with Rydberg atoms is obvious and it could be that Rydberg atoms in some cases have dark valence electrons. Since the electron's binding energy scale scales like 1/heff^2, the creation of these states requires energy and therefore heating is required. Also irradiation by photons with energy equal to the energy difference between ordinary and dark states should give rise to the same phenomenon. This would provide a manner to create dark electrons and a new technology.
  2. This also inspired the proposal that the valence bond (thought to be understood in chemistry, with inspiration coming from the reductionistic dogma) involves a flux tube pair and heff/h = n larger than for ordinary quantum theory. This provides new very concrete support for the view that the transitions from atomic physics to chemistry and from chemistry to organic chemistry could involve new physics provided by TGD.

    The step from atomic physics to chemistry with valence bond would involve new physics: the delocalization of valence electrons to flux tubes due to the increase of heff! Valence electrons would be dark matter in TGD sense! The step from chemistry to organic chemistry would involve delocalization of proton as dark proton by similar mechanism and give rise to hydrogen bond and also many other new phenomena.

  3. The increase of heff would reduce the binding energy below the expected value. This would be the case for the so-called (and somewhat mysterious) high-energy phosphate bond. This picture conforms with the fact that biological energy storage indeed relies on valence bonds.

    If this vision is correct, the breaking of the valence bond would split the flux tube pair between the two Ru atoms by reconnection to flux loops associated with the Ru atoms. The resulting pair of free Ru atoms would have lower energy than the Ru dimer and would be favored by thermodynamics. The paradox would disappear.

A couple of critical questions are in order.
  1. Why would irradiation be needed at all? Irradiation would kick the dimer system over a potential wall separating it from the state of two free Ru atoms. Also the magnetic energy of the flux tube would contribute to the energy of the dimer and make it higher than that of the free state.
  2. Why would Ru dimers not decay spontaneously to pairs of free Ru atoms? This is the case if the energy needed to overcome the potential wall is higher than the thermal energy at the temperatures considered. One could also argue that electronic states with different values of heff/h = n are not in thermal equilibrium: one has a far-from-equilibrium thermodynamical state. These electrons would indeed represent dark matter in the TGD sense and interact rather weakly with ordinary matter so that it would take time for thermal equilibrium to establish itself.

    TGD indeed leads to the proposal that the formation of states regarded as far-from-thermal-equilibrium states in the standard physics approach means the formation of flux tube networks with heff/h = n larger than for the original state (see this and this). If this interpretation is correct, then one can also consider the possibility that the energy of the free state is higher than that of the dimer, as assumed by the experimenters.

See the chapter Quantum criticality and dark matter or the article Mysteriously disappearing valence electrons of rare Earth metals and hierarchy of Planck constants.



Positron anomaly nine years later

The old PAMELA experiment and perhaps newer ones by Fermi-LAT and AMS-02 have discovered lots of positrons in cosmic rays, with a flux generally higher than expected. The positron flux shows a steady rise with energy in the range [10,100] GeV and presumably the rise will continue. Such positrons may originate from dark matter and could amount to an "almost direct detection" of the particles that make up dark matter. There are also other interpretations.

1. Dark matter explanations for the positron excess

Consider first new physics explanations postulating dark matter.

  1. Dark spin 1 particles could decay to electron-positron pairs. The energy spectrum is however discrete for the dominating decay modes. For instance, vector mesons of a new hadron physics could produce these events. Many neutral vector mesons, say J/Psi, were discovered in electron-positron annihilation.
  2. Pion-like spin 0 pseudoscalars decaying to electron-positron pairs and gamma rays predict a continuous spectrum. In the case of the ordinary pion most decays are to gamma pairs. The decay to an electron-positron pair and a gamma ray has the quite reasonable branching ratio .01. The reason is that the diagram describing this process is the diagram for the decay to a gamma pair with the second gamma decaying to e+e-, so that the rate is roughly the rate for the decay to a gamma pair multiplied by α_em ≈ 1/137 (see the sketch below). For the decay to an electron-positron pair the branching ratio is about 6.5×10^(-8). For a pion-like state X the decay X → e+e- + γ could give a continuous spectrum. The mass of X should be of order 100 GeV for this option.
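To make the branching ratio estimates above concrete, here is a minimal numerical sketch in Python. It only encodes the α_em suppression argument stated above; exact QED phase-space factors are ignored and the numbers are purely illustrative.

    # Branching-ratio estimate for a pion-like pseudoscalar X (illustrative sketch).
    ALPHA_EM = 1 / 137.036            # fine structure constant

    BR_gamma_gamma = 0.99             # assume X -> gamma gamma dominates, as for the ordinary pi0

    # Dalitz-like decay X -> e+ e- gamma: one photon internally converts to e+ e-,
    # so the rate is roughly alpha_em times the two-gamma rate.
    BR_ee_gamma = ALPHA_EM * BR_gamma_gamma
    print(round(BR_ee_gamma, 3))      # ~0.007, of the order of the quoted 0.01

    # For comparison, the quoted branching ratio for pi0 -> e+ e-:
    print(6.5e-8)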

2. Standard physics explanation for the positron excess

One of the standard physics explanations is that the positrons emerge from pulsars. The beams from pulsars contain electrons accelerated to very high energies in the gigantic magnetic field of the pulsar. This beam collides with the matter surrounding the pulsar and both gamma rays and positrons are generated in these interactions.

The standard physics proposal has been put to a test. One can predict the intensity of gamma rays coming from pulsars using standard model physics and deduce from it the density of electrons needed to generate it. Both positrons and gamma rays would be created when electrons from the pulsar are accelerated to very high energies in the enormous magnetic field of the pulsar and collide with the surrounding matter. This is like a particle accelerator. The energies of the produced gamma rays and also positrons extend to the TeV range, which corresponds to the energy range of the LHC. It turns out that the flux of electrons implied by the gamma ray intensity is too low to explain the flux of positrons detected by PAMELA and some other experiments: see the popular article and the research article "Extended gamma-ray sources around pulsars constrain the origin of the positron flux at Earth" in Science.

3. TGD based model for positron excess

Also TGD suggests an explanation for the positron excess (I learned about the PAMELA experiment on my birthday and it was an excellent birthday present!). TGD allows a hierarchy of scaled-up copies of hadron physics labelled by ordinary Mersenne primes M_n = 2^n - 1 or by Gaussian Mersennes M_G,n = (1+i)^n - 1. Ordinary hadron physics would correspond to M_107.

  1. M_89 hadron physics would have a mass scale which is 512 times higher than that of ordinary hadron physics: the size scale of these hadrons is shorter by a factor 1/512 than that of ordinary hadrons (see this and the sketch after this list). There are indications for the copies also in other scales: M_79 for instance. The X boson provides an indication for M_G,113 pions in the nuclear scale. Even copies of hadron physics in biologically important length scales labelled by Gaussian Mersennes M_G,k, k = 151, 157, 163, 167, could exist and play a key role in living matter (see this). By the way, the appearance of four Gaussian Mersennes in this length scale range is a number theoretical miracle.
  2. M_89 hadrons can also appear as dark states with Planck constant heff = n×h. For n = 512 they would have the size of ordinary hadrons. This could explain the strange anomalies observed at RHIC and later at LHC hinting at the presence of string-like structures in what was expected to be a color deconfinement phase transition: a thermal spectrum should have been observed, but instead strong correlations were seen, suggesting quantum criticality characterized by long range correlations and fluctuations, for which heff/h = n would be an explanation.

  3. A large number of bumps, whose masses correspond to the masses of ordinary hadron physics scaled up by the factor 512, have been reported at LHC. Unfortunately these bumps cannot be explained by SUSY and other mainstream models, so they have been forgotten.
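As a rough illustration of the scalings quoted in the list above, the following sketch assumes the p-adic mass scale to be proportional to 2^(-k/2) for p ≈ 2^k, with ordinary hadrons at k = 107; this reproduces the factor 512 for M_89. The numbers are illustrative only.

    # p-adic mass scale ratios (sketch; assumes mass scale ~ 2^(-k/2), ordinary hadrons k = 107).
    def mass_scale_ratio(k_new: int, k_ref: int = 107) -> float:
        """Ratio of the mass scale for p ~ 2^k_new to that for p ~ 2^k_ref."""
        return 2 ** ((k_ref - k_new) / 2)

    print(mass_scale_ratio(89))                  # 512.0 -> M_89 hadrons ~512 times heavier
    print(round(0.94 * mass_scale_ratio(89)))    # ~481 GeV for the M_89 counterpart of the ~0.94 GeV proton
    print(1 / mass_scale_ratio(89))              # size scale smaller by the factor 1/512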
The TGD based model could be combined with the pulsar model for the positron excess. The collisions between protons from the pulsar, accelerated in its magnetic field, and the matter surrounding the pulsar would be analogous to those taking place between proton beams at LHC. If the collision energy is high enough (as it seems, since gamma rays up to the TeV range have been observed) they could produce dark M_89 mesons, in particular pions, which then decay to gamma rays and lepton pairs, in particular electron-positron pairs. Similar collisions could occur also in the atmosphere of Earth between ultrahigh energy cosmic rays and nuclei of the atmosphere and be responsible for exotic cosmic ray events like Centauro challenging standard model physics (see this).

4. Other evidence for dark pion like states

There is also other evidence for pion-like states dark in TGD sense.

  1. There is an old observation that gamma ray pairs with energy essentially that of the electron rest mass come from the center of the Milky Way, presumably resulting from the decays of a particle with mass slightly larger than two times the mass of the electron. These particles would also decay to electron-positron pairs and the resulting electrons and positrons would be accelerated in the magnetic field of, say, a pulsar to high energies. The rate for the decay to electron-positron pairs is however far too slow compared with the decay rate to gamma pairs. Therefore this mechanism cannot explain the positron surplus.
  2. The TGD model for the pion-like states decaying to gamma pairs is as a leptopion (see this), which would be a pion-like bound state of color excitations of electrons predicted to be possible in the TGD Universe. "Electropion"-like states were discovered experimentally at CERN already in the seventies and later evidence also for muopions and taupions has emerged, but since they did not fit with the standard model, their existence was forgotten. This has been the fate of many other anomalies in particle physics. In nuclear physics there are century-old forgotten anomalies re-discovered several times only to be "forgotten" again. The laws of Nature are not discovered nowadays as in the good old days: they are decided by the hegemony, which happens to be in power. SUSY, superstring models, and M-theory, already disappearing in the sands of time, are basic examples of this new political science.
  3. The reason for not accepting the existence of leptopion-like states was that in the standard model intermediate bosons should decay to them and their decay widths would become larger than their experimental values. However, if leptopions are dark matter in the TGD sense having a non-standard value of Planck constant heff/h = n, the problem can be circumvented.
See the chapter Recent status of leptopion hypothesis.



Mysteriously disappearing valence electrons of rare Earth metals and hierarchy of Planck constants

The evidence for the hierarchy of Planck constants heff/h=n labelling dark matter as phases with non-standard value of Planck constant is accumulating.

The latest piece of evidence comes from a well-known mystery (not to me until now!) related to rare Earth metals. Some valence electrons of these atoms mysteriously "disappear" when the atom is heated. This transition is known as the Lifshitz transition. The popular article Where did those electrons go? Decades-old mystery solved claims that the mystery of disappearing valence electrons is finally resolved. The popular article is inspired by the article Lifshitz transition from valence fluctuations in YbAl3 by Chatterjee et al published in Nature Communications.

Dark matter and hierarchy of Planck constants

The mysterious disappearance of valence electrons brings to mind dark atoms with Planck constant heff = n×h. Dark matter corresponds in the TGD Universe to a hierarchy with levels labelled by the value of heff. One prediction is that the binding energy of a dark atom is proportional to 1/heff^2 and thus behaves like 1/n^2 and decreases with n.

n = 1 is the first guess for ordinary atoms but just a guess. The claim of Randell Mills is that hydrogen has exotic ground states with larger binding energy. A closer examination suggests n = n0 = 6 for the ordinary states of atoms. The exotic states would have n < 6 and therefore a higher binding energy scale (see this and this).

This leads to a model of biocatalysis in which the reacting molecules contain dark hydrogen atoms with a non-standard value of n larger than usual, so that their binding energy is lower. When the dark atom or electron becomes ordinary, the binding energy is liberated and can kick the molecules over the potential wall otherwise preventing the reaction from occurring. After that the energy is returned and the atom becomes dark again. Dark atoms would be catalytic switches. Metabolic energy feed would take care of creating the dark states. In fact, heff/h = n serves as a kind of intelligence quotient for a system in the TGD inspired theory of consciousness.

Are the mysteriously disappearing valence electrons in rare earth metals dark?

Could the heating of the rare earth atoms transform some valence electrons to dark electrons with heff/h = n larger than for the ordinary atom? The natural guess is that thermal energy kicks the valence electron to a dark orbital with a smaller binding energy. The prediction is that there should be critical temperatures behaving like T_cr = T_0(1 - n_0^2/n^2). Also transitions between different dark states are possible. These transitions might also be induced by irradiating the atom with photons with the transition energy between different dark states having the same quantum numbers.
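The following minimal sketch illustrates these scaling claims numerically, assuming a hydrogen-like binding energy proportional to 1/n^2 and the value n_0 = 6 for ordinary atomic states; the numbers are illustrative only.

    # Binding energies and critical temperatures for dark/exotic atomic states (sketch).
    E_H = 13.6    # eV, ordinary hydrogen ground-state binding energy
    n0 = 6        # assumed heff/h for ordinary atomic states

    def binding_energy(n: int) -> float:
        """Binding energy (eV) of a state with heff/h = n, assuming E ~ 1/n^2."""
        return E_H * (n0 / n) ** 2

    for n in (2, 6, 12, 24):
        print(n, round(binding_energy(n), 2), "eV")   # n < 6: larger binding energy, n > 6: smaller

    def T_cr(n: int) -> float:
        """Critical temperature T_cr/T_0 = 1 - n0^2/n^2 for kicking an electron from n0 to n > n0."""
        return 1 - (n0 / n) ** 2

    print([round(T_cr(n), 2) for n in (7, 12, 24)])   # ~0.27, 0.75, 0.94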

ORMEs as one manner to end up with dark matter in the TGD sense

I ended up with the discovery of the dark matter hierarchy and eventually with adelic physics, where heff/h = n has a number theoretic interpretation, along several roads starting from anomalous findings. One of these roads began from the claim about the existence of a strange form of matter by David Hudson. Hudson associated with these strange materials several names: White Gold, monoatomic elements, and ORMEs (orbitally re-arranged metallic elements). Any colleague without suicidal tendencies would of course refuse to touch anything like White Gold even with a 10 meter long pole but I had nothing to lose anymore.

My question was how to explain these elements if they are actually real. If all valence electrons of this kind of element are dark, these elements have effectively full electron shells as far as ordinary electrons are considered, behave like noble gases with respect to charge in short scales, and do not form molecules. Therefore "monoatomic element" is justified. Of course, only the electrons in the outermost shell could be dark and in this case the element would behave chemically and also look like an atom with a smaller atomic number Z. So-called Rydberg atoms, for which valence electrons are believed to reside at very large orbitals, could actually be dark atoms in the proposed sense.

Obviously also ORME is an appropriate term since some valence electrons have re-arranged orbitally. White Gold would be Gold but with a dark valence electron. The electron configuration of Gold is [Xe] 4f^14 5d^10 6s^1. There is a single unpaired electron with principal quantum number m = 6, and this would be dark for White Gold, which would be chemically like Platinum (Pt), which indeed has white color.

Biologically important ions as analogs of ORMEs

In TGD inspired biology the biologically important ions H+, Li+, Na+, K+, Ca++, Mg++ are assumed to be dark in the proposed sense. But I have not specified darkness in a precise sense. Could these ions have dark valence electrons with scaled-up Compton length, forming macroscopic quantum phases? For instance, Cooper pairs could become possible and make possible high Tc superconductivity with the members of a Cooper pair at parallel flux tubes. The earlier proposal that dark hydrogen atoms make possible biocatalysis becomes more detailed: at higher evolutionary levels also the heavier dark atoms behaving like noble gases would become important in bio-catalysis. Interestingly, Rydberg atoms have been proposed to be important for biology and they could actually be dark atoms.

To sum up, if the TGD view is correct, an entire spectroscopy of dark atoms and partially dark molecules is waiting to be discovered, and irradiation by light with energies corresponding to excitation energies of dark states could be the manner to generate dark atomic matter. Huge progress in quantum biology could also take place. But are colleagues mature enough to check whether the TGD view is correct?

See the chapter Quantum criticality and dark matter.



Dark nuclear synthesis and stellar evolution

The temperature of the solar core is rather near to the scale of the dark nuclear binding energy. This coincidence inspires interesting questions about dark nucleosynthesis in stellar evolution.

1. Some questions inspired by a numerical coincidence

The temperature at the solar core is about T = 1.5×10^7 K, corresponding to the thermal energy E = 3T/2 = 2.25 keV, obtained by the scaling factor 2^(-11) from the energy ∼5 MeV, which is the binding energy scale for ordinary nuclei. That this temperature corresponds to the binding energy scale of dark nuclei might not be an accident.
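A quick numerical check of this coincidence, as a sketch assuming only the quoted ordinary nuclear binding energy scale of ∼5 MeV and the scaling factor 2^(-11):

    # Solar core thermal energy vs. dark nuclear binding energy scale (sketch).
    k_B = 8.617e-5                  # Boltzmann constant, eV/K

    T_core = 1.5e7                  # K, solar core temperature
    E_thermal = 1.5 * k_B * T_core  # (3/2)kT; about 2 keV, of the order of the quoted 2.25 keV
    print(round(E_thermal / 1e3, 2), "keV")

    E_dark = 5e6 * 2**-11           # 5 MeV scaled by 2^(-11): the dark nuclear binding energy scale
    print(round(E_dark / 1e3, 2), "keV")   # ~2.44 keV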

That the temperature in the stellar core is of the same order of magnitude as the dark nuclear binding energy is a highly intriguing finding and encourages one to ask whether dark nuclear fusion could be the key step in the production of ordinary nuclei and what the relation of dark nucleosynthesis to ordinary nucleosynthesis is.

  1. Could dark nucleosynthesis occur also during pre-stellar evolution and thus proceed differently from the usual p-p cycle involving fusion processes? The resulting ordinary nuclei would undergo only ordinary nuclear reactions and decouple from the dark dynamics. This does not exclude the possibility that the resulting ordinary nuclei form nuclei of nuclei with dark protons: this seems to occur also in nuclear transmutations.
  2. There would be two competing effects. The higher the temperature, the less stable the dark nuclei and the longer the dark nuclear strings. At lower temperatures dark nuclei are more stable but transform to ordinary nuclei decoupling from the dark dynamics. The liberated nuclear binding energy however raises the temperature and makes dark nuclei less stable, so that the production of ordinary nuclei in this manner would slow down.

    At what stage do ordinary nuclear reactions begin to dominate over dark nucleosynthesis? The conservative and plausible looking view is that the p-p cycle is indeed at work in stellar cores and has replaced dark nucleosynthesis when dark nuclei became thermally unstable.

    The standard view is that the solar temperature makes possible tunnelling through the Coulomb wall and thus ordinary nuclear reactions. The temperature is a few keV and surprisingly small as compared to the height of the Coulomb wall E_c ∼ Z_1Z_2e^2/L, where L is the size of the nucleus. There are good reasons to believe that this picture is correct. The coincidence of the two temperatures would make possible the transition from dark nucleosynthesis to ordinary nucleosynthesis.

  3. What about dark nuclear reactions? Could they occur as reconnections of long magnetic flux tubes? For ordinary nuclei reconnections of short flux tubes would take place (recall the view about nuclei as two-sheeted structures). For ordinary nuclei at energies so low that the phase transition to the dark phase (somewhat analogous to the de-confinement phase transition in QCD) is not energetically possible, the reactions would occur in the nuclear scale.
  4. An interesting question is whether dark nucleosynthesis could provide a new manner to achieve ordinary nuclear fusion in the laboratory. The system would heat itself to the temperatures required by ordinary nuclear fusion, as it would do also during pre-stellar evolution and when a nuclear reactor forms spontaneously (the Oklo reactor).
2. Could dark nucleosynthesis affect the views about stellar evolution?

The presence of dark nucleosynthesis could modify the views about star formation, in particular about energy production in protostars and pre-main-sequence stars (PMS) following protostars in stellar evolution.

In protostars and PMSs the temperature is not yet high enough for the burning of hydrogen to 4He, and according to the standard model the energy radiated by the star consists of the gravitational energy liberated during the gravitational contraction. Could dark nucleosynthesis provide a new mechanism of energy production and could this energy be transferred from the protostar/PMS as dark energy along dark magnetic flux tubes?

Can one imagine any empirical evidence for the presence of dark nucleosynthesis in protostars and PMSs?

  1. The energy and matter produced in dark nucleosynthesis could partially leak out along dark magnetic flux tubes and give rise to astrophysical jets. Astrophysical jets indeed accompany protostars and the associated planetary and bipolar nebulae as well as PMSs (T Tauri stars and Herbig-Haro objects). The jets along flux tubes associated with hot spots at which dark nucleosynthesis would take place could provide also a mechanism for the transfer of angular momentum from the protostar/PMS.
  2. Spectroscopic observations of dense cores (protostars) not yet containing stars indicate that contraction occurs, but the predicted expansion of the contracting region has not been observed (see this). The energy production by dark nucleosynthesis could increase the pressure and slow down or even prevent the expansion of the contracting region.
How could dark nucleosynthesis affect the evolution of protostars and PMSs?
  1. In the standard model the formation of the accretion disk could be understood in terms of angular momentum conservation: transforming a spherical distribution of matter to a planar one does not require large changes in the velocities tangential to the plane. The mechanism by which the matter from the accretion disk spirals into the star is however poorly understood.
  2. The TGD inspired model for galaxy formation suggests that the core region of the protostar is associated with a highly knotted cosmic string ("pearl in a necklace") forming the dark core of the galaxy with constant density of dark matter (see this). The dark matter from the cosmic string would have leaked out from the cosmic string and transformed to ordinary matter already before the annihilation of quarks and antiquarks. The CP, P, and T asymmetries predicted by the twistor lift of TGD would predict that there is a net quark (antiquark) number outside (inside) the cosmic string. The locally axisymmetric gravitational potential of the cosmic string would favour a disk-like rather than spherically symmetric matter distribution as the initial distribution of the baryonic matter formed in the hadronization from the quarks left over from the annihilation.

    A quantitative model is needed to see whether dark fusion could contribute significantly to the energy production in protostars and PMSs and affect their evolution. The nuclear binding energy liberated in dark fusion would slow down the gravitational contraction and increase the duration of the protostar and PMS phases. In the standard model the PMS phase is possible for masses varying from 2 to 8 solar masses. Dark nucleosynthesis could increase the upper bound for the mass of a PMS from that predicted by the standard model.

See the chapter Cold fusion again or the article with the same title. See also the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis?.



Summary of the model of dark nucleosynthesis

The books of Steven Krivit (see Hacking the atom, Fusion fiasco, and Lost history) have been of enormous help in polishing the details of the model of dark nucleosynthesis explaining the mysterious aspects of what has been called cold fusion or LENR (low energy nuclear reactions). Here is a summary of the model.

Summary of the model of dark nucleosynthesis

Recall the basic ideas behind dark nucleosynthesis.

  1. Dark nuclei are produced as dark proton sequences at magnetic flux tubes, with the distance between dark protons with heff/h = 2^11 (approximately the proton/electron mass ratio) very near to the electron Compton length. This makes possible the formation of at least light elements when dark nuclei transform to ordinary ones and liberate almost the entire nuclear binding energy.
  2. Also more complex nuclei can form as nuclei of nuclei, in which ordinary nuclei and sequences of dark protons are at magnetic flux tubes. In particular, the basic rule (A,Z) → (A+1,Z+1) of the Widom-Larsen model is satisfied, although dark beta decays would break this rule.

    In this case the transformation to ordinary nuclei produces heavier nuclei, even those heavier than Fe. This mechanism could make possible the production of heavy nuclei outside stellar interiors. Also dark beta decays can be considered. They would be fast: the idea is that the Compton length of weak bosons is scaled up, and within a region of the size scale of the Compton length weak interactions have essentially the same strength as electromagnetic interactions, so that weak decays are fast and lead to dark isotopes stable against weak interactions.

  3. The transformation of dark nuclei to ordinary nuclei liberates almost all of the nuclear binding energy. This energy could induce the fission of the daughter nucleus and the emission of neutrons causing the decay of ordinary nuclei, at least those heavier than Fe.
  4. Also the dark weak process e- + p → n + ν liberating energy of the order of the electron mass could kick out a neutron from a dark nucleus. This process would be the TGD counterpart of the corresponding process in the WL model but would have a very different physical interpretation. This mechanism could explain the production of neutrons, which is by about 8 orders of magnitude slower than in the cold fusion model.
  5. The magnetic flux tubes containing dark nuclei form a positively charged system attracted by negatively charged surfaces. The cathode is where the electrons usually flow to. The electrons can generate a negative surface charge, which attracts the flux tubes so that the flux tubes end up at the cathode surface and dark ions can enter the surface. Also ordinary nuclei from the cathode could enter temporarily the flux tube so that more complex dark nuclei consisting of dark protons and nuclei are formed. Dark nuclei can also leak out of the system if the flux tube ends at some negatively charged surface other than the cathode.
The findings described in the books of Krivit, in particular the production of neutrons and tritium, allow one to sharpen the view about dark nucleosynthesis.
  1. The simplest view about dark nucleosynthesis is as the formation of dark proton sequences in which some dark protons transform by beta decay (emission of a positron) to neutrons. The objection is that this decay is kinematically forbidden if the masses of the dark proton and neutron are the same as those of the ordinary proton and neutron (the n-p mass difference is 1.3 MeV). Only dark proton sequences would be stable.

    The situation changes if also the n-p mass difference scales by the factor 2^(-11). The spectra of dark and ordinary nuclei would be essentially identical. For a scaled-down n-p mass difference, neutrons would be produced most naturally in the process e- + p → n + ν for dark nuclei, proceeding via dark weak interactions. The dark neutron would receive a large recoil energy, about m_e ≈ .5 MeV, and the dark nucleus would decay. The electrons inducing the neutron emission could come from the negatively charged surface of the cathode after the flux tube has attached to it. The rate for e- + p → n + ν is very low for the ordinary value of Planck constant. The ratio n/T ∼ 10^(-8) allows one to deduce information about heff/h: a good guess is that a dark weak process is in question.

  2. Tritium and other isotopes would be produced as several magnetic flux tubes connect to a negatively charged hot spot of the cathode. A reasonable assumption is that the liberated ordinary binding energy gives rise to an excited state of the ordinary nucleus. This can induce the fission of the final state nucleus and also neutrons can be produced. Also scaled-down variants of pions can be emitted, in particular the pion with mass of 17 MeV (see this).
  3. The ordinary nuclear binding energy minus the n-p mass difference 1.3 MeV multiplied by the number of neutrons would be released in the transformation of dark nuclei to ordinary ones. The table below gives the total binding energies and liberated energies for some of the lightest stable nuclei (see also the sketch after this list).

    The ordinary nuclear binding energies E_B for light nuclei and the energies ΔE liberated in the dark → ordinary transition.
    Element   4He     3He     T      D
    E_B/MeV   28.28   7.72    8.48   2.57
    ΔE/MeV    25.70   6.41    5.8    1.27

    Gamma rays are not wanted in the final state. For instance, for the transformation of dark 4He to the ordinary one, the liberated energy would be about 25.7 MeV. If the final state nucleus is in an excited state unstable against fission, the binding energy can go to the kinetic energy of the final state and no gamma ray pairs are observed. If two 17 MeV pions π_113 are emitted, one or both of them must be off mass shell and decay weakly. The decay of the off-mass-shell π_113 could however proceed via dark weak interactions and be fast, so that the rate for this process could be considerably faster than that for the emission of two gamma rays.
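The ΔE row of the table can be reproduced with the simple rule stated above; here is a minimal sketch assuming ΔE = E_B - N_n × 1.3 MeV, with N_n the number of neutrons:

    # Energy liberated in the dark -> ordinary transition (sketch).
    DELTA_NP = 1.3   # MeV, n-p mass difference used in the text

    nuclei = {          # isotope: (E_B in MeV, number of neutrons)
        "4He": (28.28, 2),
        "3He": (7.72, 1),
        "T":   (8.48, 2),
        "D":   (2.57, 1),
    }

    for name, (E_B, N_n) in nuclei.items():
        print(name, round(E_B - N_n * DELTA_NP, 2), "MeV")
    # -> 25.68, 6.42, 5.88, 1.27 MeV, matching the tabulated values up to rounding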

The relationship of dark nucleosynthesis to ordinary nucleosynthesis

One can raise interesting questions about the relation of dark nucleosynthesis to ordinary nucleosynthesis.

  1. The temperature at the solar core is about 1.5×10^7 K, corresponding to an energy of about 2.25 keV. This temperature is obtained by the scaling factor 2^(-11) from 5 MeV, which is the binding energy scale for ordinary nuclei. That this temperature corresponds to the binding energy scale of dark nuclei might not be an accident.

    That the temperature in the stellar core is of the same order of magnitude as the dark nuclear binding energy is a highly intriguing finding and encourages one to ask whether dark nuclear fusion could be the key step in the production of ordinary nuclei.

    Could dark nucleosynthesis in this sense occur also during pre-stellar evolution and thus proceed differently from the usual p-p cycle involving fusion processes? The resulting ordinary nuclei would undergo only ordinary nuclear reactions and decouple from the dark dynamics. This does not exclude the possibility that the resulting ordinary nuclei form nuclei of nuclei with dark protons: this seems to occur also in nuclear transmutations.

  2. There would be two competing effects. The higher the temperature, the less stable the dark nuclei and the longer the dark nuclear strings. At lower temperatures dark nuclei are more stable but transform to ordinary nuclei decoupling from the dark dynamics. The liberated nuclear binding energy however raises the temperature and makes dark nuclei less stable, so that the production of ordinary nuclei in this manner would slow down.

    At what stage do ordinary nuclear reactions begin to dominate over dark nucleosynthesis? The conservative and plausible looking view is that the p-p cycle is indeed at work in stellar cores and has replaced dark nucleosynthesis when dark nuclei became thermally unstable.

    The standard view is that the solar temperature makes possible tunnelling through the Coulomb wall and thus ordinary nuclear reactions. The temperature is a few keV and surprisingly small as compared to the height of the Coulomb wall E_c ∼ Z_1Z_2e^2/L, where L is the size of the nucleus. There are good reasons to believe that this picture is correct. The coincidence of the two temperatures would make possible the transition from dark nucleosynthesis to ordinary nucleosynthesis.

  3. What about dark nuclear reactions? Could they occur as reconnections of long magnetic flux tubes? For ordinary nuclei reconnections of short flux tubes would take place (recall the view about nuclei as two-sheeted structures). For ordinary nuclei at energies so low that the phase transition to the dark phase (somewhat analogous to the de-confinement phase transition in QCD) is not energetically possible, the reactions would occur in the nuclear scale.
  4. An interesting question is whether dark nucleosynthesis could provide a new manner to achieve ordinary nuclear fusion in the laboratory. The system would heat itself to the temperatures required by ordinary nuclear fusion, as it would do also during pre-stellar evolution and when a nuclear reactor forms spontaneously (see the Oklo reactor).
This is only a rough overall view and it would be unrealistic to regard it as a final one: one can indeed imagine variations. But even in its recent rough form it seems to be able to explain all the weird looking aspects of CF/LENR/dark nucleosynthesis. To pick up one particularly interesting question: how significantly could dark nucleosynthesis contribute to the generation of elements heavier than Fe (and also lighter elements)? It is assumed that the heavier elements are generated in the so-called r-process involving the creation of neutrons fusing with nuclei. One option is that the r-process accompanies supernova explosions, but SN1987A did not provide support for this hypothesis: the characteristic em radiation accompanying the r-process was not detected. Quite recently the observation of gravitational waves from the fusion of two neutron stars was accompanied by visible radiation, a so-called kilonova (see this), and the radiation accompanying the r-process was reported. Therefore collisions of this kind generate at least part of the heavier elements.

See the chapter Cold fusion again or the article with the same title.



The lost history from TGD perspective

The third volume in "Explorations in Nuclear Research" is about lost history (see this): roughly the period 1910-1930, during which there was not yet any sharp distinction between chemistry and nuclear physics. After 1930 experimentation became active, using radioactive sources and particle accelerators making nuclear reactions possible. The lost history suggests that the methods used determine to an unexpected degree which findings are accepted as real. After 1940 hot fusion as a possible manner to liberate nuclear energy became a topic of study, but we are still waiting for the commercial applications.

One can say that the findings about nuclear transmutations during the period 1912-1927 became lost history, although most of these findings were published in highly respected journals and also received media attention. The interested reader can find in the book detailed stories about the persons involved. This also allows one to peek into the kitchen side of science and to realize that the written history can contain surprising misidentifications of the milestones in the history of science. The author discusses in detail an example of this: Rutherford is generally regarded as the discoverer of the first nuclear transmutation, but even Rutherford himself did not make this claim.

It is interesting to look at what the vision about the anomalous nuclear effects based on dark nucleosynthesis can say about the lost history and whether these findings can provide new information to tighten up the TGD based model, which is only qualitative. Therefore I go through the list given in the beginning of the book from the perspective of dark nucleosynthesis.

Before continuing it is good to first recall the basic ideas behind dark nucleosynthesis.

  1. Dark nuclei are produced as dark proton sequences at magnetic flux tubes, with the distance between dark protons with heff/h = 2^11 (approximately the proton/electron mass ratio) very near to the electron Compton length. This makes possible the formation of at least light elements when dark nuclei transform to ordinary ones and liberate almost the entire nuclear binding energy.
  2. Also more complex nuclei can form, in which ordinary nuclei and sequences of dark protons are at magnetic flux tubes. In particular, the basic rule (A,Z) → (A+1,Z+1) of the Widom-Larsen model is satisfied, although dark beta decays would break this rule.

    In this case the transformation to ordinary nuclei produces heavier nuclei, even those heavier than Fe. This mechanism could actually make possible the production of heavy nuclei outside stellar interiors. Also dark beta decays can be considered. They would be fast: the idea is that the Compton length of weak bosons is scaled up, and within a region of the size scale of the Compton length weak interactions have essentially the same strength as electromagnetic interactions, so that weak decays are fast and lead to dark isotopes stable against weak interactions.

  3. The transformation of dark nuclei to ordinary nuclei liberates almost all of the nuclear binding energy. The transformation liberates a large nuclear energy, which could lead to a decay of the daughter nucleus and the emission of neutrons causing the decay of ordinary nuclei, at least those heavier than Fe.

    Remark: Interestingly, the dark binding energy is of the order of a few keV and happens to be of the same order of magnitude as the thermal energy of nuclei in the interior of the Sun. Could dark nuclear physics play some role in the nuclear fusion in the solar core?

  4. The magnetic flux tubes containing dark nuclei form a positively charged system attracted by negatively charged surfaces. The cathode is where the electrons usually flow to. The electrons can generate a negative surface charge, which attracts the flux tubes so that the flux tubes end up at the cathode surface and dark ions can enter the surface. Also ordinary nuclei from the cathode could enter temporarily the flux tube so that more complex dark nuclei consisting of dark protons and nuclei are formed. Dark nuclei can also leak out of the system if the flux tube ends at some negatively charged surface other than the cathode.
Production of noble gases and tritium

During the period 1912-1914 several independent scientists discovered the production of the noble gases 4He, neon (Ne), and argon (Ar) using high voltage electrical discharges in vacuum or through hydrogen gas at low pressures in cathode-ray tubes. Also an unidentified element with mass number 3 was discovered. It was later identified as tritium. Two of the researchers were Nobel laureates. In 1922 two researchers at the University of Chicago reported production of 4He. Sir Joseph John Thomson explained the production of 4He using the occlusion hypothesis. I understand occlusion as a contamination of 4He in the tungsten wire. The question is why not also hydrogen.

Why would noble gases have been produced? It is known that noble gases tend to stay near surfaces. In one experiment it was found that 4He production stopped after a few days; maybe a kind of saturation was achieved. This suggests that isotopes with relatively high mass numbers were produced from dark proton sequences (possibly containing also neutrons resulting from dark weak decays). The resulting noble gases were caught near the electrodes and therefore only their production was observed.

Production of 4He in the experiments of Wendt and Irion

In 1922 Wendt and Irion published results from the study of exploding current wires. Their arrangement involved a high voltage of about 3×10^4 V and dielectric breakdown through an air gap between the electrodes, producing a sudden current peak in a wire made of tungsten (W, with (Z,A) = (74,186) for the most abundant isotope) at a temperature of about T = 2×10^4 C, which corresponds to a thermal energy 3kT/2 of about 3 eV. Production of 4He was detected.
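A quick check of the quoted thermal energy (a sketch; the difference between 2×10^4 C and 2×10^4 K is negligible at this accuracy):

    # Thermal energy of the exploding tungsten wire (sketch).
    k_B = 8.617e-5                  # Boltzmann constant, eV/K
    T = 2e4                         # K
    print(round(1.5 * k_B * T, 1), "eV")   # ~2.6 eV, i.e. "about 3 eV"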

Remark: The temperature at the solar core is about 1.5×10^7 K, corresponding to an energy of about 2.25 keV, 3 orders of magnitude higher than the temperature used. This temperature is obtained by the scaling factor 2^(-11) from 5 MeV, which is the binding energy scale for ordinary nuclei. That this temperature corresponds to the binding energy scale of dark nuclei might not be an accident.

The interpretation of the experimentalists was that the observed 4He came from the decay of tungsten, made unstable by the high temperature. This explanation is of course not consistent with what we now know about nuclear physics. No error in the experimental procedure was found. Three trials to replicate the experiment of Wendt and Irion were made, with a negative result. The book discusses these attempts in detail and demonstrates that they were not faithful to the original experimental arrangement.

Rutherford explained the production of 4He in terms of the 4He occlusion hypothesis of Thomson. In the explosion the 4He contaminant would have been liberated. But why just a helium contamination, why not hydrogen? By the above argument one could argue that 4He as a noble gas could indeed form stable contaminants.

80 years later Urutskoev repeated the experiment with exploding wires and observed besides 4He also other isotopes. The experiments of Urutskoev demonstrated that there are 4 peaks for the production rate of elements as a function of atomic number Z. Furthermore, the amount of mass assignable to the transmuted elements is nearly the mass lost from the cathode. Hence also cathode nuclei should end up at the flux tubes.

How could dark nucleosynthesis explain the findings? The simplest model relies on a modification of the occlusion hypothesis: a hydrogen contaminant was present and the formation of dark nuclei from the protons of hydrogen at flux tubes took place in the exploding wire. The nuclei of noble gases tended to remain in the system and 4He was observed.

Production of Au and Pt in arc discharges in Mercury vapor

In 1924 the German chemist Miethe, better known as the discoverer of 3-color photography, found trace amounts of gold (Au) and possibly platinum (Pt) in a mercury (Hg) vapor photography lamp. Scientists in Amsterdam repeated the experiment but using lead (Pb) instead of Hg and observed production of Hg and thallium (Tl). The same year the prominent Japanese scientist Nagaoka reported production of Au and something having the appearance of Pt. Nagaoka used an electric arc discharge between tungsten (W) electrodes bathed in a dielectric liquid "laced" with liquid Hg.

The nuclear charges and atomic weights for isotopes involved are given in the table below.

The nuclear charge and mass number (Z,A) for the most abundant isotopes of W, Pt, Au, Hg, Tl and Pb.

Element   W          Pt         Au         Hg         Tl         Pb
(Z,A)     (74,186)   (78,195)   (79,197)   (80,202)   (81,205)   (82,208)

Could dark nucleosynthesis explain the observations? Two mechanisms for producing heavier nuclei can be imagined, both relying on the formation of dark nuclei from the nuclei of the electrode metal and dark protons and their subsequent transformation to ordinary nuclei.

  1. Dark nuclei are formed from the metal associated with the cathode and dark protons. In Nagaoka's experiment this metal is W with (Z,A) = (74,186). Assuming that also dark beta decays are possible, this would lead to the generation of heavier beta-stable elements such as Au with (Z,A) = (79,197) or their stable isotopes. Unfortunately, I could not find what the electrode metal used in the experiments of Miethe was.
  2. In the experiments of Miethe the nuclei of Hg transmuted to Au ((80,202) → (79,197)) and to Pt ((80,202) → (78,195)). In the Amsterdam experiment Pb transmuted to Hg ((82,208) → (80,202)) and Tl ((82,208) → (81,205)). This suggests that these nuclei resulted from the decay of Hg (Pb) induced by the nuclear binding energy liberated in the transformation of dark nuclei, formed from the nuclei of the cathode metal and dark protons, to ordinary nuclei. Part of the liberated binding energy could have induced the fission of the dark nuclei. The decay of dark nuclei could also have liberated neutrons absorbed by the Hg (Pb) nuclei, inducing their decay to lighter nuclei. Thus also the analog of the r-process could have been present.
Paneth and Peters' H→ 4He transmutation

In 1926 the German chemists Paneth and Peters pumped hydrogen gas into a chamber with finely divided palladium powder and reported the transmutation of hydrogen to helium. This experiment resembles the "cold fusion" experiment of Pons and Fleischmann in 1989. The explanation would be the formation of dark 4He nuclei consisting of dark protons and their transformation to ordinary 4He nuclei.

See the chapter Cold fusion again or the article with the same title. See also the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis?.



What is the IQ of neutron star?

" Humans and Supernova-Born Neutron Stars Have Similar Structures, Discover Scientists" is the title of a popular article about the finding that neutron stars and eukaryotic (not only human) cells contain geometrically similar structures. In cells the cytoplasma between cell nucleus and cell membrane contains a complex highly folded membrane structure known as endoplasmic reticulum (ER). ER in turn contains stacks of evenly spaced sheets connected by helical ramps. They resemble multistory parking garages (see the illustration of the popular article). These structures are referred to as parking places for ribosomes, which are the machinery for the translation of mRNA to amino-acids. The size scale of these structures must be in the range 1-100 microns.

Computer simulations for neutron stars predict geometrically similar structures, whose size is however a million times larger and therefore must be in the range of 1-100 meters. The soft condensed-matter physicist Greg Huber from U.C. Santa Barbara and nuclear physicist Charles Horowitz from Indiana University have worked together to explore the shapes (see this and this).

The physical principles leading to these structures look quite different. On the nuclear physics side one has strong and electromagnetic interactions at the microscopic level, and in the model used they give rise to these geometric structures in macroscopic scales. In living matter the model assumes basically entropic forces and the basic variational principle is the minimization of the free energy of the system - the second law of thermodynamics for a system coupled to a thermal bath at constant temperature. The proposal is that some deeper principle might be behind these intriguing structural similarities.

In the TGD framework one is forced to challenge the basic principles behind these models as really fundamental principles and to consider deeper reasons for the geometric similarity. One ends up challenging even the belief that neutron stars are just dead matter.

  1. In the TGD framework space-time, identified as a 4-D surface in H = M^4 × CP_2, is a many-sheeted fractal structure. In TGD these structures are topological structures of space-time itself as a 4-surface rather than of the distribution of matter in a topologically trivial, almost empty Minkowski space.

    TGD space-time is also fractal characterized by the hierarchy of p-adic length scales assignable to primes near powers of two and to a hierarchy of Planck constants. Zero energy ontology (ZEO) predicts also a hierarchy of causal diamonds (CDs) as regions inside which space-time surfaces are located.

    The usual length scale reductionism is replaced with fractality and the fractality of the many-sheeted space-time could explain the structural similarity of structures with widely different size scales.

  2. Dark matter is identified as a hierarchy of phases of ordinary matter labelled by the value heff=n× h of Planck constant. In adelic physics heff/h=n has purely number theoretic interpretation as a measure for the complexity of extension of rationals - the hierarchy of dark matters would correspond to the hierarchy of these extensions and evolution corresponds to the increase of this complexity. It would be dark matter at the flux tubes of the magnetic body of the system that would make the system living and intelligent. This would be true for all systems, not only for those that we regard as living systems. Perhaps even neutron stars!
  3. In adelic physics (see this) p-adic physics for various primes as the physics of cognition and ordinary real number based physics are fused together. One has a hierarchy of adeles defined by extensions of rational numbers (not only algebraic extensions but also those using roots of e). The higher the complexity of the extension, the larger the number of common points shared by reals and p-adics: they correspond to space-time points with coordinates in the extension of rationals defining the adele. These common points are identified as cognitive representations, something in the intersection of the cognitive and the sensory. The larger the number of points, the more complex the cognitive representations. Adeles thus define an evolutionary hierarchy.

    The points of space-time surface defining the cognitive representation are excellent candidates for the carriers of fundamental fermions since many-fermion states allow interpretation in terms of a realization of Boolean algebra. If so then the complexity of the cognitive representation characterized by heff/h increases with the density of fundamental fermions! The larger the density of matter, the higher the intelligence of the system if this view is correct!

This view inspires interesting speculative questions.
  1. In the TGD inspired theory of consciousness conscious entities form a fractal hierarchy accompanying geometric fractal hierarchies. Could the analogies between neutron stars and cells be much deeper than merely geometric? Could neutron stars be super-intelligent systems possessing structures resembling those inside cells? What about the TGD counterparts of black holes? For blackhole like structures the fermionic cognitive representation would contain even more information per volume than that for a neutron star. Could blackholes be super-intelligences instead of mere cosmic trashbins?

    Living systems metabolize. The interpretation is that the metabolic energy allows the system to increase the value of heff/h and generate negentropic entanglement crucial for cognition. Also blackholes "eat" matter from their environment: is the reason the same as in the case of a living cell?

    Living systems communicate using flux tubes connecting them and serving also as correlates of attention. In the TGD framework flux tubes emanate from all physical systems, in particular stars and blackholes, and mediate gravitational interactions. In fact, flux tubes replace wormholes of the ER-EPR correspondence in the TGD framework, or more precisely: wormhole contacts replace flux tubes in the GRT framework.

  2. Could also blackhole like structures possess the analog of the endoplasmic reticulum replacing the cell membrane with an entire network of membranes in the interior of the cell? Interpretation as a minimal surface is very natural in the TGD framework. Could the predicted space-time sheet within the blackhole like structure having Euclidean signature of the induced metric serve as the analog of the cell nucleus? In fact, all systems - even elementary particles - possess a space-time sheet with Euclidean signature: this sheet is analogous to the line of a Feynman diagram. Could the space-time sheet assignable to the cell nucleus have Euclidean signature of the induced metric? Could the cell membrane be analogous to a blackhole horizon?
  3. What about the genetic code? In TGD inspired biology the genetic code could be realized already at the level of dark nuclear physics in terms of strings of dark protons: also ordinary nuclei are identified as strings of nucleons. The biochemical representation would be only a secondary representation and biochemistry would be a kind of shadow of the deeper dynamics of dark matter and magnetic flux tubes. Dark 3-proton states correspond naturally to DNA, RNA, tRNA and amino-acids, and dark nuclei to polymers of these states (see this).

    Could neutron stars containing dark matter as dark nuclei indeed realize the genetic code? This view about dark matter leads also to the proposal that so-called cold fusion could actually correspond to dark nucleosynthesis such that the resulting dark nuclei, with rather small nuclear binding energy, transform to ordinary nuclei and liberate most of the ordinary nuclear binding energy in this process (see this). Could dark nucleosynthesis produce elements heavier than Fe and also part of the lighter elements outside stellar interiors? Could this happen also in the fusion of neutron stars to a neutron star like entity, as the recent simultaneous detection of gravitational waves (the GW170817 event) and em radiation from this kind of fusion suggests (see this)?

  4. How can one understand a cell (or any system) as a trashbin-like structure maximizing its entropy on the one hand and as an intelligent system on the other hand? This can make sense in the TGD framework, where the amount of conscious information, negentropy, is measured by the sum of p-adic variants of entanglement entropies and is negative(!) thanks to the properties of the p-adic norm. Neutron stars, blackholes and cells would be entropic objects if one limits the consideration to the real sector of adeles, but in the p-adic sectors they would carry conscious information. The sum of real and p-adic entropies tends to be negative. A living cell would be a very entropic object in the real sense but very negentropic in the p-adic sense: even more, the sum of the p-adic negentropies associated with cognition in adelic physics would overcome this entropy (see this).
See the chapter Cold fusion again or the article with the same title. See also the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis?.



More about dark nucleosynthesis

In the sequel a more detailed view about dark nucleosynthesis is developed using the information provided by the first book of Krivit. This information also allows one to make the nuclear string model much more detailed and connect CF/LENR with the so-called X boson anomaly and other nuclear anomalies.

1. Not only sequences of dark protons but also of dark nucleons are involved

Are only dark proton sequences at magnetic flux tubes involved or can these sequences consist of nuclei so that one would have a nucleus consisting of nuclei? From the first book I learned that the experiments of Urutskoev demonstrate that there are 4 peaks for the production rate of elements as a function of atomic number Z. Furthermore, the amount of mass assignable to the transmuted elements is nearly the mass lost from the cathode. Hence also cathode nuclei should end up at the flux tubes.

  1. Entire target nuclei can become dark in the sense described and end up at the same magnetic flux tubes as the protons coming from the bubbles of the electrolyte, and participate in dark nuclear reactions with the incoming dark nuclei: the dark nuclear energy scale would be much smaller than MeV. For a heavy water electrolyte D must become a dark nucleus: the distance between p and n inside D would be the usual one. A natural expectation is that the flux tubes connect the EZs and the cathode.

    In the transformation to ordinary nuclear matter these nuclei of nuclei would fuse to ordinary nuclei and liberate nuclear energy associated with the formation of ordinary nuclear bonds.

  2. The transformation of protons to neutrons in strong electric fields observed already by Sternglass in 1951 could be understood as a formation of flux tubes containing dark nuclei and producing neutrons in their decays to ordinary nuclei. The needed voltages are in the kV range suggesting that the scale of dark nuclear binding energy is of order keV, implying heff/h=n ∼ 2^11 - roughly the ratio m_p/m_e (a numerical sketch is given after this list).
  3. Remarkably, also in ordinary nuclei the flux tubes connecting nucleons to a nuclear string would be long, much longer than the nucleon Compton length (see this and this). By the ordinary Uncertainty Principle (heff=h) the length of the flux tube to which binding energy is assigned would correspond to the nuclear binding energy scale of the order of a few MeV. This would be also the distance between the dark heff=n× h nuclei forming a dark nuclear string! The binding energy would be scaled down by 1/n.

    This suggests that the n→ 1 phase transition does not affect the lengths of flux tubes but only turns them to loops and that the distance between nucleons as measured in M4× CP2 is therefore scaled down by 1/n. Coulomb repulsion between protons does not prevent this if the electric flux between protons is channelled along the long flux tubes rather than along the larger space-time sheet so that the repulsive Coulomb interaction energy is not affected in the phase transition! This line of thought obviously involves the notion of space-time as a 4-surface in a crucial manner.

  4. Dark nuclei could have also ordinary nuclei as building bricks in accordance with the fractality of TGD. Nuclei at dark flux tubes would be ordinary and the flux tube portions - bonds - between them would have large heff and thus a length considerably longer than in ordinary nuclei. This would give sequences of ordinary nuclei with dark binding energy: a similar situation is actually assumed to hold true for the nucleons of ordinary nuclei connected by analogs of dark mesons with masses in MeV range (see this).
Remark: In TGD inspired model for quantum biology dark variants of biologically important ions are assumed to be present. Dark proton sequences having as basic entangled unit a state of 3 protons analogous to a DNA triplet would represent analogs of DNA, RNA, amino-acids and tRNA (see this). Genetic code would be realized already at the level of dark nuclear physics and the bio-chemical realization would represent a kind of shadow dynamics. The number of dark codons coding for a given dark amino-acid would be the same as in the vertebrate genetic code.
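A rough numerical check of the scales appearing in items 2 and 3 above (a sketch only: the 1 MeV reference scale is a representative value for the ordinary nuclear binding energy and the 1/n scaling is taken from item 3; the rest are standard constants):

```python
# Sketch: compare n = 2^11 with m_p/m_e and with the keV dark binding energy scale.
# Assumptions from the text above: ordinary nuclear binding energy scale ~1 MeV,
# dark binding energy ~ (1/n) x ordinary scale.

m_p = 938.272e6   # proton rest energy in eV
m_e = 0.511e6     # electron rest energy in eV

n = 2**11
print("n = 2^11            :", n)                     # 2048
print("m_p/m_e             :", round(m_p / m_e))      # ~1836, same order as 2^11

E_ordinary = 1.0e6            # 1 MeV in eV (representative ordinary scale)
E_dark = E_ordinary / n
print("(1 MeV)/2^11 in keV :", round(E_dark / 1e3, 2))  # ~0.49 keV, i.e. keV range
```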

2. How are dark nuclei transformed to ordinary nuclei?

What happens in the transformation of dark nuclei to ordinary ones? Nuclear binding energy is liberated but how does this occur? If gamma rays are generated, one should also now invent a mechanism transforming gamma rays to thermal radiation. The findings of Holmlid provide valuable information here and lead to a detailed qualitative view about the process and also allow sharpening the model for ordinary nuclei.

  1. Holmlid (see this and this) has reported the rather strange finding that muons (mass 106 MeV), pions (mass 140 MeV) and even kaons (mass 497 MeV) are emitted in the process. This does not fit at all with ordinary nuclear physics with its natural binding energy scale of a few MeV. It could be that a considerable part of the energy is liberated as mesons decaying to lepton pairs (pions also to gamma pairs) but with energies much above the upper bound of about 7 MeV for the range of energies missing from the detected gamma ray spectrum (this is discussed in the first part of the book of Krivit). As if hadronic interactions would enter the game somehow! Even condensed matter physics and nuclear physics at the same coffee table are too much for a mainstream physicist!
  2. What happens when the liberated total binding energy is below the pion mass? There is experimental evidence for what is called the X boson (see this), discussed from TGD point of view here. In TGD framework X is identified as a scaled down variant π(113) of the ordinary pion π=π(107). X is predicted to have mass m(π(113)) = 2^((107-113)/2) m(π(107)) ≈ 16.68 MeV, which conforms with the mass estimate for the X boson (a numerical check follows after this list). Note that k=113 resp. k=117 corresponds to nuclear resp. hadronic p-adic length scale. For low mass transmutations the binding energy could be liberated by emission of X bosons and gamma rays.
  3. I have also proposed that the pion and also other neutral pseudo-scalar states could have p-adically scaled variants with masses differing by powers of two. For pion the scaled variants would have masses 8.5 MeV, m(π(113))= 17 MeV, 34 MeV, 68 MeV, m(π(107))= 136 MeV, ... and also these could be emitted and decay to lepton pairs or gamma pairs (see this). The emission of scaled pions could be a faster process than the emission of gamma rays and would allow emitting the binding energy with a minimum number of gamma rays.
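The following minimal sketch reproduces the scaled masses listed above (assuming m(π(107)) ≈ 135 MeV; assigning the listed masses to the odd values k = 107,...,115 is my own reading of the list):

```python
# Sketch: p-adically scaled pion masses m(pi(k)) = 2^((107-k)/2) * m(pi(107)).
# Assumptions: m(pi(107)) ~ 135 MeV; the assignment of the listed masses to
# k = 109,...,115 is my own reading of the text above.

m_pi_107 = 135.0  # MeV

for k in range(107, 117, 2):
    m_k = 2 ** ((107 - k) / 2) * m_pi_107
    print(f"k = {k}: m = {m_k:.1f} MeV")

# Output: 135.0, 67.5, 33.8, 16.9, 8.4 MeV, to be compared with the listed
# 136, 68, 34, 17 and 8.5 MeV; k = 113 reproduces the ~17 MeV X boson estimate.
```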
There is indeed evidence for pion-like states (for TGD inspired comments see this).
  1. The experimental claim of Tatischeff and Tomasi-Gustafsson is that the pion is accompanied by pion-like states organized on a Regge trajectory and having masses 60, 80, 100, 140, 181, 198, 215, 227.5, and 235 MeV.
  2. A further piece of evidence for scaled variants of pion comes from two articles by Eef van Beveren and George Rupp. The first article is titled First indications of the existence of a 38 MeV light scalar boson. Second article has title Material evidence of a 38 MeV boson.
The above picture suggests that the pieces of dark nuclear string connecting the nucleons are looped and the nucleons collapse to a nucleus sized region. On the other hand, the emission of mesons suggests that these pieces contract to much shorter pieces with length of the order of the Compton length of the meson responsible for binding, and the binding energy is emitted as a single quantum or very few quanta. Strings cannot however retain their length (albeit becoming looped with ends very near in M4× CP2) and contract at the same time! How could one unify these two conflicting pictures?
  1. To see how TGD could solve the puzzle, consider what elementary particles look like in TGD Universe (see this). Elementary particles are identified as two-sheeted structures consisting of two space-time sheets with Minkowskian signature of the induced metric connected by CP2 sized wormhole contacts with Euclidian signature of induced metric. One has a pair of wormhole contacts and both of them have two throats analogous to blackhole horizons serving as carriers of elementary particle quantum numbers.

    Wormhole throats correspond to homologically non-trivial 2-surfaces of CP2 and are therefore Kähler magnetically charged monopole like entities. A wormhole throat at a given space-time sheet is necessarily connected by a monopole flux tube to another throat, now the throat of the second wormhole contact. Flux tubes must be closed and therefore consist of 2 "long" pieces connecting wormhole throats at different parallel space-time sheets plus 2 wormhole contacts of CP2 size scale connecting these pieces at their ends. The structure resembles an extremely flattened rectangle.

  2. The alert reader can guess the solution of the puzzle now. The looped string corresponds to the string portion at the non-contracted space-time sheet and the contracted string to that at the contracted space-time sheet! The first sheet could have the ordinary value of Planck constant but a larger p-adic length scale of the order of electron's p-adic length scale L(127) (it could correspond to the magnetic body of the ordinary nucleon (see this)) and the second sheet could correspond to the heff=n× h dark variant of the nuclear space-time sheet with n=2^11 so that the size scales are the same.

    The phase transition heff→ h occurs only for the flux tubes of the second space-time sheet, reducing the size of this space-time sheet to that of the nuclear k=137 space-time sheet of size ∼ 10^-14 meters. The portions of the flux tubes at this space-time sheet become short, at most of the order of nuclear size scale, which roughly corresponds to the pion Compton length. The contraction is accompanied by the emission of the ordinary nuclear binding energy as pions, their scaled variants, and even heavier mesons. This happens if the mass of the dark nucleus is large enough to guarantee that the total binding energy makes the emission possible. The first space-time sheet retains its size and the flux tubes at it retain their length but become loopy since their ends must follow the ends of the shortened flux tubes.

  3. If this picture is correct, most of the energy produced in the process could be lost as mesons, possibly also their scaled variants. One should have some manner to prevent the leakage of this energy from the system in order to make the process an effective energy producer.
This is only a rough overall view and it would be unrealistic to regard it as final: one can indeed imagine variations. But even in its recent rough form it seems to be able to explain all the weird looking aspects of CF/LENR/dark nucleosynthesis.

See the chapter Cold fusion again or the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis?



Comparison of Widom-Larsen model with TGD inspired models of CF/LENR or whatever it is

I cannot avoid the temptation to compare WL to my own dilettante models for which also WL has served as an inspiration. I have two models explaining these phenomena in my own TGD Universe. Both models rely on the hierarchy of Planck constants heff=n× h (see this and this) explaining dark matter as ordinary matter in heff=n× h phases emerging at quantum criticality. heff implies scaled up Compton lengths and other quantal lengths making quantum coherence possible in longer scales than usually.

The hierarchy of Planck constants heff=n× h has now a rather strong theoretical basis and reduces to number theory (see this). Quantum criticality would be essential for the phenomenon and could explain the critical doping fraction of the cathode by D nuclei. Quantum criticality could also help to explain the difficulties in replicating the effect.

1. Simple modification of WL does not work

The first model is a modification of WL and relies on a dark variant of weak interactions. In this case LENR would be an appropriate term.

  1. Concerning the rate of the weak process e+p→ n+ν the situation changes if heff is large enough, and rather large values are indeed predicted. heff could be large also for weak gauge bosons in the situation considered. Below their Compton length weak bosons are effectively massless and this scale would scale up by a factor n=heff/h to almost atomic scale. This would make weak interactions as strong as electromagnetic interactions and long ranged below the Compton length, and the transformation of proton to neutron would be a fast process. After that a nuclear reaction sequence initiated by the neutron would take place as in WL. There is no need to assume that neutrons are ultraslow but the electron mass remains the problem. Note that also the proton mass could be higher than normal, perhaps due to Coulomb interactions.
  2. As such this model does not solve the problem related to the too small electron mass. Nor does it solve the problem posed by gamma ray production.

2. Dark nucleosynthesis

Also the second TGD inspired model involves the heff hierarchy. Now LENR is not an appropriate term: the most interesting things would occur at the level of dark nuclear physics, which is now a key part of TGD inspired quantum biology.

  1. One piece of inspiration comes from the exclusion zones (EZs) of Pollack (see this, this and this), which are negatively charged regions (see this, this, and this).

    Also the work of the group of Prof. Holmlid (see this and this), not yet included in the book of Krivit, was of great help. The TGD proposal (see this and this) is that protons causing the ionization go to magnetic flux tubes having an interpretation in terms of space-time topology in the TGD Universe. At the flux tubes they have heff=n× h and form dark variants of nuclear strings, which are basic structures also for ordinary nuclei but would now have almost atomic size scale.

  2. The sequences of dark protons at flux tubes would give rise to dark counterparts of ordinary nuclei proposed to be also nuclear strings but with dark nuclear binding energy, whose scale is measured using as natural unit MeV/n, n=heff/h, rather than MeV. The most plausible interpretation is that the field body/magnetic body of the nucleus has heff= n× h and is scaled up in size. n=2^11 is favoured by the fact that from Holmlid's experiments the distance between dark protons should be about the electron Compton length.

    Besides protons also deuterons and even heavier nuclei can end up at the magnetic flux tubes. They would however preserve their size and only the distances between them would be scaled to about the electron Compton length on the basis of the data provided by Holmlid's experiments (see this and this).

    The reduced binding energy scale could solve the problems caused by the absence of gamma rays: instead of gamma rays one would have much less energetic photons, say X rays assignable to n=2^11 ≈ m_p/m_e. For infrared radiation the energy of photons would be about 1 eV and the nuclear energy scale would be reduced by a factor of about 10^-6-10^-7: one cannot exclude this option either. In fact, several options can be imagined since the entire spectrum of heff is predicted. This prediction is testable (see the sketch after this list).

    Large heff would also induce quantum coherence in a scale between the electron Compton length and atomic size scale.

  3. The simplest possibility is that the protons are just added to the growing nuclear string. In each addition one has (A,Z)→ (A+1,Z+1). This is exactly what happens in the mechanism proposed by Widom and Larsen, for which the simplest reaction sequences already explain reasonably well the spectrum of end products.

    In WL the addition of a proton is a four-step process. First e+p→ n+ν occurs at the surface of the cathode. This requires a large electron mass renormalization and fine tuning of the electron mass to be very nearly equal to but higher than the n-p mass difference.

    There is no need for these questionable assumptions of WL in TGD. Even the assumption that weak bosons correspond to large heff phase might not be needed but cannot be excluded with further data. The implication would be that the dark proton sequences decay rather rapidly to beta stable nuclei if dark variant of p→ n is possible.

  4. EZs and accompanying flux tubes could be created also in the electrolyte: perhaps in the region near the cathode, where bubbles are formed. For the flux tubes leading from the system to the external world most of the fusion products as well as the liberated nuclear energy would be lost. This could partially explain the poor replicability of the claims about energy production. Some flux tubes could however end at the surface of the catalyst under some conditions. Even in this case the particles emitted in the transformation to ordinary nuclei could be such that they leak out of the system, and Holmlid's findings indeed support this possibility.

    If there are negatively charged surfaces present, the flux tubes can end at them since the positively charged dark nuclei at the flux tubes and therefore the flux tubes themselves would be attracted by these surfaces. The most obvious candidate is the catalyst surface, to which electronic charge waves were assigned by WL. One can wonder whether already Tesla observed in his experiments the leakage of dark matter to various surfaces of the laboratory building. In the collision with the catalyst surface dark nuclei would transform to ordinary nuclei releasing all the ordinary nuclear binding energy. This could create the reported craters at the surface of the target and cause heating. One cannot of course exclude that nuclear reactions take place between the reaction products and target nuclei. It is quite possible that most dark nuclei leave the system.

    It was in fact Larsen who realized that there are electronic charge waves propagating along the surface of some catalysts, and for good catalysts such as gold they are especially strong. This suggests that electronic charge waves play a key role in the process. The proposal of WL is that due to the positive electromagnetic interaction energy the dark protons of dark nuclei could have rest mass higher than that of the neutron (just as in the ordinary nuclei) and the reaction e+p→ n+ν would become possible.

  5. Spontaneous beta decays of protons could take place inside dark nuclei just as they occur inside ordinary nuclei. If the weak interactions are as strong as electromagnetic interactions, dark nuclei could rapidly transform to beta stable nuclei containing neutrons: this is also a testable prediction. Also dark strong interactions would proceed rather fast and the dark nuclei at magnetic flux tubes could be stable in the final state. If dark stability means the same as the ordinary stability then also the isotope shifted nuclei would be stable. There is evidence that this is the case.
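The photon energy scales mentioned in item 2 above can be illustrated with a few sample values of n = heff/h (a sketch; the 1 MeV reference scale and the chosen n values are illustrative assumptions):

```python
# Sketch: dark nuclear energy scale MeV/n for a few sample values of n = heff/h.
# Assumptions: 1 MeV as the representative ordinary scale; the n values are samples.

E_MeV = 1.0e6  # eV

for n, label in [(2**11, "n = 2^11"), (10**6, "n = 10^6"), (10**7, "n = 10^7")]:
    print(f"{label}: E = {E_MeV / n:.2g} eV")

# ~490 eV (X ray range) for n = 2^11, ~1 eV for n = 10^6 and ~0.1 eV for n = 10^7,
# matching the 10^-6 - 10^-7 reduction factors mentioned above.
```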
Neither "CF" nor "LENR" is appropriate term for TGD inspired option. One would not have ordinary nuclear reactions: nuclei would be created as dark proton sequences and the nuclear physics involved is in considerably smaller energy scale than usually. This mechanism could allow at least the generation of nuclei heavier than Fe not possible inside stars and supernova explosions would not be needed to achieve this. The observation that transmuted nuclei are observed in four bands for nuclear charge Z irrespective of the catalyst used suggest that catalyst itself does not determined the outcome.

One can of course wonder whether even "transmutation" is an appropriate term now. Dark nucleosynthesis, which could in fact be the mechanism of also ordinary nucleosynthesis outside stellar interiors explaining how elements heavier than iron are produced, might be a more appropriate term.

See the chapter Cold fusion again or the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis?



Three books about cold fusion/LENR

Steven Krivit has written three books or one book in three parts - as you wish - about cold fusion (shortly CF in the sequel) - or low energy nuclear reactions (LENR) - which is the prevailing term nowadays and preferred by Krivit. The term "cold fusion" can be defended only for historical reasons: the process cannot be cold fusion. LENR relies on the Widom-Larsen model (WL) trying to explain the observations using only the existing nuclear and weak interaction physics. Whether LENR is here to stay is still an open question. TGD suggests that even this interpretation is not appropriate: the nuclear physics involved would be dark and associated with heff=n× h phases of ordinary matter having identification as dark matter. Even the term "nuclear transmutation" would be challenged in TGD framework and "dark nuclear synthesis" looks like a more appropriate term.

The books were a very pleasant surprise for many reasons, and I have been able to develop my own earlier overall view by adding important details and missing pieces, which also allowed me to understand the relationship to the Widom-Larsen model (WL).

1. What are the books about?

There are three books.

  1. "Hacking the atom: Explorations in Nuclear Research, vol I" considers the developments between 1990-2006. The first key theme is the tension between two competing interpretations. On one hand, the interpretation as CF involving necessarily new physics besides ordinary nuclear fusion and plagued by a direct contradiction with the expected signatures of fusion processes, in particular those of D+D→ 4He. On the other hand, the interpretation as LENR in the framework of WL in which no new physics is assumed and neutrons and weak interactions are in a key role.

    The second key theme is the tension between two competing research strategies.

    1. The first strategy tried to demonstrate convincingly that heat is produced in the process - commercial applications were the basic goal. This led to many premature declarations about the solution of energy problems within a few years and provided excellent weapons for the academic world opposing cold fusion on the basis of textbook wisdom.
    2. The second strategy studied the reaction products and demonstrated convincingly that nuclear transmutations (isotopic shifts) took place. This aspect did not receive attention in public and the attempts to ridicule the field have directed attention to the first approach and to the use of the term "cold fusion".
    According to Krivit, the CF era ended around 2006, when Widom and Larsen proposed their model in which LENR would be the mechanism (see this). The Widom-Larsen model (WL) can however be criticized for some unnatural looking assumptions: the electron is required to have a renormalized mass considerably higher than the real mass; the neutrons initiating nuclear reactions are assumed to have ultralow energies below the thermal energy of target nuclei. This requires the electron mass to be larger than but extremely near to the neutron-proton mass difference (see this, this, this, and this). The gamma rays produced in the process are assumed to transform to infrared radiation.

    To my view, WL is not the end of the story. New physics is required. For instance, the work of professor Holmlid and his team (see this and this) has provided new fascinating insights into what might be the mechanism of what has been called nuclear transmutations.

  2. "Fusion Fiasco: Explorations in Nuclear Research, vol II" discusses the developments during 1989 when cold fusion was discovered by Fleischmann and Pons (see this) and interpreted as CF. It soon turned out that the interpretation has deep problems and CF got the label of pseudoscience.
  3. "Lost History: Explorations in Nuclear Research, vol III" tells about a surprisingly similar sequence of discoveries, which has been cleaned away from the history books of science because it did not fit with the emerging view about nuclear physics and condensed matter physics as completely separate disciplines. Although I had seen some remarks about this era I had not become aware of what really happened. It seems that discoveries can be accepted only when the time is mature for them, and it is far from clear whether the time is ripe even now.
What I say in the sequel necessarily reflects my limitations as a dilettante in the field of LENR/CF. My interest in the topic has lasted for about two decades and comes from different sources: LENR/CF is an attractive application for the unification of fundamental interactions that I have developed for four decades now. This unification predicts a lot of new physics - not only in Planck length scale but in all length scales - and it is of course fascinating to try to understand LENR/CF in this framework.

For instance, while reading the book, I realized that my own references to the literature have been somewhat random and not always appropriate. I do not have any systematic overall view about what has been done in the field: here the book does a wonderful service. It was a real surprise to find that the first evidence for transmutations/isotope shifts emerged already about a century ago and also how soon isotope shifts were re-discovered after the Pons-Fleischmann discovery. The insistence on the D+D→ 4He fusion model remains for an outsider as mysterious as the refusal of mainstream nuclear physicists to consider the possibility of new nuclear physics. One new valuable bit of information was the evidence that it is the cathode material that transforms to the isotope shifted nuclei: this helped to develop my own model in more detail.

Remark: A comment concerning the terminology. I agree with the author that cold fusion is not a precise or even correct term. I have myself taken CF as nothing more than a letter sequence and defended this practice to myself as a historical convention. My conviction is that the phenomenon in question is not a nuclear fusion but I am not at all convinced that it is LENR either. Dark nucleosynthesis is my own proposal.

What did I learn from the books?

Needless to say, the books are extremely interesting, for both layman and scientist - say a physicist or a chemist. The books provide a very thorough view about the history of the subject. There is also an extensive list of references to the literature. Since I am not an experimentalist and feel myself a dilettante in this field as a theoretician, I am unable to check the correctness and reliability of the data presented. In any case, the overall view is consistent with what I have learned about the situation during the years. My opinion about WL is however different.

I have been working with ideas related to CF/LENR (or nuclear transmutations) but found that the books provided also completely new information and I became aware of some new critical points.

I have had a rather imbalanced view about transmutations/isotopic shifts and it was a surprise to see that they were discovered already in 1989 when Fleischmann and Pons published their work. Even more, the premature discovery of transmutations a century ago (1910-1930), interpreted by Darwin as a collective effect, was new to me. Articles about transmutations were published in prestigious journals like Nature and Naturwissenschaften. The written history is however the history of winners, and all traces of this episode disappeared from the history books of physics after the emergence of the standard model of nuclear physics assuming that nuclear physics and condensed matter physics are totally isolated disciplines. The developments after the establishment of the standard model relying on the GUT paradigm look to me surprisingly similar.

Sternglass - still a graduate student - wrote around 1947 to Einstein about his preliminary ideas concerning the possibility to transform protons to neutrons in strong electric fields. It came as a surprise to Sternglass that Einstein supported his ideas. I must say that this increased my respect for Einstein even further. Einstein's physical intuition was marvellous. In 1951 Sternglass found that with strong voltages in the keV range protons could be transformed to neutrons at an unexpectedly high rate. This is strange since the process is kinematically impossible for free protons: it can however be seen as support for the WL model.

Also scientists are humans with their human weaknesses and strengths and the history of CF/LENR is full of examples of both light and dark sides of human nature. Researchers are fighting for funding and the successful production of energy was also the dream of many people involved. There were also people who saw CF/LENR as a quick way to become a millionaire. Getting a glimpse of this dark side was rewarding. The author knows most of the influential people who have worked in the field and this gives special authenticity to the books.

It was a great service for the reader that the basic view about what happened was stated clearly in the introduction. I noticed also that with some background one can pick up any section and start to read: this is a service for a reader like me. I would have perhaps divided the material into separate parts but probably the author's less bureaucratic choice leaving room for surprise is better after all.

Who should read these books? The books would be a treasure for any physicist ready to challenge the prevailing prejudices and learn about what science is as seen from the kitchen side. Probably this period will be seen in the future as very much analogous to the period leading to the birth of atomic physics and quantum theory. Also a layman could enjoy reading the books; especially the stories about the people involved - both scientists and those funding the research and academic power holders - are fascinating. The history of cold fusion is a drama which one can see as a fight between Good and Evil, and eventually one realizes that also Good can divide into Good and Evil. This story teaches a lot about the role of egos in all branches of science and in all human activities. Highly rationally behaving science professionals can suddenly start to behave completely irrationally when their egos feel threatened.

My hope is that the books could wake up the mainstream colleagues to finally realize that CF/LENR - or whatever you wish to call it - is not pseudoscience. Most workers in the field are highly competent, intellectually honest, and have had such a deep passion for understanding Nature that they have been ready to suffer all the humiliations that the academic hegemony can offer for dissidents. The results about nuclear transmutations are genuine and pose a strong challenge for the existing physics, and to my opinion force us to give up the naive reductionistic paradigm. People building unified theories of physics should be keenly aware of these phenomena challenging the reductionistic paradigm even at the level of nuclear and condensed matter physics.

2. The problems of WL

For me the first book, representing the state of CF/LENR as it was around 2004, was the most interesting. In his first book Krivit sees the 1990-2004 period as a gradual transition from the cold fusion paradigm to the realization that nuclear transmutations occur and the fusion model does not explain this process.

The basic assumption of the simplest fusion model was that the fusion D+D → 4He explains the production of heat. This excluded the possibility that the phenomenon could take place also in light water with deuterium replaced with hydrogen. It however turned out that also ordinary water allows the process. The basic difficulty is of course the Coulomb wall, but the model has also difficulties with the reaction signatures and the production rate of 4He is too low to explain the heat production. Furthermore, gamma rays accompanying 4He production were not observed. The occurrence of transmutations is a further problem. Production of Li was observed already in 1989, and later the Russian trio Kucherov, Savvatinova and Karabut detected tritium, 4He, and heavy elements. They also observed modifications at the surface of the cathode down to a depth of 0.1-1 micrometers.

Krivit sees LENR as a more realistic approach to the phenomena involved. In LENR the Widom-Larsen model (WL) is the starting point. This would involve no new nuclear physics. I also see WL as a natural starting point but I am skeptical about understanding CF/LENR in terms of existing physics. Some new physics seems to be required and I have been doing intense propaganda for a particular kind of new physics explaining cold fusion (see this).

WL assumes that the weak process proton (p) → neutron (n) occurring via e+ p→ n+ν (e denotes electron and ν neutrino) is the key step in cold fusion. After this step the neutron finds its way to the nucleus easily and the process continues in the conventional sense as an analog of the r-process assumed to give rise to elements heavier than iron in supernova explosions, and leads to the observed nuclear transmutations. Essentially one proton is added in each step decomposing into four sub-steps involving the beta decay n→ p and its reversal.

There are however problems.

  1. Already the observations of Sternglass suggest that e+ p→ n+ν occurs. e+ p→ n+ν is however kinematically impossible for free particles. e should have a considerably higher effective mass, perhaps caused by collective many-body effects. e+ p→ n+ν could occur in the negatively charged surface layer of the cathode provided the sum of the rest masses of e and p is larger than that of n. This requires a rather large renormalization of the electron mass, claimed to be due to the presence of strong electric fields. Whether there really exists a mechanism increasing the effective mass of the electron is far from obvious; strong nuclear electric fields are proposed to cause this.
  2. A second problematic aspect of WL is the extreme slowness of the rate of the beta decay transforming proton to neutron. For ultraslow neutrons the cross section for the absorption of a neutron by a nucleus increases as 1/vrel, vrel the relative velocity, and in principle could compensate the extreme slowness of the weak decays. The proposal is that neutrons are ultraslow. This is satisfied if the sum of the rest masses of e and p is only slightly larger than the neutron mass. One would have m_e ≈ m_n - m_p + Δ E_n, where Δ E_n is the kinetic energy of the neutron. To obtain the correct order of magnitude for the rate of neutron absorptions Δ E_n should indeed be extremely small. One should have Δ E = 10^-12 eV, and one has Δ E/m_p = 10^-21! This requires fine tuning and it is difficult to believe that the electric field causing the renormalization could be so precisely fine-tuned.

    Δ E corresponds to an extremely low temperature of about 10^-8 K: it is hard to imagine this at room temperature. The thermal energy of the target nucleus at room temperature is of the order of 10^-11 A m_p, A the mass number. Hence it would seem that the thermal motion of the target nuclei masks the effect (these numbers are checked in the sketch after this list).

  3. One should also understand why gamma rays emitted in the ordinary nuclear interactions after neutron absorption are not detected. The proposal is that gamma rays somehow transform to infrared photons, which would cause the heating. This would be a collective effect involving quantum entanglement of electrons. One might hope that by quantum coherence the neutron absorption rate could be proportional to N^2 instead of N, where N is the number of nuclei involved. This looks logical but I am not convinced about the physical realizability of this proposal.
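The degree of fine tuning required by WL can be illustrated numerically (a sketch only, assuming Δ E = 10^-12 eV as quoted above and room temperature T = 300 K; everything else is standard constants):

```python
# Sketch: the fine tuning required by WL. Assumptions: Delta_E = 1e-12 eV as quoted
# above, room temperature T = 300 K; standard constants otherwise.

k_B = 8.617e-5     # Boltzmann constant in eV/K
m_p = 938.272e6    # proton rest energy in eV

Delta_E = 1e-12    # required neutron kinetic energy in eV

print(f"Delta_E / m_p           : {Delta_E / m_p:.1e}")         # ~1e-21
print(f"equivalent temperature  : {Delta_E / k_B:.1e} K")       # ~1e-8 K

E_thermal = 1.5 * k_B * 300.0
print(f"thermal energy at 300 K : {E_thermal:.3f} eV")          # ~0.039 eV
print(f"thermal / Delta_E       : {E_thermal / Delta_E:.1e}")   # ~4e10
```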
To my opinion these objections are really serious.

See the chapter Cold fusion again of "Hyper-finite Factors and Dark Matter Hierarchy" or the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis?



How to demonstrate quantum superposition of classical gravitational fields?

There was a rather interesting article in Nature (see this) by Marletto and Vedral about the possibility of demonstrating the quantum nature of gravitational fields by using weak measurement of a classical gravitational field affecting it only very weakly. There is also an article in arXiv by the same authors (see this). The approach relies on quantum information theory.

The gravitational field would serve as a measurement interaction and the weak measurements would be applied to a gravitational witness serving as a probe - the technical term is ancilla. The authors claim that weak measurements giving rise to an analog of the Zeno effect could be used to test whether the quantum superposition of classical gravitational fields (QSGR) does take place. One can however argue that the extreme weakness of gravitation implies that other interactions and thermal perturbations mask it completely in the standard physics framework. Also the decoherence of gravitational quantum states could be argued to make the test impossible.

One must however take these objections with a big grain of salt. After all, we do not have a theory of quantum gravity and all assumptions made about quantum gravity might not be correct. For instance, the vision about reduction to Planck length scale might be wrong. There is also the mystery of dark matter, which might force a considerable modification of the views about dark matter. Furthermore, General Relativity itself has conceptual problems: in particular, the classical conservation laws playing a crucial role in quantum field theories are lost. Superstrings were a promising candidate for a quantum theory of gravitation but failed as a physical theory.

In TGD, which was born as an attempt to solve the energy problem of General Relativity and soon extended to a theory unifying gravitation and standard model interactions and also generalizing string models, the situation might however change. In zero energy ontology (ZEO) the sequence of weak measurements is more or less equivalent to the existence of self identified as a generalized Zeno effect! The value of heff/h=n characterizes the flux tubes mediating various interactions and can be very large for gravitational flux tubes (heff is proportional to GMm/v0, where v0<c has dimensions of velocity, and M and m are masses at the ends of the flux tube) with Mm> v0 mPl^2 (mPl denotes Planck mass) at their ends. This means a long coherence time characterized in terms of the scale of the causal diamond (CD). The lifetime T of self is proportional to heff so that for a gravitational self T is very long as compared to that for an electromagnetic self. Selves could correspond to sub-selves of self identifiable as sensory mental images so that sensory perception would correspond to weak measurements and for gravitation the times would be long: we indeed feel the gravitational force all the time. Consciousness and life would provide a basic proof for the QSGR (note that a large neuron has mass of order Planck mass!).
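To get a feeling for the magnitudes involved, here is a rough sketch taking heff ≈ GMm/v0 as above; the choices M = Earth mass, m = Planck mass and v0 = c/2^11 are my own illustrative sample values, not fixed by the text:

```python
# Sketch: heff ~ G*M*m/v0 for illustrative values. Only the proportionality and the
# condition M*m > (v0/c)*m_Pl^2 come from the text; M, m and v0 are my sample choices.

G    = 6.674e-11   # m^3 kg^-1 s^-2
c    = 2.998e8     # m/s
hbar = 1.055e-34   # J s
m_Pl = 2.176e-8    # Planck mass in kg

M  = 5.972e24      # Earth mass in kg (assumption)
m  = m_Pl          # particle of the order of Planck mass (assumption)
v0 = c / 2**11     # sample value for the velocity parameter (assumption)

hbar_gr = G * M * m / v0
print(f"hbar_gr / hbar : {hbar_gr / hbar:.1e}")            # enormous, ~1e35
print(f"M*m            : {M * m:.1e} kg^2")
print(f"(v0/c)*m_Pl^2  : {(v0 / c) * m_Pl**2:.1e} kg^2")   # much smaller, so Mm > v0*m_Pl^2
```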

See the article How to demonstrate quantum superposition of classical gravitational fields? or the chapter Quantum criticality and dark matter.



Anomalous neutron production from an arc current in gaseous hydrogen

I learned about a nuclear physics anomaly new to me (actually the anomaly is 64 years old) from an article of Norman and Dunning-Davies in Research Gate (see this). Neutrons are produced from an arc current in hydrogen gas with a rate dramatically exceeding the rate predicted by the standard model of electroweak interactions, in which the production should occur through e-+p→ n+ν by weak boson exchange. The low electron energies make the process also kinematically impossible. An additional strange finding due to Borghi and Santilli is that the neutron production can in some cases be delayed by several hours. Furthermore, according to Santilli neutron production occurs only for hydrogen but not for heavier nuclei.

In the following I sum up the history of the anomaly following closely the representation of Norman and Dunning-Davies (see this): this article gives references and details and is strongly recommended. This includes the pioneering work of Sternglass in 1951, the experiments of Don Carlo Borghi in the late 1960s, and the rather recent experiments of Ruggiero Santilli (see this).

The pioneering experiment of Sternglass

The first observation of anomalously large production of neutrons using a current arc in hydrogen gas was made by Ernest Sternglass in 1951 while completing his Ph.D. thesis at Cornell. He wrote to Einstein about his inexplicable results, which seemed to occur in conditions lacking sufficient energy to synthesize the neutrons that his experiments had indeed somehow apparently created. Although Einstein firmly advised that the results must be published even though they apparently contradicted standard theory, Sternglass refused due to the stultifying preponderance of contrary opinion, and so his results were preemptively excluded under orthodox pressure within the discipline, leaving them unpublished. Edward Trounson, a physicist working at the Naval Ordnance Laboratory, repeated the experiment and again gained successful results but they too were not published.

One cannot avoid the question what physics would look like today if Sternglass had published or managed to publish his results. One must however remember that the first indications for cold fusion emerged also surprisingly early but did not receive any attention and that cold fusion researchers were for decades labelled as next to criminals. Maybe the extreme conservatism following the revolution in theoretical physics during the first decades of the previous century would have prevented his work from receiving the attention that it would have deserved.

The experiments of Don Carlo Borghi

Italian priest-physicist Don Carlo Borghi in collaboration with experimentalists from the University of Recife, Brazil, claimed in the late 1960s to have achieved the laboratory synthesis of neutrons from protons and electrons. C. Borghi, C. Giori, and A. Dall'Olio published 1993 an article entitled "Experimental evidence of emission of neutrons from cold hydrogen plasma" in Yad. Fiz. 56 and Phys. At. Nucl. 56 (7).

Don Borghi's experiment was conducted via a cylindrical metallic chamber (called "klystron") filled up with a partially ionized hydrogen gas at a fraction of 1 bar pressure, traversed by an electric arc with about 500 V and 10 mA as well as by microwaves with 10^10 Hz frequency. Note that the energies of electrons would be below 0.5 keV and non-relativistic. In the cylindrical exterior of the chamber the experimentalists placed various materials suitable to become radioactive when subjected to a neutron flux (such as gold, silver and others). Following exposures of the order of weeks, the experimentalists reported nuclear transmutations due to a claimed neutron flux of the order of 10^4 cps, apparently confirmed by beta emissions not present in the original material.

Don Borghi's claim remained un-noticed for decades due to its incompatibility with the prevailing view about weak interactions. The process e-+p→ n+ν is also forbidden by conservation of energy unless the total cm energy of the proton and the electron exceeds the sum of their rest masses by Δ E = m_n - m_p - m_e = 0.78 MeV. This requires highly relativistic electrons. Also the cross section for the reaction proceeding by exchange of a W boson is extremely small at low energies (about 10^-20 barn: barn = 10^-28 m^2 represents the natural scale for cross sections in nuclear physics). Some new physics must be involved if the effect is real. The situation is strongly reminiscent of cold fusion (or low energy nuclear reactions (LENR)), which many mainstream nuclear physicists still regard as a pseudoscience.
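A quick check of the threshold quoted above, using standard rest masses:

```python
# Sketch: kinematic threshold for e + p -> n + nu from standard rest masses (MeV).
m_n = 939.565
m_p = 938.272
m_e = 0.511

print(f"Delta_E = {m_n - m_p - m_e:.2f} MeV")   # ~0.78 MeV, as quoted above
```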

Santilli's experiments

Ruggero Santilli (see this) replicated the experiments of Don Borghi. Both in the experiments of Don Carlo Borghi and those of Santilli, delayed neutron synthesis was sometimes observed. Santilli analyzes several alternative proposals explaining the anomaly and suggests that a new spin zero bound state of electron and proton, with rest mass below the sum of proton and electron masses and absorbed by nuclei which then decay radioactively, could explain the anomaly. The energy needed to overcome the kinematic barrier could come from the energy liberated by the electric arc. The problem of the model is that it has no connection with the standard model.

According to Santilli:

" A first series of measurements was initiated with Klystron I on July 28,2006, at 2 p.m. Following flushing of air, the klystron was filled up with commercial grale hydrogen at 25 psi pressure. We first used detector PM1703GN to verify that the background radiations were solely consisting of photon counts of 5-7 μR/h without any neutron count; we delivered a DC electric arc at 27 V and 30 A (namely with power much bigger than that of the arc used in Don Borghi's tests...), at about 0.125" gap for about 3 s; we waited for one hour until the electrodes had cooled down, and then placed detector PM1703GN against the PVC cylinder. This resulted in the detection of photons at the rate of 10 - 15 μR/hr expected from the residual excitation of the tips of the electrodes, but no neutron count at all.

However, about three hours following the test, detector PM1703GN entered into sonic and vibration alarms, specifically, for neutron detections off the instrument maximum of 99 cps at about 5' distance from the klystron while no anomalous photon emission was measured. The detector was moved outside the laboratory and the neutron counts returned to zero. The detector was then returned to the laboratory and we were surprised to see it entering again into sonic and vibrational alarms at about 5' away from the arc chamber with the neutron count off scale without appreciable detection of photons, at which point the laboratory was evacuated for safety.

After waiting for 30 minutes (double neutron's lifetime), we were surprised to see detector PMl703GN go off scale again in neutron counts at a distance of 10' from the experimental set up, and the laboratory was closed for the day."

TGD based model

The basic problems to be solved are the following.

  1. What is the role of current arc and other triggering impulses (such as microwave radiation or pressure surge mentioned by Santilli): do they provide energy or do they have some other role?
  2. Neutron production is kinematically impossible if weak interactions mediate it. Even if it were kinematically possible, weak interaction rates are quite too slow. The creation of intermediate states via other than weak interactions would solve both problems. If weak interactions are involved in the creation of the intermediate states, how can their rates be so high?
  3. What causes the strange delays in the production in some cases but not always? Why is hydrogen gas preferred?
The effect brings strongly in mind cold fusion, for which TGD proposes a model (see this) in terms of generation of dark nuclei with non-standard value heff=n× h of Planck constant formed from dark proton sequences at flux tubes. The binding energy for these states is supposed to be much lower than for the ordinary nuclei and eventually these nuclei would decay to ordinary nuclei in collisions with metallic targets attracting positively charged magnetic flux tubes. The energy liberated would be essentially the ordinary nuclear binding energy. Note that the creation of dark proton sequences does not require weak interactions so that the basic objections are circumvented.


Could this model explain the anomalous neutron production and its strange features?

  1. Why would an electric arc, pressure surge, or microwave radiation be needed? Dark phases are formed at quantum criticality (see this) and give rise to long range correlations via quantum entanglement made possible by large heff=n× h. The presence of an electric arc occurring as dielectric breakdown is indeed a critical phenomenon.

    Already Tesla discovered strange phenomena in his studies of arc discharges but his discoveries were forgotten by mainstream. TGD explanation (see this) could be the same for Tesla's findings, for cold fusion (see this), Pollack effect (see this) and for the anomalous production of neutrons. Even electrolysis would involve in an essential manner Pollack effect and new physics.

    Also energy feed might be involved. Quite generally, in TGD inspired quantum biology the generation of dark states requires energy feed and the role of metabolic energy is to excite dark states. For instance, dark atoms have smaller binding energy and the energies of cyclotron states increase with heff/h. In the present case, part of the microwave photons could be dark and have much higher energy than otherwise.

    Could the production of dark proton sequences at magnetic flux tubes be all that is needed so that the possible dark variant of the reaction e-+p→ n+ν would not be needed at all?

  2. If also weak bosons appear as dark variants, their Compton length is scaled up accordingly and in scales shorter than the Compton length they behave effectively as massless particles, and weak interactions would become as strong as electromagnetic interactions. This would make possible the decay of dark proton sequences at magnetic flux tubes to beta stable dark isotopes via p→ n+e++ν. Neutrons would be produced in the decays of the dark nuclei to ordinary nuclei liberating nuclear binding energy. Note however that TGD allows also to consider p-adically scaled variants of weak bosons with much smaller mass scale possibly important in biology, and one cannot exclude them from consideration.
  3. The reaction e-+p→ n+ν is not necessary in the model. One can however ask, whether there could exist a mechanism making the dark reaction e-+p→ n+ν kinematically possible. If the scale of dark nuclear binding energy is strongly reduced, also p→ n+e++ν in dark nuclei would become kinematically impossible (in ordinary nuclei nuclear binding energy makes n effectively lighter than p).

    TGD based model for nuclei as strings of nucleons (see this and this) connected by neutral or charged (possibly colored) mesonlike bonds with quark and antiquark at their ends could resolve this problem. One could have exotic nuclei in which a proton plus a negatively charged bond could effectively behave like a neutron. Dark weak interactions would take place for neutral bonds between protons and reduce the charge of the bond from q=0 to q= -1, transforming p to an effective n. This was assumed also in the model of dark nuclei and also in the model of ordinary nuclei and predicts a large number of exotic states. One can of course ask whether the nuclear neutrons are actually pairs of a proton and a negatively charged bond.

  4. What about the delays in neutron production occurring in some cases? Why not always? In the situations in which there is a delay in neutron production, the dark nuclei could have rotated around the magnetic flux tubes of the magnetic body (MB) of the system before entering the metal target, so that one would have a delayed production.
  5. Why would hydrogen be preferred? Why, for instance, would deuterons and heavier isotopes containing neutrons not form dark proton sequences at magnetic flux tubes? Why would the probability for the transformation of, say, D=pn to its dark variant be very small?

    If the binding energy of dark nuclei per nucleon is several orders of magnitude smaller than for ordinary nuclei, the explanation is obvious. The ordinary nuclear binding energy is much higher than the dark binding energy so that only the sequences of dark protons can form dark nuclei. The first guess (see this) was that the binding energy is analogous to Coulomb energy and thus inversely proportional to the size scale of the dark nucleus, scaling like h/heff. One can however ask why D with ordinary size could not serve as a sub-unit.

For details see the chapter Cold Fusion Again or the article Anomalous neutron production from an arc current in gaseous hydrogen.



Non-local production of photon pairs as support for heff/h=n hypothesis

Again a new anomaly! Photon pairs have been created by a new mechanism. Photons emerge at different points! See this.

Could this give support for the TGD based general model for an elementary particle as a string-like object (flux tube) with the first end (wormhole contact) carrying the quantum numbers - in the case of a gauge boson, a fermion and an antifermion at the opposite throats of the contact? The second end would carry a neutrino-right-handed neutrino pair neutralizing the possible weak isospin. This would give only local decays. Also emissions of photons from a charged particle would be local.

Could the bosonic particle be a mixture of two states? For the first state the flux tube would have fermion and antifermion at the same end of the flux tube: only local decays. For the second state fermion and antifermion would reside at the ends of the flux tubes residing at throats associated with different wormhole contacts. This state would give rise to non-local two-photon emissions. Mesons of hadron physics would correspond to this kind of states and in old-fashioned hadron physics one speaks about photon-vector meson mixing in the description of the photon-hadron interactions.

If the Planck constant heff/h=n of the emitting particle is large, the distance between the photon emissions would be long. The non-local decays could make the exotic decays visible and would allow deducing the value of n! This would however require the transformation of the emitted dark photons to ordinary ones (the same would happen when dark photons transform to biophotons).

Can one say anything about the length of the flux tube? The magnetic flux tube contains a fermionic string. The length of this string is of the order of the Compton length and of the order of the p-adic length scale.

What about photon itself - could it have non-local fermion-antifermion decays based on the same mechanism? It is not clear what the length of the photonic string is. Photon is massless, so there are no obvious scales! One identification of the length would be as the wavelength, defining also the p-adic length scale.

To sum up: the nonlocal decays and emissions could lend strong support for both the flux tube identification of particles and for the hierarchy of Planck constants. It might be possible to even measure the value of n associated with the quantum critical state by detecting decays of this kind.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

For details see the chapter Quantum criticality and dark matter.



Hierarchy of Planck constants, space-time surfaces as covering spaces, and adelic physics

From the beginning it was clear that heff/h=n corresponds to the number of sheets for a covering space of some kind. First the covering was assigned with the causal diamonds. Later I assigned it with space-time surfaces but the details of the covering remained unclear. The final identification emerged only in the beginning of 2017.

Number theoretical universality (NTU) leads to the notion of adelic space-time surface (monadic manifold) involving a discretization in an extension of rationals defining a particular level in the hierarchy of adeles defining the evolutionary hierarchy. The first formulation was proposed here and a more elegant formulation here.

The key constraint is NTU for the adelic space-time containing sheets in the real sector and various p-adic sectors, which are extensions of p-adic number fields induced by an extension of rationals which can contain also powers of a root of e inducing a finite-D extension of p-adic numbers (e^p is an ordinary p-adic number in Q_p).

One identifies the numbers in the extension of rationals as common for all number fields and demands that the imbedding space has a discretization in an extension of rationals in the sense that the preferred coordinates of the imbedding space implied by isometries belong to the extension of rationals for the points of the number theoretic discretization. This implies that the versions of isometries with group parameters in the extension of rationals act as discrete versions of symmetries. The correspondence between real and p-adic variants of the imbedding space is extremely discontinuous for a given adelic imbedding space (there is a hierarchy of them with levels characterized by extensions of rationals). Space-time surfaces typically contain a rather small set of points in the extension (x^n+y^n=z^n has no rational solutions for n>2!). Hence one expects that a discretization with a finite cutoff length at the space-time level could be enough for the sufficiently low space-time dimension D=4.

After that one assigns in the real sector an open set to each point of the discretization and these open sets define a manifold covering. In the p-adic sector one can assign the 8th Cartesian power of ordinary p-adic numbers to each point of the number theoretic discretization. This gives both discretization and smooth local manifold structure. What is important is that the Galois group of the extension acts on these discretizations and one obtains from a given discretization a covering space with the number of sheets equal to a factor of the order of the Galois group, typically equal to the order of the Galois group itself.
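As a toy illustration of the last statement (my own example, not from the text): for the extension Q(√2,√3) of rationals one has

$$\mathrm{Gal}\big(\mathbb{Q}(\sqrt{2},\sqrt{3})/\mathbb{Q}\big)\cong \mathbb{Z}_2\times\mathbb{Z}_2\ ,\qquad |\mathrm{Gal}|=4\ ,$$

so that under the above hypothesis the number of sheets of the covering could be any factor n ∈ {1, 2, 4} of the order of the Galois group, with n = 4 the typical value.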

heff/h=n was identified from the beginning as the dimension of the poly-sheeted covering assignable to the space-time surface. The number n of sheets would naturally be a factor of the order of the Galois group, implying that heff/h=n is bound to increase during number theoretic evolution so that the algebraic complexity increases. Note that WCW decomposes into sectors corresponding to the extensions of rationals and the dimension of the extension is bound to increase in the long run by localizations to various sectors in self measurements (see this). Dark matter hierarchy represents number theoretical/adelic physics and therefore has now a rather rigorous mathematical justification. It is however good to recall that the heff/h=n hypothesis emerged from an experimental anomaly: radiation at ELF frequencies had quantal effects on vertebrate brain, impossible in standard quantum theory since the energies E=hf of photons are ridiculously small as compared to thermal energy.
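A back-of-the-envelope version of the ELF argument (f = 10 Hz and physiological temperature T = 310 K are my own illustrative choices):

```python
# Sketch: ordinary photon energy E = h*f at ELF frequencies vs thermal energy.
# Assumptions: f = 10 Hz as a representative ELF frequency, T = 310 K.

h   = 4.136e-15   # Planck constant in eV s
k_B = 8.617e-5    # Boltzmann constant in eV/K

f = 10.0          # Hz
E_photon  = h * f
E_thermal = k_B * 310.0

print(f"E = h*f           : {E_photon:.1e} eV")    # ~4e-14 eV
print(f"thermal energy kT : {E_thermal:.3f} eV")   # ~0.027 eV
print(f"kT / (h*f)        : {E_thermal / E_photon:.1e}")
# heff/h = n of this order (~1e12) would lift a dark ELF photon to the thermal energy scale.
```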

Indeed, since n is a positive integer, evolution is analogous to diffusion in a half-line and n unavoidably increases in the long run just as the particle diffuses farther away from the origin (by looking at what gradually happens near the paper basket one understands what this means). The increase of n implies the increase of maximal negentropy and thus of negentropy. Negentropy Maximization Principle (NMP) follows from adelic physics alone and there is no need to postulate it separately. Things get better in the long run although we do not live in the best possible world, as Leibniz, who first proposed the notion of monad, believed!

For details see the chapter Quantum criticality and dark matter.



Time crystals, macroscopic quantum coherence, and adelic physics

Time crystals (see this) were proposed by Frank Wilczek in 2012. The idea is that there is a periodic collective motion so that one can see the system as an analog of a 3-D crystal with time appearing as the fourth lattice dimension. One can learn more about real life time crystals here.

The first time crystal was created by Monroe et al (see this) and involved magnetization. By adding a periodic driving force it was possible to generate spin flips inducing a collective spin flip as a kind of domino effect. The surprise was that the period was twice the original period and small changes of the driving frequency did not affect the period. One had something more than a forced oscillation - a genuine time crystal. The period of the driving force - the Floquet period - was 74-75 μs and the system was measured for N=100 Floquet periods, or about 7.4-7.5 milliseconds (1 ms happens to be of the same order of magnitude as the duration of a nerve pulse). I failed to find a comment about the size of the system. With quantum biological intuition I would guess something like the size of a large neuron: about 100 micrometers.

The second law does not favor time crystals. The time in which single particle motions are thermalized is expected to be rather short. In the case of condensed matter systems this time scale would not be much longer than the inverse of a typical atomic transition rate. The rate for the 2P → 1S transition of the hydrogen atom estimated here gives a general idea. The decay rate is proportional to ω³d², where ω = ΔE/hbar is the frequency corresponding to the energy difference between the states, d is the dipole moment proportional to α a0, a0 the Bohr radius and α ∼ 1/137 the fine structure constant. The average lifetime as the inverse of the decay rate would be 1.6 ns and is expected to give a general order of magnitude estimate.
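
As a sanity check of the 1.6 ns figure, here is a minimal numerical sketch (my own, using the standard dipole formula and the textbook matrix element |<1s|r|2p>|² = (2^15/3^10) a0², neither of which is part of the text above):

    from math import pi
    from scipy.constants import hbar, c, e, epsilon_0, physical_constants

    a0 = physical_constants["Bohr radius"][0]
    E_transition = 0.75 * 13.6057 * e              # 2P -> 1S energy, about 10.2 eV, in joules
    omega = E_transition / hbar                    # angular frequency of the emitted photon

    d2 = (2**15 / 3**10) * (e * a0)**2             # squared dipole matrix element |<1s|e r|2p>|^2
    A = omega**3 * d2 / (3 * pi * epsilon_0 * hbar * c**3)   # Einstein A coefficient

    print(f"decay rate = {A:.2e} 1/s, lifetime = {1e9 / A:.2f} ns")
    # Prints a lifetime of about 1.6 ns, matching the order of magnitude quoted above.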

The proposal is that the systems in question emerge in non-equilibrium thermodynamics, which indeed predicts a master-slave hierarchy of time and length scales with masters providing the slowly changing background in which the slaves are forced to move. I am not enough of a specialist to express any strong opinions about the thermodynamical explanation.

What does TGD say about the situation?

  1. So called Anderson localization (see this) is believed to accompany time crystals. In the TGD framework this translates to the fusion of the 3-surfaces corresponding to particles to a single large 3-surface consisting of particle 3-surfaces glued together by magnetic flux tubes. One can say that a relative localization of particles occurs and they more or less lose their relative translational degrees of freedom. This effect occurs always when bound states are formed and would happen already for the hydrogen atom.

    The TGD vision would actually solve a fundamental problem of QED caused by the assumption that proton and electron behave as independent point-like particles: QED predicts a lot of non-existing quantum states since the Bethe-Salpeter equation assumes degrees of freedom, which do not actually exist. Single particle descriptions (Schrödinger equation and Dirac equation), treating proton and electron geometrically as effectively a single particle with reduced mass rather than as independent particles, give an excellent description, whereas QED, which was thought to be something more precise, fails. Quite generally, bound states are not properly understood in QFTs. The color confinement problem is a second example of this: usually it is believed that the failure is solely due to the fact that the color interaction is strong, but the real reason might be much deeper.

  2. In the TGD Universe time crystals would be many-particle systems having a collection of 3-surfaces connected by magnetic flux tubes (a tensor network in terms of condensed matter complexity theory). The magnetic flux tubes would carry dark matter in the TGD sense having heff/h=n increasing the quantal scales - both spatial and temporal - so that one could have time crystals in long scales.

    Biology could provide basic examples. For instance, EEG resonance frequencies could be associated with time crystals assignable to the magnetic body of the brain carrying dark matter with large heff/h=n - so large that the dark photon energy E=heff×f would correspond to an energy above thermal energy (a rough estimate of the required n is sketched after this list). If bio-photons result from phase transitions heff/h=n→1, the energy would be in the visible-UV range. These frequencies would in turn drive the visible matter in the brain and force it to oscillate coherently.

  3. The time crystals claimed by Monroe and Lukin to have been created in the laboratory demand a feed of energy (see this), unlike the time crystals proposed by Wilczek. The finding is consistent with the TGD based model. In TGD the generation of a large heff phase demands energy. The reason is that the energies of states increase with heff. For instance, atomic binding energies decrease as 1/heff². In quantum biology this requires a feed of metabolic energy. Also now the interpretation would be analogous.
  4. The standard physics view would rely on non-equilibrium thermodynamics, whereas the TGD view about time crystals would rely on dark matter and the hierarchy of Planck constants, in turn implied by adelic physics suggested to provide a coherent description fusing real physics as physics of matter and the various p-adic physics as physics of cognition.

    Number theoretical universality (NTU) leads to the notion of adelic space-time surface (monadic manifold) involving a discretization in an extension of rationals defining a particular level in the hierarchy of adeles defining an evolutionary hierarchy. heff/h=n has been identified from the beginning as the dimension of the poly-sheeted covering assignable to the space-time surface. The action of the Galois group of the extension indeed gives rise to a covering space. The number n of sheets would be the order of the Galois group or a factor of it, implying that heff/h=n is bound to increase during evolution so that the complexity increases.

    Indeed, since n is a positive integer, evolution is analogous to diffusion on a half-line, and n unavoidably increases in the long run just as a particle diffuses farther away from the origin. The increase of n implies the increase of the maximal negentropy and thus of negentropy. Negentropy Maximization Principle (NMP) follows from adelic physics alone and there is no need to postulate it separately. Things get better in the long run although we do not live in the best possible world, as Leibniz, who first proposed the notion of monad, believed!
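The following back-of-the-envelope sketch (my own numbers; the EEG frequency and photon energy are assumptions, not taken from the text) estimates the n needed for a dark photon at an EEG frequency to carry an energy in the visible range:

    from scipy.constants import h, e

    f_eeg = 10.0           # Hz, a typical EEG (alpha band) frequency: an assumption
    E_photon = 2.0 * e     # J, a visible-range photon energy (~2 eV): an assumption

    n = E_photon / (h * f_eeg)    # E = heff*f = n*h*f  =>  n = E/(h*f)
    print(f"required heff/h = n ~ {n:.1e}")
    # Gives n of the order 5e13: with such heff a 10 Hz dark photon would carry an
    # energy far above thermal energy and in the visible range after heff/h -> 1.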

For details see the chapter Quantum criticality and dark matter.



Why metabolism and what happens in bio-catalysis?

The TGD view about dark matter gives also a strong grasp of metabolism and bio-catalysis - the key elements of biology.

Why is metabolic energy needed?

The simplest and at the same time most difficult question that an innocent student can ask in biology class is: "Why must we eat?". Or using more physics oriented language: "Why must we get metabolic energy?". The answer of the teacher might be that we do not eat to get energy but to get order. The stuff that we eat contains ordered energy: we eat order. But order in standard physics is lack of entropy, lack of disorder. The student could get nosy and argue that excretion produces the same outcome as eating but is not enough for survival.

We could go to a deeper level and ask why metabolic energy is needed in biochemistry. Suppose we do this in the TGD Universe with dark matter identified as phases characterized by heff/h=n.

  1. Why would metabolic energy be needed? The intuitive answer is that evolution requires it and that evolution corresponds to the increase of n=heff/h. To see the answer to the question, notice that the energy scale for the bound states of an atom is proportional to 1/h² and for a dark atom to 1/heff² ∝ 1/n² (do not confuse this n with the integer n labelling the states of the hydrogen atom!).
  2. Dark atoms have smaller binding energies and their creation by a phase transition increasing the value of n demands a feed of energy - metabolic energy! If the metabolic energy feed stops, n is gradually reduced. The system gets tired, loses consciousness, and eventually dies.

    What is remarkable is that the scale of atomic binding energies decreases with n only in dimension D=3. In other dimensions it increases, and in D=4 one cannot even speak of bound states! This can be easily found by studying the Schrödinger equation for the analog of the hydrogen atom in various dimensions. Life based on metabolism seems to make sense only in spatial dimension D=3 (see the sketch below). Note however that there are also other quantum states than atomic states with a different dependence of energy on heff.
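A minimal numerical sketch of the argument above (my own illustration; the reference values are assumptions): with the binding energy scale scaling as 1/n² relative to the ordinary value, the creation of a dark state with larger n costs an energy equal to the lost binding energy, which must be supplied as metabolic energy.

    E0 = 13.6   # eV, ordinary hydrogen ground state binding energy (reference value)

    def binding_energy(n: int) -> float:
        """Binding energy scale of a dark atom with heff = n*h, scaling as 1/n**2."""
        return E0 / n**2

    for n_dark in (2, 6, 12):
        cost = binding_energy(1) - binding_energy(n_dark)
        print(f"n = {n_dark:2d}: binding energy {binding_energy(n_dark):6.3f} eV, "
              f"energy feed needed {cost:6.3f} eV")
    # The larger n becomes, the smaller the remaining binding energy and the larger
    # the energy feed needed; when the feed stops, n relaxes back and energy is liberated.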

Conditions on bio-catalysis

Bio-catalysis is a key mechanism of biology and its extreme efficacy remains to be understood. Enzymes are proteins and ribozymes are RNA sequences acting as biocatalysts.

What does catalysis demand?

  1. Catalyst and reactants must find each other. How this could happen is very difficult to understand in standard biochemistry, in which living matter is seen as a soup of biomolecules. I have already considered the mechanisms making it possible for the reactants to find each other. For instance, in the translation of mRNA to protein the tRNA molecules must find their way to the mRNA at the ribosome. The proposal is that this step is taken care of by reconnection, allowing U-shaped magnetic flux tubes to reconnect to a pair of flux tubes connecting the mRNA and tRNA molecules, followed by a reduction of the value of heff=n×h inducing a reduction of the length of the magnetic flux tubes. This applies also to DNA transcription and DNA replication and to bio-chemical reactions in general.
  2. The catalyst must provide energy for the reactants (their number is typically two) to overcome the potential wall making the reaction rate very slow for energies around thermal energy. The TGD based model for the hydrino atom having larger binding energy than the hydrogen atom, claimed by Randell Mills, suggests a solution. Some hydrogen atom in the catalyst goes from a (dark) hydrogen atom state to a hydrino-like state (a state with smaller heff/h) and liberates the excess binding energy, kicking either reactant over the potential wall so that the reaction can proceed. After the reaction the catalyst returns to the normal state and absorbs the binding energy back.
  3. In the reaction volume the catalyst and reactants must be guided to the correct places. The simplest model of catalysis relies on the lock-and-key mechanism. The generalized Chladni mechanism forcing the reactants to a two-dimensional closed nodal surface is a natural candidate to consider. There are also additional conditions. For instance, the reactants must have the correct orientation, and this could be forced by the interaction with the em field of the ME involved with the Chladni mechanism.
  4. One must also have coherence of chemical reactions meaning that the reaction can occur in a large volume - say in different cell interiors - simultaneously. Here the MB (magnetic body) would induce the coherence by using MEs. The Chladni mechanism might explain this if there is interference of the forces caused by periodic standing waves themselves represented as pairs of MEs.
Phase transition reducing the value of heff/h=n as a basic step in bio-catalysis

The hydrogen atom allows also large heff/h=n variants with n>6, with the scale of the energy spectrum behaving as (6/n)² if n=6 holds true for visible matter. The reduction of n as the flux tube contracts would liberate binding energy, which could be used to promote the catalysis.

The notion of high energy phosphate bond is a somewhat mysterious concept. There are claims that there is no such bond. I have spent a considerable amount of time pondering this problem. Could phosphate contain a (dark) hydrogen atom able to go to a state with a smaller value of heff/h and liberate the excess binding energy? Could the phosphorylation of the acceptor molecule transfer this dark atom associated with the phosphate of ATP to the acceptor molecule? Could the mysterious high energy phosphate bond correspond to the dark atom state? Metabolic energy would be needed to transform ADP to ATP and would generate the dark atom.

Could solar light kick atoms into dark states and in this manner store metabolic energy? Could nutrients carry these dark atoms? Could this energy be liberated as the dark atoms return to ordinary states and be used to drive protons against the potential gradient through ATP synthase, analogous to a turbine of a power plant, transforming ADP to ATP and regenerating the dark atom and thus the "high energy phosphate bond" in ATP? Can one see metabolism as a transfer of dark atoms? Could possible negentropic entanglement disappear and emerge again after ADP→ATP?

Here it is essential that the energies of the hydrogen atom depend on hbareff=n×hbar as hbareff^m with m=-2<0. Hydrogen atoms in dimension D have a Coulomb potential behaving as 1/r^(D-2) by Gauss law, and the Schrödinger equation predicts for D≠4 that the energies satisfy E ∝ (heff/h)^m, m=2+4/(D-4). For D=4 the formula breaks down since in this case the dependence on hbar is not given by a power law. m is negative only for D=3, where one has m=-2. Therefore D=3 would be the unique dimension allowing the hydrino-like states making possible bio-catalysis and life in the proposed scenario.
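The dimensional argument can be checked directly. A small sketch (my own check of the formula quoted above) evaluating the exponent m(D) = 2 + 4/(D-4) = 2(D-2)/(D-4):

    from fractions import Fraction

    # Ground state energy scale E ~ hbar_eff^m for a Coulomb potential ~ 1/r^(D-2);
    # the power law breaks down for D = 4 (and the D = 2 potential is logarithmic).
    for D in (1, 2, 3, 5, 6, 7):
        m = Fraction(2) + Fraction(4, D - 4)
        print(f"D = {D}: m = {m}")
    # Only D = 3 gives a negative exponent (m = -2), so only there does an increase
    # of heff lower the binding energy scale, as the argument above requires.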

It is also essential that the flux tubes are radial flux tubes in the Coulomb field of the charged particle. This makes sense in many-sheeted space-time: electrons would be associated with a pair formed by a flux tube and the 3-D atom so that only part of the electric flux would interact with the electron touching both space-time sheets. This would give the analog of the Schrödinger equation in a Coulomb potential restricted to the interior of the flux tube. Dimensional analysis for the 1-D Schrödinger equation with Coulomb potential would give also in this case the 1/n² dependence. The same applies to states localized to 2-D sheets with the charged ion in the center. These kinds of states bring to mind the Rydberg states of the ordinary atom with a large value of the principal quantum number.

The condition that the dark binding energy is above the thermal energy gives a condition on the value of heff/h=n: n≤32. The size scale of the largest allowed dark atom would be about 100 nm, 10 times the thickness of the cell membrane.

For details see the chapter Quantum criticality and dark matter.



NMP and self

The preparation of an article about number theoretic aspects of TGD forced me to go through various related ideas and led to a considerable integration of the ideas. In this note ideas related directly to consciousness and cognition are discussed.

  1. The adelic approach strongly suggests the reduction of NMP to number theoretic physics, somewhat like the second law reduces to probability theory. The dimension of the extension of rationals, characterizing the hierarchy level of physics and defining an observable measured in state function reductions, is a positive integer and can only increase in the statistical sense. Therefore the maximal value of entanglement negentropy increases as new entangling number theoretic degrees of freedom emerge. heff/h=n, identifiable as a factor of the order of the Galois group of the extension, characterizes the number of these degrees of freedom for a given space-time surface as the number of its sheets.
  2. State function reduction has hitherto been assumed to correspond always to a measurement of the density matrix, which can be seen as a reaction of a subsystem to its environment. This makes perfect sense at the space-time level. Higher level measurements occur however at the level of WCW and correspond to a localization to some sector of WCW determining for instance the quantization axes of various quantum numbers. Even the measurement of heff/h=n would measure the dimension of the Galois group and force a localization to an extension with a Galois group of this dimension. These measurements cannot correspond to measurements of a density matrix since different WCW sectors cannot entangle by WCW locality. This finding will be discussed in the following.
Evolution of NMP

The view about Negentropy Maximization Principle (NMP) has co-evolved with the notion of self and I have considered many variants of NMP.

  1. The original formulation of NMP was in positive energy ontology and made the same predictions as standard quantum measurement theory. The new element was that the density matrix of the sub-system defines the fundamental observable and the system goes to its eigenstate in state function reduction. As found, the localizations to WCW sectors define what might be called self-measurements, identifiable as active volitions rather than reactions.
  2. In p-adic physics one can assign to rational and even algebraic entanglement probabilities a number theoretical entanglement negentropy (NEN) satisfying the same basic axioms as the ordinary Shannon entropy but having negative values and therefore having an interpretation as information. The definition of the (real valued) p-adic entropy reads Sp = -∑k Pk log(|Pk|p), where |.|p denotes the p-adic norm. The news is that Np = -Sp can be positive, and is positive for rational entanglement probabilities. Real entanglement entropy S is always non-negative.

    NMP would force the generation of negentropic entanglement (NE) and stabilize it. The NE resources of the Universe - one might call them Akashic records - would steadily increase.

  3. A decisive step of progress was the realization that NTU forces all states in adelic physics to have entanglement coefficients in some extension of rationals inducing a finite-D extension of p-adic numbers. The same entanglement can be characterized by the real entropy S and the p-adic negentropies Np, which can be positive. One can also define the total p-adic negentropy N = ∑p Np over all p and the total negentropy Ntot = N - S.

    For rational entanglement probabilities it is easy to demonstrate that the generalization of the adelic theorem holds true: Ntot = N - S = 0 (see the numerical sketch after this list). NMP based on Ntot rather than N would therefore say nothing about rational entanglement. For extensions of rationals it is easy to find that N - S > 0 is possible if the entanglement probabilities are of form Xi/n with |Xi|p = 1 and n an integer. Should one identify the total negentropy as the difference Ntot = N - S or as Ntot = N?

    Irrespective of the answer, large p-adic negentropy seems to force large real entropy: this nicely correlates with the paradoxical finding that living systems tend to be entropic although one would expect just the opposite. This relates in a very interesting manner to the work of Jeremy England. The negentropy would be cognitive negentropy and not visible for ordinary physics.

  4. The latest step in the evolution of ideas about NMP was the question whether NMP follows from number theory alone, just as the second law follows from probability theory! This irritates the theoretician's ego but is a victory for the theory. The dimension n of the extension is a positive integer and cannot but grow in the statistical sense in evolution! Since one expects that the maximal value of negentropy (defined as N-S) increases with n, negentropy must increase in the long run.
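The rational case is easy to verify numerically. A minimal sketch (my own, using natural logarithms; the probabilities are an arbitrary example):

    from fractions import Fraction
    from math import log

    def p_adic_norm(x: Fraction, p: int) -> float:
        """|x|_p = p**(-v_p(x)) for a non-zero rational x."""
        v, num, den = 0, x.numerator, x.denominator
        while num % p == 0:
            num //= p; v += 1
        while den % p == 0:
            den //= p; v -= 1
        return float(p) ** (-v)

    P = [Fraction(1, 6), Fraction(1, 3), Fraction(1, 2)]   # rational probabilities, sum = 1
    primes = (2, 3)                                        # the primes appearing in the P_k

    S = -sum(float(pk) * log(float(pk)) for pk in P)                          # real Shannon entropy
    N = sum(float(pk) * log(p_adic_norm(pk, q)) for pk in P for q in primes)  # sum of p-adic negentropies

    print(f"S = {S:.6f}, N = {N:.6f}, N - S = {N - S:.1e}")
    # For rational probabilities the adelic product formula gives N - S = 0, so an
    # NMP based on Ntot = N - S says nothing about rational entanglement.
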
Number theoretic entanglement can be stable

Number theoretical Shannon entropy can serve as a measure for genuine information assignable to a pair of entangled systems. Entanglement with coefficients in the extension is always negentropic if the entanglement negentropy comes from the p-adic sectors only. It can be negentropic if negentropy is defined as the difference of the p-adic negentropy and the real entropy.

The diagonalized density matrix need not belong to the algebraic extension, since the probabilities defining its diagonal elements are eigenvalues of the density matrix obtained as roots of an N:th order polynomial, which in the generic case requires an N-dimensional algebraic extension of rationals. One can argue that since diagonalization is not possible, also the state function reduction selecting one of the eigenstates is impossible unless a phase transition increasing the dimension of the algebraic extension occurs simultaneously. This kind of NE could give rise to cognitive entanglement.
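
A small toy example of this point (my own, not from the text): the eigenvalues of a density matrix with rational entries are in general algebraic numbers outside the rationals, so diagonalization forces a step to a larger extension.

    from sympy import Matrix, Rational

    # A trace-1, positive definite density matrix with entries in Q.
    rho = Matrix([[Rational(2, 3), Rational(1, 4)],
                  [Rational(1, 4), Rational(1, 3)]])

    print(rho.eigenvals())
    # {1/2 - sqrt(13)/12: 1, 1/2 + sqrt(13)/12: 1}: the entanglement probabilities lie
    # in Q(sqrt(13)), a 2-D extension of Q, so the reduction to an eigenstate becomes
    # possible only after the extension is enlarged accordingly.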

There is also a special kind of NE, which can result if one requires that the density matrix serves as a universal observable in state function reduction. The outcome of reduction must be an eigen space of the density matrix, which is a projector to this subspace acting as the identity matrix inside it. This kind of NE allows all unitarily related bases as eigenstate bases (the unitary transformations must belong to the algebraic extension). This kind of NE could serve as a correlate for "enlightened" states of consciousness. Schrödinger's cat would be in this kind of state, stably in a superposition of dead and alive, and any state basis obtained by a unitary rotation from this basis would be equally good. One can say that there are no discriminations in this state, and this is what is claimed about "enlightened" states too.

The vision about number theoretical evolution suggests that NMP forces the generation of NE resources as NE assignable to the "passive" boundary of CD, for which no changes occur during the sequence of state function reductions defining self. It would define the unchanging self as negentropy resources, which could be regarded as a kind of Akashic records. During the next "re-incarnation", after the first reduction to the opposite boundary of CD, the NE associated with the reduced state would serve as new Akashic records for the time reversed self. If NMP reduces to the statistical increase of heff/h=n, the conscious information content of the Universe increases in the statistical sense. In the best possible world of SNMP it would increase steadily.

Does NMP reduce to number theory?

The heretic question that emerged quite recently is whether NMP is actually needed at all! Is NMP a separate principle or could NMP be reduced to mere number theory? Consider first the possibility that NMP is not needed at all as a separate principle.

  1. The value of heff/h=n should increase in evolution by phase transitions increasing the dimension of the extension of rationals. heff/h=n has been identified as the number of sheets of some kind of covering space. The Galois group of the extension acts on the number theoretic discretizations of the monadic surface and the orbit defines a covering space. Suppose n is the number of sheets of this covering and thus the order of the Galois group of the extension of rationals or a factor of it.
  2. It has been already noticed that the "big" state function reductions giving rise to the death and reincarnation of self could correspond to a measurement of n=heff/h implied by the measurement of the extension of rationals defining the adeles. The statistical increase of n follows automatically and implies a statistical increase of the maximal entanglement negentropy. Entanglement negentropy increases in the statistical sense.

    The resulting world would not be the best possible one, unlike for a strong form of NMP demanding that negentropy increases in "big" state function reductions. n can also decrease temporarily, and such reductions seem to be needed. In the TGD inspired model of bio-catalysis the phase transition reducing the value of n for the magnetic flux tubes connecting reacting bio-molecules allows them to find each other in the molecular soup. This would be crucial for understanding processes like DNA replication and transcription.

  3. State function reduction corresponding to the measurement of the density matrix could occur to an eigenstate/eigenspace of the density matrix only if the corresponding eigenvalue and eigenstate/eigenspace are expressible using numbers in the extension of rationals defining the adele considered. In the generic case these numbers belong to an N-dimensional extension of the original extension. This can make the entanglement stable against measurements of the density matrix.

    A phase transition to an extension of the extension containing these numbers would be required to make the reduction possible. A step in number theoretic evolution would occur. Also an entanglement of the measured state pairs with those of a measuring system in an extension containing the extension of the extension would make the reduction possible. Negentropy could be reduced, but the higher-D extension would provide the potential for more negentropic entanglement and NMP would hold true in the statistical sense.

  4. If one has a higher-D eigenspace of the density matrix, the p-adic negentropy is largest for the entire subspace and the sum of real and p-adic negentropies vanishes for all of the subspaces. For negentropy identified as the total p-adic negentropy SNMP would select the entire sub-space and NMP would indeed say something explicit about negentropy.
Or is NMP needed as a separate principle?

Hitherto I have postulated NMP as a separate principle. The strong form of NMP (SNMP) states that negentropy does not decrease in "big" state function reductions corresponding to the death and re-incarnation of self.

One can however argue that SNMP is not realistic. SNMP would force the Universe to be the best possible one, and this does not seem to be the case. Also ethically responsible free will would be very restricted, since self would be forced always to do the best deed, that is, to maximally increase the negentropy serving as the information resources of the Universe. Giving up a separate NMP altogether would allow to have also "Good" and "Evil".

This forces one to consider what I have christened the weak form of NMP (WNMP). Instead of the maximal dimension corresponding to an N-dimensional projector, self can choose also lower-dimensional sub-spaces, and a 1-D sub-space corresponds to the vanishing entanglement and negentropy assumed in standard quantum measurement theory. As a matter of fact, this can also lead to a larger negentropy gain since negentropy depends strongly on the largest power of p dividing the dimension of the resulting eigen sub-space of the density matrix. This could apply also to the purely number theoretical reduction of NMP.

WNMP suggests how to understand the notions of Good and Evil. The various choices in the state function reduction would correspond to a Boolean algebra, which suggests an interpretation in terms of what might be called emotional intelligence. It also turns out that one can understand how the p-adic length scale hypothesis - actually its generalization - emerges from WNMP.

  1. One can start from ordinary quantum entanglement. It corresponds to a superposition of pairs of states. The first state corresponds to the internal state of the self and the second state to a state of the external world or of the biological body of self. In negentropic quantum entanglement each state is replaced with a pair of sub-spaces of the state spaces of self and external world. The dimension of the sub-space depends on which pair is in question. In state function reduction one of these pairs is selected and a deed is done. How to make some of these deeds good and some bad? Recall that WNMP allows only the possibility to generate NE but does not force it. WNMP would be like God allowing the possibility to do good but not forcing good deeds.

    Self can choose any sub-space of the subspace defined by the k≤N-dimensional projector, and a 1-D subspace corresponds to the standard quantum measurement. For k=1 the state function reduction leads to vanishing negentropy, and to a separation of self and the target of the action. Negentropy does not increase in this action and self is isolated from the target: a kind of price for sin.

    For the maximal dimension of this sub-space the negentropy gain is maximal. This deed would be good, and by the proposed criterion NE corresponds to a conscious experience with positive emotional coloring. Interestingly, there are 2^k-1 possible choices, which is almost the dimension of the Boolean algebra consisting of k independent bits (a toy enumeration is given after this list). The excluded option corresponds to the 0-dimensional sub-space - the empty set in the set theoretic realization of Boolean algebra. This could relate directly to the fermionic oscillator operators defining a basis of Boolean algebra - here the Fock vacuum would be the excluded state. The deed in this sense would be a choice of how loving the attention directed to a system of the external world is.

  2. A map of different choices of k-dimensional sub-spaces to k-fermion states is suggestive. The realization of logic in terms of emotions of different degrees of positivity would be mapped to many-fermion states - perhaps zero energy states with vanishing total fermion number. State function reductions to k-dimensional spaces would be mapped to k-fermion states: quantum jumps to quantum states!

    The problem brings in mind quantum classical correspondence in quantum measurement theory. The direction of the pointer of the measurement apparatus (in a very metaphorical sense) corresponds to the outcome of state function reduction, which is now a 1-D subspace. For an ordinary measurement the pointer has k positions. Now it must have 2^k-1 positions. To the discrete space of k pointer positions one must assign the fermionic Clifford algebra of second quantized fermionic oscillator operators. The hierarchy of Planck constants and dark matter suggests the realization: replace the pointer with its k-sheeted space-time covering and consider zero energy states made of pairs of k-fermion states at the sheets of the covering. Dark matter would therefore be necessary for cognition. The role of the fermions would be to "mark" the k space-time sheets in the covering.
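A toy enumeration of the counting above (my own illustration; k=3 is an arbitrary example value): the non-empty subsets of a k-dimensional eigenbasis, each encoded as a fermionic occupation pattern, number 2^k-1.

    from itertools import product

    k = 3   # dimension of the eigen sub-space of the density matrix (example value)

    # Each allowed choice of sub-space spanned by a subset of eigenstates is encoded
    # as an occupation pattern of k fermionic modes; the Fock vacuum (all zeros)
    # corresponds to the excluded empty set.
    choices = [bits for bits in product((0, 1), repeat=k) if any(bits)]
    for bits in choices:
        print(bits)
    print(f"number of choices = {len(choices)} = 2^{k} - 1")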

The cautious conclusion is that NMP as a separate principle is not necessary and follows in the statistical sense from the unavoidable increase of n=heff/h identified as the dimension of the extension of rationals defining the adeles, provided this extension or at least the dimension of its Galois group is observable.

For details see the chapter Negentropy Maximization Principle or the article Re-examination of the basic notions of TGD inspired theory of consciousness.



WCW and the notion of intentional free will

The preparation of an article about number theoretic aspects of TGD forced me to go through various related ideas and led to a considerable integration of the ideas. In this note ideas related directly to consciousness and cognition are discussed.

  1. The adelic approach strongly suggests the reduction of NMP to number theoretic physics, somewhat like the second law reduces to probability theory. The dimension of the extension of rationals, characterizing the hierarchy level of physics and defining an observable measured in state function reductions, is a positive integer and can only increase in statistical sense. Therefore the maximal value of entanglement negentropy increases as new entangling number theoretic degrees of freedom emerge. heff/h=n, identifiable as a factor of the order of the Galois group of the extension, characterizes the number of these degrees of freedom for a given space-time surface as the number of its sheets.
  2. State function reduction has hitherto been assumed to correspond always to a measurement of the density matrix, which can be seen as a reaction of a subsystem to its environment. This makes perfect sense at the space-time level. Higher level measurements occur however at the level of WCW and correspond to a localization to some sector of WCW determining for instance the quantization axes of various quantum numbers. Even the measurement of heff/h=n would measure the dimension of the Galois group and force a localization to an extension with a Galois group of this dimension. These measurements cannot correspond to measurements of a density matrix since different WCW sectors cannot entangle by WCW locality. This finding will be discussed in the following.
The notion of self can be seen as a generalization of the poorly defined notion of observer in quantum physics. In the following I take the role of the skeptic trying to be as critical as possible.

The original definition of self was as a subsystem able to remain unentangled under the state function reductions associated with subsequent quantum jumps. The density matrix was assumed to define the universal observable. Note that a density matrix, which is a power series of a product of matrices representing commuting observables, has in the generic case eigenstates which are simultaneous eigenstates of all the observables. A second aspect of self was assumed to be the integration of subsequent quantum jumps to a coherent whole giving rise to the experienced flow of time.

The precise identification of self allowing to understand both of these aspects turned out to be a difficult problem. I became aware of the solution of the problem in terms of zero energy ontology (ZEO) only rather recently (2014).

  1. Self corresponds to a sequence of quantum jumps integrating to a single unit as in the original proposal, but these quantum jumps correspond to state function reductions to a fixed boundary of the causal diamond (CD) leaving the corresponding parts of zero energy states invariant - "small" state function reductions. The parts of zero energy states at the second boundary of the CD change and even the position of the tip of the opposite boundary changes: one actually has a wave function over the positions of the second boundary (CD sizes, roughly) and this wave function changes. In positive energy ontology these repeated state function reductions would have no effect on the state (Zeno effect), but in the TGD framework a change occurs at the second boundary and gives rise to the experienced flow of time and its arrow, and to self: self is a generalized Zeno effect.
  2. The first quantum jump to the opposite boundary corresponds to the act of "free will" or the birth of the re-incarnated self. Hence the act of "free will" changes the arrow of psychological time at some level of the hierarchy of CDs. The first reduction to the opposite boundary of the CD means the "death" of self and the "re-incarnation" of the time-reversed self at the opposite boundary, at which the temporal distance between the tips of the CD increases in the opposite direction. The sequence of selves and time reversed selves is analogous to a cosmic expansion for the CD. The repeated birth and death of mental images could correspond to this sequence at the level of sub-selves.
  3. This allows to understand the relationship between subjective and geometric time and how the arrow and flow of clock time (psychological time) emerge. The average distance between the tips of the CD increases on the average as long as state function reductions occur repeatedly at the fixed boundary: the situation is analogous to that in diffusion. The localization of the contents of conscious experience to the boundary of the CD gives rise to the illusion that the universe is 3-dimensional. The possibility of memories made possible by the hierarchy of CDs demonstrates that this is not the case. Self is simply the sequence of state function reductions at the same boundary of the CD remaining fixed, and the lifetime of self is the total growth of the average temporal distance between the tips of the CD.
One can identify several rather abstract state function reductions selecting a sector of WCW.
  1. There are quantum measurements inducing localization in the moduli space of CDs with passive boundary and states at it fixed. In particular, a localization in the moduli characterizing the Lorentz transform of the upper tip of CD would be measured. The measured moduli characterize also the analog of symplectic form in M4 strongly suggested by twistor lift of TGD - that is the rest system (time axis) and spin quantization axes. Of course, also other kinds of reductions are possible.
  2. Also a localization to an extension of rationals defining the adeles should occur. Could the value of n=heff/h be an observable? The value of n for a given space-time surface at the active boundary of the CD could be identified as the order of the smallest Galois group containing all the Galois groups assignable to the 3-surfaces at the boundary. The superposition of space-time surfaces would not be an eigenstate of n at the active boundary unless a localization occurs. It is not obvious whether this is consistent with a fixed value of n at the passive boundary.

    The measured value of n could be larger or smaller than the value of n at the passive boundary of the CD, but in the statistical sense n would increase by the analogy with diffusion on the half-line defined by non-negative integers. The distance from the origin unavoidably increases in the statistical sense. This would imply evolution as an increase of the maximal value of negentropy and the generation of quantum coherence in increasingly longer scales.

  3. A further abstract choice corresponds to the replacement of the roles of the active and passive boundary of the CD, changing the arrow of clock time and corresponding to the death of self and re-incarnation as a time-reversed self.
Can one assume that these measurements reduce to measurements of the density matrix of either entangled system, as assumed in the earlier formulation of NMP, or should one allow both options? This question actually applies to all quantum measurements and leads to fundamental philosophical questions unavoidable in all consciousness theories.
  1. Do all measurements involve entanglement between the moduli or extensions of two CDs reduced in the measurement of the density matrix? Non-diagonal entanglement would allow final states which are not eigenstates of the moduli or of n: this looks strange. This could also lead to an infinite regress since it seems that one must assume an endless hierarchy of entangled CDs so that the reduction sequence would proceed from top to bottom. It looks natural to regard a single CD as a sub-Universe.

    For instance, if a selection of the quantization axes of color hypercharge and isospin (a localization in the twistor space of CP2) is involved, one would have an outcome corresponding to a quantum superposition of measurements with different color quantization axes!

    Going philosophical, one can also argue that the measurement of the density matrix is only a reaction to the environment and does not allow intentional free will.

  2. Can one assume that a mere localization in the moduli space or to an extension of rationals (producing an eigenstate of n) takes place for a fixed CD - a kind of self-measurement possible even for an unentangled system? If there is entanglement in these degrees of freedom between two systems (say CDs), it would be reduced in these self-measurements, but the outcome would not be an eigenstate of the density matrix. An interpretation as a realization of intention would be appropriate.
  3. If one allows both options, the interpretation would be that state function reduction as a measurement of the density matrix is only a reaction to the environment, whereas self-measurement represents a realization of intention.
  4. Self-measurements would occur at a higher level, say as a selection of quantization axes, a localization in the moduli space of the CD, or a selection of the extension of rationals. A possible general rule is that measurements at the space-time level are reactions as measurements of the density matrix, whereas a selection of a sector of WCW would be an intentional action. This is because formally the quantum states at the level of WCW, as modes of the classical WCW spinor field, are single particle states. Entanglement between different sectors of WCW is not possible.
  5. If the selections of sectors of WCW at the active boundary of the CD commute with the observables whose eigenstates appear at the passive boundary (briefly, passive observables) - meaning that time reversal commutes with them - they can occur repeatedly during the reduction sequence and self as a generalized Zeno effect makes sense.

    If the selections of WCW sectors at the active boundary do not commute with the passive observables, then volition as a choice of a sector of WCW must change the arrow of time. Libet's findings show that conscious choice induces neural activity a fraction of a second before the conscious choice. This would imply the correspondences: "big" measurement changing the arrow of time - self-measurement at the level of WCW - intentional action, and "small" measurement - measurement at the space-time level - reaction.

    Self as a generalized Zeno effect makes sense only if there are active observables commuting with the passive observables. If the passive observables form a maximal set, new active observables commuting with them must emerge. The increase of the size of the extension of rationals might generate them by expanding the state space, so that self would survive only as long as it evolves. Self would die and re-incarnate when it could not generate any new observables commuting with those assignable to the active boundary to be measured. From personal experience I can say that ageing is basically the loss of the ability to make new choices. When all possible choices are made, all observables are measured or self-measured, it is time to start again.

    Otherwise there would be only a single unitary time evolution followed by a reduction to the opposite boundary. This makes sense only if the sequence of "big" reductions for sub-selves can give rise to the time flow experienced by self: the birth and death of mental images would give rise to the flow of time of self.

The overall conclusion is that the notion of WCW is necessary to understand intentional free will. One must distinguish between measurements at the WCW level as localizations, which do not involve a measurement of the density matrix, and measurements at the space-time level reducible to measurements of the density matrix (taking the density matrix to be a function of a product of commuting observables, one can measure all these observables simultaneously by measuring the density matrix). WCW localizations correspond to intentional actions - say a decision fixing the quantization axis for spin - and space-time reductions correspond to state function reductions at the level of matter. By reading Krishnamurti I learned that eastern philosophies make a sharp distinction between behavior as mere reactivity and behavior as intentional action which is not a reaction. Furthermore, death and reincarnation happen when self has made all choices.

For details see the chapter Negentropy Maximization Principle or the article Re-examination of the basic notions of TGD inspired theory of consciousness.



Anomalies of water as evidence for dark matter in TGD sense

The motivation for this brief comment came from a popular article telling that a new phase of water has been discovered in the temperature range 50-60 °C (see this). Also Gerald Pollack (see this) has introduced what he calls the fourth phase of water. For instance, in this phase water consists of hexagonal layers with effective H1.5O stoichiometry and the phase has a high negative charge. This phase plays a key role in TGD based quantum biology. These two fourth phases of water could relate to each other if there exists a deeper mechanism explaining both these phases and the various anomalies of water.

Martin Chaplin (see this ) has an extensive web page about various properties of water. The physics of water is full of anomalous features and therefore the page is a treasure trove for anyone ready to give up the reductionistic dogma. The site discusses the structure, thermodynamics, and chemistry of water. Even academically dangerous topics such as water memory and homeopathy are discussed.

One learns from this site that the physics of water involves numerous anomalies (see this). The structural, dynamic and thermodynamic anomalies form nested regions in the density-temperature plane. For liquid water at the atmospheric pressure of 1 bar the anomalies appear in the temperature interval 0-100 °C.

Hydrogen bonding creating a cohesion between water molecules distinguishes water from other substances. Hydrogen bonds induce the clustering of water molecules in liquid water. Hydrogen bonding is also highly relevant for the phase diagram of H2O coding for various thermodynamical properties of water (see this). In biochemistry hydrogen bonding is involved with hydration. Bio-molecules - say amino-acids - are classified into hydrophobic, hydrophilic, and amphiphilic ones, and this characterization determines to a high extent the behavior of the molecule in a liquid water environment. Protein folding represents one example of this.

Anomalies are often thought to reduce to hydrogen bonding. Whether this is the case is not obvious to me, and this is why I find water such a fascinating substance.

TGD indeed suggests that water decomposes into ordinary water and dark water consisting of phases with effective Planck constant heff=n×h residing at magnetic flux tubes. Hydrogen bonds would be associated with short and rigid flux tubes, but for larger values of n the flux tubes would be longer by a factor n and have a string tension behaving as 1/n, so that they would be softer and could be loopy. The portion of water molecules connected by flux tubes carrying dark matter could be identified as dark water and the rest would be ordinary water. This model allows to understand various anomalies. The anomalies are largest at the physiological temperature 37 °C, which conforms with the vision about the role of dark matter and dark water in living matter, since the fraction of dark water would be highest at this temperature. The anomalies discussed are the density anomalies, the anomalies of specific heat and compressibility, and the Mpemba effect. I discussed these anomalies already a decade ago. The recent view about dark matter allows however much more detailed modelling.

For details see the chapter Dark Nuclear Physics and Condensed Matter or the article The anomalies of water as evidence for the existence of dark matter in TGD sense.



About number theoretic aspects of NMP

There is something in NMP that I still do not understand: every time I begin to explain what NMP is I have this unpleasant gut feeling. I have the habit of making a fresh start every time rather than pretending that everything is crystal clear. I have indeed considered very many variants of NMP. In the following I will consider two variants of NMP. The second variant reduces to pure number theory in the adelic framework inspired by the number theoretic vision. It is certainly the simplest one since it says nothing explicit about negentropy, yet it says essentially the same as the "strong form of NMP" when the reduction occurs to an eigen-space of the density matrix.

I will not consider zero energy ontology (ZEO) related aspects and the aspects related to the hierarchy of subsystems and selves since I dare regard these as "engineering" aspects.

What should NMP say?

What should NMP state?

  1. NMP takes in some sense the role of God and the basic question is whether we live in the best possible world or not. Theologians ask why God allows sin. I ask whether NMP demands an increase of negentropy always or whether it allows also a reduction of negentropy - and why. Could NMP lead to an increase of negentropy only in the statistical sense - evolution? Could it only give the potential for gaining a larger negentropy?

    These questions have turned out to be highly non-trivial. My personal experience is that we do not live in the best possible world, and this experience plus simplicity motivates the proposal to be discussed.

  2. Is NMP a separate principle or could NMP be reduced to mere number theory? For the latter option state function reduction would occur to an eigenstate/eigenspace of the density matrix only if the corresponding eigenvalue and eigenstate/eigenspace are expressible using numbers in the extension of rationals defining the adele considered. A phase transition to an extension of the extension containing these numbers would be required to make the reduction possible. A step in number theoretic evolution would occur. Also an entanglement of the measured state pairs with those of a measuring system in an extension containing the extension of the extension would make the reduction possible. Negentropy would be reduced, but the higher-D extension would provide the potential for more negentropic entanglement. I will consider this option in the following.
  3. If one has a higher-D eigenspace of the density matrix, the p-adic negentropy is largest for the entire subspace and the sum of real and p-adic negentropies vanishes for all of the subspaces. For negentropy identified as the total p-adic negentropy the strong form of NMP would select the entire sub-space and NMP would indeed say something explicit about negentropy.

The notion of entanglement negentropy

  1. Number theoretic universality demands that the density matrix and entanglement coefficients are numbers in an algebraic extension of rationals, possibly extended by adding a root of e. The induced p-adic extensions are finite-D and one obtains an adele assigned to the extension of rationals. Real physics is replaced by adelic physics.
  2. The same entanglement coefficients in an extension of rationals can be seen as numbers in both the real and the various p-adic sectors. In the real sector one can define the real entropy and in the various p-adic sectors the p-adic negentropies (real valued).
  3. Question: should one define total entanglement negentropy as
    1. the sum of the p-adic negentropies, or
    2. the difference of the sum of p-adic negentropies and the real entropy? For rational entanglement probabilities the real entropy equals the sum of the p-adic negentropies, so the total negentropy would vanish. For extensions this negentropy would be positive under natural additional conditions, as shown earlier.
    Both options can be considered.

State function reduction as universal measurement interaction between any two systems

  1. The basic vision is that state function reductions occur all the time for all kinds of matter and involve a measurement of the density matrix ρ characterizing the entanglement of the system with its environment, leading to a sub-space for which the states have the same eigenvalue of the density matrix. What this measurement really is is not at all clear.
  2. The measurement of the density matrix means diagonalization of the density matrix and a selection of an eigenstate or eigenspace. Diagonalization is possible without going outside the extension only if the entanglement probabilities and the coefficients of the states belong to the original extension defining the adele. This need not be the case!

    More precisely, the eigenvalues of the density matrix as roots of an N:th order polynomial with coefficients in the extension in general belong to an N-D extension of the extension. The same holds for the coefficients of the eigenstates in the original basis. Consider as an example the eigenvalues and eigenstates of a rational valued N×N entanglement matrix, which are roots of a polynomial of degree N and in general algebraic numbers.

    Question: Is state function reduction number theoretically forbidden in the generic case? Could entanglement be stable purely number theoretically? Could NMP reduce to just this number theoretic principle saying nothing explicit about negentropy? Could a phase transition increasing the dimension of the extension but keeping the entanglement coefficients unaffected make the reduction possible? Could entanglement with an external system in a higher-D extension - an intelligent observer - make the reduction possible?

  3. There is a further delicacy involved. The eigen-space of the density matrix can be N-dimensional if the density matrix has an N-fold degenerate eigenvalue with all N entanglement probabilities identical. For a unitary entanglement matrix the density matrix is indeed proportional to the N×N unit matrix. This kind of NE is stable also algebraically if the coefficients of the eigenstates do not belong to the extension. If they do belong to it, then the question is whether NMP allows a reduction to a subspace of an eigen space or whether only the entire subspace is allowed.

    The total negentropy identified as the sum of the real and p-adic negentropies would vanish for any eigenspace and would not distinguish between sub-spaces. Identification of the negentropy as the p-adic negentropy would distinguish between sub-spaces, and NMP in its strong form would not allow a reduction to sub-spaces. Number theoretic NMP would thus also say something about negentropy.

    I have also considered the possibility of a weak form of NMP. Any subspace could be selected and negentropy would be reduced. The worst thing to do in this case would be a selection of a 1-D subspace: entanglement would be totally lost and the system would be totally isolated from the rest of the world. I have proposed that this possibility corresponds to the fact that we do not seem to live in the best possible world.

NMP as a purely number theoretic constraint?

Let us consider the possibility that NMP reduces to the number theoretic condition tending to stabilize generic entanglement.

  1. Density matrix characterizing entanglement with the environment is a universal observable. Reduction can occur to an eigenspace of the density matrix. For rational entanglement probabilities the total negentropy would vanish so that NMP formulated in terms of negentropy cannot say anything about the situation. This suggests that NMP quite generally does not directly refer to negentropy.
  2. The condition that the eigenstates and eigenvalues are in the extension of rationals defining the adelic physics poses a restriction. The reduction could occur only if these numbers are in the original extension. Also rational entanglement would be stable in the generic case, and a phase transition to a higher algebraic extension would be required for state function reduction to occur. Standard quantum measurement theory would be obtained when the coefficients of the eigenstates and the entanglement probabilities are in the original extension.
  3. If this is not the case, a phase transition to an extension of the extension containing the needed N-D extension could save the situation. This would be a step in number theoretic evolution. The reduction would lead to a reduction of negentropy but would give the potential for gaining a larger entanglement negentropy. Evolution would proceed through catastrophes giving the potential for more negentropic entanglement! This seems to be the case!

    Alternatively, the state pairs of the system + complement could be entangled with an observer in an extension of rationals containing the needed N-D extension of the extension, and a state function reduction possible for the observer would induce the reduction in the original system. This would mean a fusion with a self at a higher level of the evolutionary hierarchy - a kind of enlightenment. This would give an active role to the intelligent observer (intelligence characterized by the dimension of the extension of rationals). The intelligent observer would reduce the negentropy and thus NMP would not hold true universally.

    Since a higher-D extension allows higher negentropy and in the generic case NE is stable, one might hope that NMP holds true statistically (for rationals the total negentropy as the sum of the real negentropy and the total p-adic negentropy vanishes).

    The Universe would evolve rather than being a paradise: the number theoretic NMP would allow a temporary reduction of negentropy but provide the potential for larger negentropy, and the increase of negentropy in the statistical sense is highly suggestive. To me this option looks like the simplest and most realistic one.

  4. If negentropy is identified as the total p-adic negentropy rather than the sum of the real and p-adic negentropies, the strong form of NMP says something explicit about negentropy: the reduction would take place to the entire subspace having the largest p-adic negentropy.

For background see the chapter Negentropy Maximization Principle or the article About number theoretic aspects of NMP.



To the index page