What's new in


Note: Newest contributions are at the top!

Year 2017

More about dark nucleosynthesis

In the sequel a more detailed view of dark nucleosynthesis is developed using the information provided by the first book of Krivit. This information also makes it possible to develop the nuclear string model in much more detail and to connect CF/LENR with the so-called X boson anomaly and other nuclear anomalies.

1. Not only sequences of dark protons but also of dark nuclei are involved

Are only dark proton sequences at magnetic flux tubes involved, or can these sequences consist of nuclei, so that one would have a nucleus consisting of nuclei? From the first book I learned that the experiments of Urutskoev demonstrate that there are 4 peaks for the production rate of elements as a function of atomic number Z. Furthermore, the amount of mass assignable to the transmuted elements is nearly the mass lost from the cathode. Hence also cathode nuclei should end up at the flux tubes.

  1. Entire target nuclei can become dark in the sense described, end up at the same magnetic flux tubes as the protons coming from the bubbles of the electrolyte, and participate in dark nuclear reactions with the incoming dark nuclei: the dark nuclear energy scale would be much smaller than MeV. For a heavy water electrolyte D must become a dark nucleus: the distance between p and n inside D would remain the usual one. A natural expectation is that the flux tubes connect the EZs and the cathode.

    In the transformation to ordinary nuclear matter these nuclei of nuclei would fuse to ordinary nuclei and liberate the nuclear energy associated with the formation of ordinary nuclear bonds.

  2. The transformation of protons to neutrons in strong electric fields, observed already by Sternglass in 1951, could be understood as the formation of flux tubes containing dark nuclei, which produce neutrons in their decays to ordinary nuclei. The needed voltages are in the kV range, suggesting that the scale of dark nuclear binding energy is of order keV, implying heff/h = n ∼ 2^11 - roughly the ratio mp/me.
  3. Remarkably, also in ordinary nuclei the flux tubes connecting nucleons to a nuclear string would be long, much longer than the nucleon Compton length (see this and this). By the ordinary Uncertainty Principle (heff=h), the length of the flux tube to which the binding energy is assigned would correspond to a nuclear binding energy scale of order a few MeV. This would also be the distance between the dark heff=n× h nuclei forming the dark nuclear string! The binding energy would be scaled down by 1/n.

    This suggests that the n→ 1 phase transition does not affect the lengths of the flux tubes but only turns them into loops, and that the distance between nucleons as measured in M4× CP2 is therefore scaled down by 1/n. Coulomb repulsion between protons does not prevent this if the electric flux between protons is channelled along the long flux tubes rather than along the larger space-time sheet, so that the repulsive Coulomb interaction energy is not affected in the phase transition! This line of thought obviously involves the notion of space-time as a 4-surface in a crucial manner.

  4. Dark nuclei could also have ordinary nuclei as building bricks, in accordance with the fractality of TGD. Nuclei at dark flux tubes would be ordinary, and the flux tube portions - the bonds between them - would have large heff and thus a length considerably longer than in ordinary nuclei. This would give sequences of ordinary nuclei with dark binding energy: a similar situation is actually assumed to hold true for the nucleons of ordinary nuclei, connected by analogs of dark mesons with masses in the MeV range (see this).
Remark: In the TGD inspired model for quantum biology, dark variants of biologically important ions are assumed to be present. Dark proton sequences having as their basic entangled unit 3 protons, analogous to a DNA triplet, would represent analogs of DNA, RNA, amino-acids and tRNA (see this). Genetic code would be realized already at the level of dark nuclear physics, and the bio-chemical realization would represent a kind of shadow dynamics. The number of dark codons coding for a given dark amino-acid would be the same as in the vertebrate genetic code.
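As a back-of-the-envelope check, the scalings invoked above can be verified numerically. This is only a sketch: the particle masses are standard values, and the 7 MeV binding energy is an illustrative ordinary nuclear scale, not a number from the text.

```python
# Numeric sanity check of the heff = n*h scalings above (standard particle
# data; the 7 MeV binding energy is an illustrative ordinary nuclear scale).
M_P_MEV = 938.272        # proton rest mass
M_E_MEV = 0.511          # electron rest mass
HBARC_MEV_FM = 197.327   # hbar*c in MeV*fm

n = 2**11  # proposed value of heff/h

# n = 2^11 = 2048 should be roughly the proton/electron mass ratio
mass_ratio = M_P_MEV / M_E_MEV
print(n, round(mass_ratio))           # 2048 vs 1836

# Dark binding energy: the ordinary few-MeV scale divided by n lands in keV
E_dark_keV = 7.0 / n * 1e3
print(f"dark binding scale ~ {E_dark_keV:.1f} keV")

# Dark proton Compton length: lambda_p * n should be near the electron
# Compton length, matching the dark proton spacing inferred from Holmlid
lambda_p_fm = HBARC_MEV_FM / M_P_MEV  # reduced proton Compton length, ~0.21 fm
lambda_e_fm = HBARC_MEV_FM / M_E_MEV  # reduced electron Compton length, ~386 fm
print(f"(lambda_p * n) / lambda_e = {lambda_p_fm * n / lambda_e_fm:.2f}")
```

The last ratio is why n = 2^11 keeps appearing: scaling the proton Compton length by 2^11 lands within roughly 10 percent of the electron Compton length.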

2. How are dark nuclei transformed to ordinary nuclei?

What happens in the transformation of dark nuclei to ordinary ones? Nuclear binding energy is liberated, but how does this occur? If gamma rays are generated, one should also invent a mechanism transforming the gamma rays to thermal radiation. The findings of Holmlid provide valuable information here, lead to a detailed qualitative view of the process, and also allow to sharpen the model for ordinary nuclei.

  1. Holmlid (see this and this) has reported the rather strange finding that muons (mass 106 MeV), pions (mass 140 MeV) and even kaons (mass 497 MeV) are emitted in the process. This does not fit at all with ordinary nuclear physics, whose natural binding energy scale is a few MeV. It could be that a considerable part of the energy is liberated as mesons decaying to lepton pairs (pions also to gamma pairs), but with energies much above the upper bound of about 7 MeV for the range of energies missing from the detected gamma ray spectrum (this is discussed in the first part of the book of Krivit). As if hadronic interactions would enter the game somehow! Even condensed matter physics and nuclear physics at the same coffee table are too much for a mainstream physicist!
  2. What happens when the liberated total binding energy is below the pion mass? There is experimental evidence for what is called the X boson (see this), discussed from the TGD point of view here. In the TGD framework X is identified as a scaled-down variant π(113) of the ordinary pion π=π(107). X is predicted to have mass m(π(113)) = 2^(-(113-107)/2) m(π) ≈ 16.68 MeV, which conforms with the mass estimate for the X boson. Note that k=113 resp. k=107 corresponds to the nuclear resp. hadronic p-adic length scale. For low mass transmutations the binding energy could be liberated by emission of X bosons and gamma rays.
  3. I have also proposed that the pion and also other neutral pseudo-scalar states could have p-adically scaled variants with masses differing by powers of two. For the pion the scaled variants would have masses 8.5 MeV, m(π(113))= 17 MeV, 34 MeV, 68 MeV, m(π(107))= 136 MeV, ..., and also these could be emitted and decay to lepton pairs or gamma pairs (see this). The emission of scaled pions could be a faster process than the emission of gamma rays and would allow the binding energy to be emitted with a minimum number of gamma rays.
There is indeed evidence for pion-like states (for TGD inspired comments see this).
  1. The experimental claim of Tatischeff and Tomasi-Gustafsson is that the pion is accompanied by pion-like states organized on a Regge trajectory and having masses 60, 80, 100, 140, 181, 198, 215, 227.5, and 235 MeV.
  2. A further piece of evidence for scaled variants of the pion comes from two articles by Eef van Beveren and George Rupp. The first article is titled First indications of the existence of a 38 MeV light scalar boson. The second article has the title Material evidence of a 38 MeV boson.
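The p-adic mass ladder quoted above can be spelled out in a few lines. This is a sketch assuming the scaling rule m(π(k)) = 2^((107-k)/2) m(π(107)) with m(π(107)) taken as 136 MeV, as in the list above:

```python
# Sketch of the p-adic mass scaling rule behind the ladder above:
# m(pi(k)) = 2**((107 - k)/2) * m(pi(107)), with m(pi(107)) ~ 136 MeV.
m_pi_107 = 136.0  # MeV, ordinary pion at p-adic index k = 107

def scaled_pion_mass(k):
    """Mass (MeV) of the p-adically scaled pion with index k."""
    return 2 ** ((107 - k) / 2) * m_pi_107

# Reproduces the ladder quoted in the text: 8.5, 17, 34, 68, 136 MeV
for k in (115, 113, 111, 109, 107):
    print(k, scaled_pion_mass(k))
# k = 113 (the nuclear p-adic scale) gives 17 MeV, the X boson mass estimate
```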
The above picture suggests that the pieces of dark nuclear string connecting the nucleons are looped and that the nucleons collapse to a region of nuclear size. On the other hand, the emission of mesons suggests that these pieces contract to much shorter pieces, with length of the order of the Compton length of the meson responsible for the binding, and that the binding energy is emitted as a single quantum or very few quanta. Strings cannot however retain their length (albeit becoming looped, with ends very near to each other in M4× CP2) and contract at the same time! How could one unify these two conflicting pictures?
  1. To see how TGD could solve the puzzle, consider what elementary particles look like in the TGD Universe (see this). Elementary particles are identified as two-sheeted structures consisting of two space-time sheets with Minkowskian signature of the induced metric, connected by CP2-sized wormhole contacts with Euclidian signature of the induced metric. One has a pair of wormhole contacts, and both of them have two throats analogous to black hole horizons serving as carriers of elementary particle quantum numbers.

    Wormhole throats correspond to homologically non-trivial 2-surfaces of CP2 and are therefore Kähler magnetically charged monopole-like entities. A wormhole throat at a given space-time sheet is necessarily connected by a monopole flux tube to another throat, now the throat of the second wormhole contact. The flux tubes must be closed and therefore consist of 2 "long" pieces connecting wormhole throats at different parallel space-time sheets, plus 2 wormhole contacts of CP2 size scale connecting these pieces at their ends. The structure resembles an extremely flattened rectangle.

  2. The alert reader can guess the solution of the puzzle now. The looped string corresponds to the string portion at the non-contracted space-time sheet and the contracted string to that at the contracted space-time sheet! The first sheet could have the ordinary value of Planck constant but a larger p-adic length scale, of the order of the electron's p-adic length scale L(127) (it could correspond to the magnetic body of the ordinary nucleon (see this)), and the second sheet could correspond to a heff=n× h dark variant of the nuclear space-time sheet with n=2^11, so that the size scales are the same.

    The phase transition heff→ h occurs only for the flux tubes of the second space-time sheet, reducing the size of this space-time sheet to that of the nuclear k=137 space-time sheet, of size ∼ 10^-14 meters. The portions of the flux tubes at this space-time sheet become short, at most of the order of the nuclear size scale, which roughly corresponds to the pion Compton length. The contraction is accompanied by the emission of the ordinary nuclear binding energy as pions, their scaled variants, and even heavier mesons, provided that the mass of the dark nucleus is large enough to guarantee that the total binding energy makes the emission possible. The first space-time sheet retains its size; the flux tubes at it retain their length but become loopy, since their ends must follow the ends of the shortened flux tubes.

  3. If this picture is correct, most of the energy produced in the process could be lost as mesons, possibly also as their scaled variants. One should have some manner of preventing the leakage of this energy from the system in order to make the process an effective energy producer.
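As a quick sanity check of the claim above that the contracted flux-tube pieces are at most of nuclear size and that this roughly matches the pion Compton length (standard meson mass, not taken from the text):

```python
# Quick check that the nuclear size scale roughly matches the pion Compton
# length, as claimed above (standard values, not taken from the text).
HBARC_MEV_FM = 197.327   # hbar*c in MeV*fm
m_pi = 139.57            # charged pion mass in MeV

lambda_pi_fm = HBARC_MEV_FM / m_pi   # reduced pion Compton length
print(f"pion Compton length ~ {lambda_pi_fm:.2f} fm")

# The k=137 nuclear p-adic scale quoted above is ~1e-14 m = 10 fm, so the
# contracted flux-tube pieces (at most nuclear size) span a few such lengths.
print(f"nuclear scale / pion Compton length ~ {10.0 / lambda_pi_fm:.1f}")
```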
This is only a rough overall view, and it would be unrealistic to regard it as final: one can indeed imagine variations. But even in its recent rough form it seems to be able to explain all the weird-looking aspects of CF/LENR/dark nucleosynthesis.

See the chapter Cold fusion again or the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis?

Comparison of Widom-Larsen model with TGD inspired models of CF/LENR or whatever it is

I cannot avoid the temptation to compare WL to my own dilettante models, for which WL has also served as an inspiration. I have two models explaining these phenomena in the TGD Universe. Both models rely on the hierarchy of Planck constants heff=n× h (see this and this), explaining dark matter as ordinary matter in heff=n× h phases emerging at quantum criticality. heff implies scaled-up Compton lengths and other quantal lengths, making possible quantum coherence in longer scales than usually.

The hierarchy of Planck constants heff=n× h has now a rather strong theoretical basis and reduces to number theory (see this). Quantum criticality would be essential for the phenomenon and could explain the critical doping fraction of the cathode by D nuclei. Quantum criticality could also help to explain the difficulties in replicating the effect.

1. Simple modification of WL does not work

The first model is a modification of WL and relies on a dark variant of weak interactions. In this case LENR would be an appropriate term.

  1. Concerning the rate of the weak process e+p→ n+ν, the situation changes if heff is large enough, and rather large values are indeed predicted. heff could be large also for weak gauge bosons in the situation considered. Below their Compton length weak bosons are effectively massless, and this scale would be scaled up by a factor n=heff/h to almost atomic scale. This would make weak interactions as strong as electromagnetic interactions and long-ranged below the Compton length, and the transformation of proton to neutron would be a fast process. After that, a nuclear reaction sequence initiated by the neutron would take place as in WL. There is no need to assume that the neutrons are ultraslow, but the electron mass remains a problem. Note that also the proton mass could be higher than normal, perhaps due to Coulomb interactions.
  2. As such this model does not solve the problem related to the too small electron mass. Nor does it solve the problem posed by gamma ray production.

2. Dark nucleosynthesis

Also the second TGD inspired model involves the heff hierarchy. Now LENR is not an appropriate term: the most interesting things would occur at the level of dark nuclear physics, which is now a key part of TGD inspired quantum biology.

  1. One piece of inspiration comes from the exclusion zones (EZs) of Pollack (see this) (see this and this), which are negatively charged regions (see this, this, and this).

    Also the work of the group of Prof. Holmlid (see this and this), not yet included in the book of Krivit, was of great help. The TGD proposal (see this and this) is that the protons causing the ionization go to magnetic flux tubes, which have an interpretation in terms of space-time topology in the TGD Universe. At the flux tubes they have heff=n× h and form dark variants of nuclear strings, which are the basic structures also for ordinary nuclei but would now have almost atomic size scale.

  2. The sequences of dark protons at the flux tubes would give rise to dark counterparts of ordinary nuclei, proposed to be also nuclear strings but with dark nuclear binding energy, whose scale is measured using MeV/n, n=heff/h, as the natural unit rather than MeV. The most plausible interpretation is that the field body/magnetic body of the nucleus has heff= n× h and is scaled up in size. n=2^11 is favoured by the fact that from Holmlid's experiments the distance between dark protons should be about the electron Compton length.

    Besides protons, also deuterons and even heavier nuclei can end up at the magnetic flux tubes. They would however preserve their size, and only the distances between them would be scaled up to about the electron Compton length, on the basis of the data provided by Holmlid's experiments (see this and this).

    The reduced binding energy scale could solve the problems caused by the absence of gamma rays: instead of gamma rays one would have much less energetic photons, say X rays assignable to n=2^11 ≈ mp/me. For infrared radiation the energy of the photons would be about 1 eV, and the nuclear energy scale would be reduced by a factor of about 10^-6-10^-7: one cannot exclude this option either. In fact, several options can be imagined, since the entire spectrum of heff is predicted. This prediction is testable.

    Large heff would also induce quantum coherence in a scale between the electron Compton length and the atomic size scale.

  3. The simplest possibility is that the protons are just added to the growing nuclear string. In each addition one has (A,Z)→ (A+1,Z+1). This is exactly what happens in the mechanism proposed by Widom and Larsen, whose simplest reaction sequences already explain reasonably well the spectrum of end products.

    In WL the addition of a proton is a four-step process. First e+p→ n+ν occurs at the surface of the cathode. This requires a large electron mass renormalization and a fine-tuning of the electron mass to be very nearly equal to, but higher than, the n-p mass difference.

    There is no need for these questionable assumptions of WL in TGD. Even the assumption that weak bosons correspond to a large heff phase might not be needed, but cannot be excluded on the basis of the available data. The implication would be that the dark proton sequences decay rather rapidly to beta-stable nuclei if a dark variant of p→ n is possible.

  4. EZs and the accompanying flux tubes could be created also in the electrolyte: perhaps in the region near the cathode, where bubbles are formed. For flux tubes leading from the system to the external world, most of the fusion products as well as the liberated nuclear energy would be lost. This could partially explain the poor replicability of the claims about energy production. Some flux tubes could however end at the surface of the catalyst under some conditions. Even in this case the particles emitted in the transformation to ordinary nuclei could be such that they leak out of the system, and Holmlid's findings indeed support this possibility.

    If there are negatively charged surfaces present, the flux tubes can end at them, since the positively charged dark nuclei at the flux tubes, and therefore the flux tubes themselves, would be attracted by these surfaces. The most obvious candidate is the catalyst surface, to which electronic charge waves were assigned by WL. One can wonder whether already Tesla observed in his experiments the leakage of dark matter to various surfaces of the laboratory building. In the collision with the catalyst surface the dark nuclei would transform to ordinary nuclei, releasing all the ordinary nuclear binding energy. This could create the reported craters at the surface of the target and cause heating. One cannot of course exclude that nuclear reactions take place between the reaction products and the target nuclei. It is quite possible that most dark nuclei leave the system.

    It was in fact Larsen who realized that there are electronic charge waves propagating along the surface of some catalysts, and that for good catalysts such as gold they are especially strong. This suggests that electronic charge waves play a key role in the process. The proposal of WL is that due to the positive electromagnetic interaction energy the dark protons of dark nuclei could have a rest mass higher than that of the neutron (just as in ordinary nuclei), and the reaction e+p→ n+ν would become possible.

  5. Spontaneous beta decays of protons could take place inside dark nuclei just as they occur inside ordinary nuclei. If the weak interactions are as strong as electromagnetic interactions, dark nuclei could rapidly transform to beta-stable nuclei containing neutrons: this is also a testable prediction. Also the dark strong interactions would proceed rather fast, and the dark nuclei at magnetic flux tubes could be stable in the final state. If dark stability means the same as ordinary stability, then also the isotope-shifted nuclei would be stable. There is evidence that this is the case.
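The photon energy scales implied by different values of n = heff/h, as discussed in the list above, can be tabulated directly. This is a sketch using an illustrative 7 MeV gamma energy for the ordinary nuclear case; the particular values of n are the ones mentioned in the text:

```python
# Tabulating the photon energy scales for a few values of n = heff/h,
# as discussed above (7 MeV is an illustrative ordinary gamma energy).
E_gamma_MeV = 7.0

cases = {"n = 2^11 (~ mp/me)": 2**11,
         "n = 10^6": 10**6,
         "n = 10^7": 10**7}

for label, n in cases.items():
    E_eV = E_gamma_MeV * 1e6 / n   # dark-scale photon energy in eV
    print(f"{label}: {E_eV:.3g} eV")
# n = 2^11 gives a few keV (X rays); n = 1e6..1e7 gives ~0.7-7 eV (visible/IR)
```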
Neither "CF" nor "LENR" is an appropriate term for the TGD inspired option. One would not have ordinary nuclear reactions: nuclei would be created as dark proton sequences, and the nuclear physics involved has a considerably smaller energy scale than usually. This mechanism could allow at least the generation of nuclei heavier than Fe, which is not possible inside stars, so that supernova explosions would not be needed to achieve this. The observation that transmuted nuclei are observed in four bands of nuclear charge Z, irrespective of the catalyst used, suggests that the catalyst itself does not determine the outcome.

One can of course wonder whether even "transmutation" is an appropriate term now. Dark nucleosynthesis, which could in fact also be the mechanism of ordinary nucleosynthesis outside stellar interiors, explaining how elements heavier than iron are produced, might be a more appropriate term.

See the chapter Cold fusion again or the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis?

Three books about cold fusion/LENR

Steven Krivit has written three books - or one book in three parts, as you wish - about cold fusion (shortly CF in the sequel) - or low energy nuclear reactions (LENR), which is the prevailing term nowadays and the one preferred by Krivit. The term "cold fusion" can be defended only for historical reasons: the process cannot be cold fusion. LENR relies on the Widom-Larsen model (WL), which tries to explain the observations using only the existing nuclear and weak interaction physics. Whether LENR is here to stay is still an open question. TGD suggests that even this interpretation is not appropriate: the nuclear physics involved would be dark and associated with heff=n× h phases of ordinary matter having an identification as dark matter. Even the term "nuclear transmutation" would be challenged in the TGD framework, and "dark nuclear synthesis" looks a more appropriate term.

The books were a very pleasant surprise for many reasons, and they have allowed me to develop my own earlier overall view by adding important details and missing pieces, and to understand the relationship to the Widom-Larsen model (WL).

1. What are the books about?

There are three books.

  1. "Hacking the Atom: Explorations in Nuclear Research, vol I" considers the developments between 1990 and 2006. The first key theme is the tension between two competing interpretations. On one hand, the interpretation as CF, necessarily involving new physics besides ordinary nuclear fusion and plagued by a direct contradiction with the expected signatures of fusion processes, in particular those of D+D→ 4He. On the other hand, the interpretation as LENR in the framework of WL, in which no new physics is assumed and neutrons and weak interactions are in a key role.

    The second key theme is the tension between two competing research strategies.

    1. The first strategy tried to demonstrate convincingly that heat is produced in the process - commercial applications were the basic goal. This led to many premature declarations about a solution of energy problems within a few years and provided excellent weapons for the academic world opposing cold fusion on the basis of textbook wisdom.
    2. The second strategy studied the reaction products and demonstrated convincingly that nuclear transmutations (isotopic shifts) took place. This aspect did not receive attention in public, and the attempts to ridicule the field directed attention to the first approach and to the use of the term "cold fusion".
    According to Krivit, the CF era ended around 2006, when Widom and Larsen proposed their model in which LENR would be the mechanism (see this). The Widom-Larsen model (WL) can however be criticized for some unnatural-looking assumptions: the electron is required to have a renormalized mass considerably higher than the real mass, and the neutrons initiating the nuclear reactions are assumed to have ultralow energies below the thermal energy of the target nuclei. This requires the electron mass to be larger than, but extremely near to, the neutron-proton mass difference (see this, this, this, and this). The gamma rays produced in the process are assumed to transform to infrared radiation.

    In my view, WL is not the end of the story. New physics is required. For instance, the work of professor Holmlid and his team (see this and this) has provided fascinating new insights into what might be the mechanism behind what has been called nuclear transmutations.

  2. "Fusion Fiasco: Explorations in Nuclear Research, vol II" discusses the developments during 1989, when cold fusion was discovered by Fleischmann and Pons (see this) and interpreted as CF. It soon turned out that the interpretation has deep problems, and CF got the label of pseudoscience.
  3. "Lost History: Explorations in Nuclear Research, vol III" tells about a surprisingly similar sequence of discoveries, which has been cleaned away from the history books of science because it did not fit with the emerging view of nuclear physics and condensed matter physics as completely separate disciplines. Although I had seen some remarks about this era, I had not become aware of what really happened. It seems that discoveries can be accepted only when the time is mature for them, and it is far from clear whether the time is ripe even now.
What I say in the sequel necessarily reflects my limitations as a dilettante in the field of LENR/CF. My interest in the topic has lasted for about two decades and comes from different sources: LENR/CF is an attractive application for the unification of fundamental interactions that I have been developing for four decades now. This unification predicts a lot of new physics - not only at the Planck length scale but in all length scales - and it is of course fascinating to try to understand LENR/CF in this framework.

For instance, while reading the books, I realized that my own references to the literature have been somewhat random and not always appropriate. I do not have any systematic overall view of what has been done in the field: here the books do a wonderful service. It was a real surprise to find that the first evidence for transmutations/isotope shifts emerged already about a century ago, and also how soon isotope shifts were re-discovered after the Pons-Fleischmann discovery. The insistence on the D+D→ 4He fusion model remains for an outsider as mysterious as the refusal of mainstream nuclear physicists to consider the possibility of new nuclear physics. One new valuable bit of information was the evidence that it is the cathode material that transforms to the isotope-shifted nuclei: this helped me to develop my own model in more detail.

Remark: A comment concerning the terminology. I agree with the author that cold fusion is not a precise or even correct term. I have myself taken CF as nothing more than a letter sequence and defended this practice to myself as a historical convention. My conviction is that the phenomenon in question is not nuclear fusion, but I am not at all convinced that it is LENR either. Dark nucleosynthesis is my own proposal.

What did I learn from the books?

Needless to say, the books are extremely interesting, for both the layman and the scientist - say a physicist or a chemist. The books provide a very thorough view of the history of the subject. There is also an extensive list of references to the literature. Since I am not an experimentalist and feel like a dilettante in this field as a theoretician, I am unable to check the correctness and reliability of the data presented. In any case, the overall view is consistent with what I have learned about the situation over the years. My opinion about WL is however different.

I have been working with ideas related to CF/LENR (or nuclear transmutations) for a long time, but found that the books also provided completely new information, and I became aware of some new critical points.

I have had a rather imbalanced view of transmutations/isotopic shifts, and it was a surprise to see that they were discovered already in 1989, when Fleischmann and Pons published their work. Even more, the premature discovery of transmutations a century ago (1910-1930), interpreted by Darwin as a collective effect, was new to me. Articles about transmutations were published in prestigious journals like Nature and Naturwissenschaften. The written history is however the history of the winners, and all traces of this episode disappeared from the history books of physics after the standard model of nuclear physics, which assumes that nuclear physics and condensed matter physics are totally isolated disciplines. The developments after the establishment of the standard model relying on the GUT paradigm look to me surprisingly similar.

Sternglass - still a graduate student - wrote around 1947 to Einstein about his preliminary ideas concerning the possibility of transforming protons to neutrons in strong electric fields. It came as a surprise to Sternglass that Einstein supported his ideas. I must say that this increased my respect for Einstein even further. Einstein's physical intuition was marvellous. In 1951 Sternglass found that with strong voltages in the keV range protons could be transformed to neutrons at an unexpectedly high rate. This is strange, since the process is kinematically impossible for free protons: it can however be seen as support for the WL model.

Also scientists are humans, with their human weaknesses and strengths, and the history of CF/LENR is full of examples of both the light and the dark sides of human nature. Researchers are fighting for funding, and the successful production of energy was also the dream of many people involved. There were also people who saw CF/LENR as a quick way to become a millionaire. Getting a glimpse of this dark side was rewarding. The author knows most of the influential people who have worked in the field, and this gives special authenticity to the books.

It was a great service for the reader that the basic view of what happened was stated clearly in the introduction. I noticed also that with some background one can pick up any section and start to read: this is a service for a reader like me. I would perhaps have divided the material into separate parts, but probably the author's less bureaucratic choice, leaving room for surprise, is better after all.

Who should read these books? The books would be a treasure for any physicist ready to challenge the prevailing prejudices and to learn what science looks like as seen from the kitchen side. Probably this period will be seen in the future as very much analogous to the period leading to the birth of atomic physics and quantum theory. Also the layman could enjoy reading the books; especially the stories about the people involved - both the scientists and those funding the research and the academic power holders - are fascinating. The history of cold fusion is a drama which one can see as a fight between Good and Evil, eventually realizing that also Good can divide into Good and Evil. This story teaches a lot about the role of egos in all branches of science and in all human activities. Highly rationally behaving science professionals can suddenly start to behave completely irrationally when their egos feel threatened.

My hope is that the books could wake up mainstream colleagues to finally realize that CF/LENR - or whatever you wish to call it - is not pseudoscience. Most workers in the field are highly competent, intellectually honest, and have had such a deep passion for understanding Nature that they have been ready to suffer all the humiliations that the academic hegemony can offer to dissidents. The results about nuclear transmutations are genuine and pose a strong challenge to the existing physics, and in my opinion force us to give up the naive reductionistic paradigm. People building unified theories of physics should be keenly aware of these phenomena, which challenge the reductionistic paradigm even at the level of nuclear and condensed matter physics.

2. The problems of WL

For me the first book, representing the state of CF/LENR as it was around 2004, was the most interesting. In his first book Krivit sees the 1990-2004 period as a gradual transition from the cold fusion paradigm to the realization that nuclear transmutations occur and that the fusion model does not explain this process.

The basic assumption of the simplest fusion model was that the fusion D+D → 4He explains the production of heat. This excluded the possibility that the phenomenon could take place also in light water, with deuterium replaced by hydrogen. It however turned out that also ordinary water allows the process. The basic difficulty is of course the Coulomb wall, but the model has also difficulties with the reaction signatures, and the production rate of 4He is too low to explain the heat production. Furthermore, gamma rays accompanying 4He production were not observed. The occurrence of transmutations is a further problem. Production of Li was observed already in 1989, and later the Russian trio Kucherov, Savvatimova and Karabut detected tritium, 4He, and heavy elements. They also observed modifications at the surface of the cathode down to a depth of .1-1 micrometers.

Krivit sees LENR as a more realistic approach to the phenomena involved. In LENR the Widom-Larsen model (WL) is the starting point. This would involve no new nuclear physics. I also see WL as a natural starting point, but I am skeptical about understanding CF/LENR in terms of existing physics. Some new physics seems to be required, and I have been doing intense propaganda for a particular kind of new physics (see this).

WL assumes that the weak process proton (p) → neutron (n), occurring via e+p → n+ν (e denotes the electron and ν the neutrino), is the key step in cold fusion. After this step the neutron finds its way to a nucleus easily, and the process continues conventionally as an analog of the r-process, assumed to give rise to elements heavier than iron in supernova explosions, leading to the observed nuclear transmutations. Essentially one proton is added in each step, which decomposes into four sub-steps involving the beta decay n → p and its reversal.

There are however problems.

  1. Already the observations of Sternglass suggest that e+p → n+ν occurs. The reaction is however kinematically impossible for free particles: e should have a considerably higher effective mass, perhaps caused by collective many-body effects. e+p → n+ν could occur in the negatively charged surface layer of the cathode provided the sum of the rest masses of e and p is larger than that of n. This requires a rather large renormalization of the electron mass, claimed to be due to the presence of strong electric fields. Whether there really exists a mechanism increasing the effective mass of the electron is far from obvious; strong electric fields are proposed to cause it.
  2. A second problematic aspect of WL is the extreme slowness of the beta decay transforming proton to neutron. For ultraslow neutrons the cross section for the absorption of a neutron by a nucleus increases as 1/vrel, vrel the relative velocity, and in principle this could compensate the extreme slowness of the weak decays. The proposal is that the neutrons are ultraslow. This is satisfied if the sum of the rest masses of e and p is only slightly larger than the neutron mass. One would have mE ≈ mn-mp+Δ En, where Δ En is the kinetic energy of the neutron. To obtain the correct order of magnitude for the rate of neutron absorption, Δ En should indeed be extremely small. One should have Δ E = 10^-12 eV, giving Δ E/mp = 10^-21! This requires fine tuning, and it is difficult to believe that the electric field causing the renormalization could be so precisely tuned.

    Δ E corresponds to an extremely low temperature of about 10^-8 K: it is hard to imagine this at room temperature. The thermal energy of the target nucleus at room temperature is of order 10^-11 A mp, A the mass number. Hence it would seem that the thermal motion of the target nuclei masks the effect.

  3. One should also understand why gamma rays emitted in the ordinary nuclear interactions after neutron absorption are not detected. The proposal is that gamma rays somehow transform to infrared photons, which would cause the heating. This would be a collective effect involving quantum entanglement of electrons. One might hope that by quantum coherence the neutron absorption rate could be proportional to N2 instead of N, where N is the number of nuclei involved. This looks logical but I am not convinced about the physical realizability of this proposal.
In my opinion these objections are really serious.
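The fine-tuning argument above can be checked with a few lines of arithmetic. A minimal sketch, assuming the value Δ E = 10^-12 eV quoted in the text and room temperature T = 300 K; everything else is standard constants:

```python
# Sanity check of the Widom-Larsen fine-tuning numbers quoted above.
K_B = 8.617333e-5   # Boltzmann constant in eV/K
M_P = 938.272e6     # proton rest mass in eV

delta_E = 1e-12     # required neutron kinetic energy in eV (from the text)

ratio = delta_E / M_P       # the quoted Delta_E/m_p ratio
T_eff = delta_E / K_B       # temperature corresponding to Delta_E
E_thermal = K_B * 300.0     # thermal energy scale at room temperature

print(f"Delta_E/m_p      = {ratio:.1e}")              # ~1.1e-21
print(f"T_eff            = {T_eff:.1e} K")            # ~1.2e-8 K
print(f"kT(300K)/Delta_E = {E_thermal/delta_E:.1e}")  # thermal motion wins by ~10 orders
```

The output reproduces the numbers in the text: Δ E/mp ∼ 10^-21, an effective temperature of order 10^-8 K, and a thermal energy about ten orders of magnitude above Δ E.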

See the chapter Cold fusion again of "Hyper-finite Factors and Dark Matter Hierarchy" or the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis?

How to demonstrate quantum superposition of classical gravitational fields?

There was a rather interesting article in Nature (see this) by Marletto and Vedral about the possibility of demonstrating the quantum nature of the gravitational field by using a weak measurement of the classical gravitational field affecting it only very weakly. There is also an article in arXiv by the same authors (see this). The approach relies on quantum information theory.

The gravitational field would serve as a measurement interaction, and the weak measurements would be applied to a gravitational witness serving as a probe - the technical term is ancilla. The authors claim that weak measurements giving rise to an analog of the Zeno effect could be used to test whether quantum superposition of classical gravitational fields (QSGR) does take place. One can however argue that the extreme weakness of gravitation implies that other interactions and thermal perturbations mask it completely in the standard physics framework. Also the decoherence of gravitational quantum states could be argued to make the test impossible.

One must however take these objections with a big grain of salt. After all, we do not have a theory of quantum gravity, and all assumptions made about quantum gravity might not be correct. For instance, the vision about reduction to Planck length scale might be wrong. There is also the mystery of dark matter, which might force a considerable modification of the views about dark matter. Furthermore, General Relativity itself has conceptual problems: in particular, the classical conservation laws playing a crucial role in quantum field theories are lost. Superstrings were a promising candidate for a quantum theory of gravitation but failed as a physical theory.

In TGD, which was born as an attempt to solve the energy problem of General Relativity and soon extended to a theory unifying gravitation and standard model interactions and also generalizing string models, the situation might however change. In zero energy ontology (ZEO) the sequence of weak measurements is more or less equivalent to the existence of self, identified as a generalized Zeno effect! The value of heff/h=n characterizes the flux tubes mediating various interactions and can be very large for gravitational flux tubes (proportional to GMm/v0, where v0<c has dimensions of velocity, and M and m are the masses at the ends of the flux tube) with Mm> v0mPl2 (mPl denotes Planck mass). This means a long coherence time characterized in terms of the scale of the causal diamond (CD). The lifetime T of self is proportional to heff, so that for a gravitational self T is very long as compared to that for an electromagnetic self. Selves could correspond to sub-selves of self identifiable as sensory mental images, so that sensory perception would correspond to weak measurements; for gravitation the times would be long: we indeed feel the gravitational force all the time. Consciousness and life would provide a basic proof for QSGR (note that a large neuron has mass of order Planck mass!).
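The claim that heff/h can be huge for gravitational flux tubes can be made concrete numerically. The sketch below is illustrative only: the formula hbar_gr = GMm/v0 is from the text, but the parameter value v0 = 10^-4 c and the choice M = Earth mass, m = proton mass are my own example assumptions:

```python
import math

# Illustrative estimate of n = hbar_gr/hbar = G*M*m/(v0*hbar) for a
# gravitational flux tube. v0, M, m below are assumed example values.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.0546e-34    # reduced Planck constant, J*s
C = 2.998e8          # speed of light, m/s

M = 5.972e24         # Earth mass, kg (example choice)
m = 1.6726e-27       # proton mass, kg (example choice)
v0 = 1e-4 * C        # example value of the velocity parameter

n_grav = G * M * m / (v0 * HBAR)   # heff/h for the gravitational flux tube
m_Pl = math.sqrt(HBAR * C / G)     # Planck mass, kg

# The condition Mm > (v0/c)*mPl^2 for n_grav > 1:
lhs, rhs = M * m, (v0 / C) * m_Pl**2

print(f"n_grav = {n_grav:.1e}")                 # enormous, ~1e17
print(f"condition satisfied: {lhs > rhs}")
```

Even for a single proton paired with the Earth the ratio comes out of order 10^17, illustrating why gravitational flux tubes would have macroscopically long coherence times.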

See the article How to demonstrate quantum superposition of classical gravitational fields? or the chapter Quantum criticality and dark matter.

Anomalous neutron production from an arc current in gaseous hydrogen

I learned about a nuclear physics anomaly new to me (actually the anomaly is 64 years old) from an article of Norman and Dunning-Davies in Research Gate (see this). Neutrons are produced from an arc current in hydrogen gas at a rate dramatically exceeding the rate predicted by the standard model of electroweak interactions, in which the production should occur through e-+p→ n+ν by weak boson exchange. The low electron energies also make the process kinematically impossible. An additional strange finding, due to Borghi and Santilli, is that the neutron production can in some cases be delayed by several hours. Furthermore, according to Santilli, neutron production occurs only for hydrogen but not for heavier nuclei.

In the following I sum up the history of the anomaly following closely the representation of Norman and Dunning-Davies (see this): this article gives references and details and is strongly recommended. This includes the pioneering work of Sternglass in 1951, the experiments of Don Carlo Borghi in the late 1960s, and the rather recent experiments of Ruggiero Santilli (see this).

The pioneering experiment of Sternglass

The initial observation of anomalously large production of neutrons using a current arc in hydrogen gas was made by Ernest Sternglass in 1951 while completing his Ph.D. thesis at Cornell. He wrote to Einstein about his inexplicable results, which seemed to occur in conditions lacking sufficient energy to synthesize the neutrons that his experiments had somehow apparently created. Although Einstein firmly advised that the results must be published even though they apparently contradicted standard theory, Sternglass refused due to the stultifying preponderance of contrary opinion, and so his results were excluded under orthodox pressure within the discipline and left unpublished. Edward Trounson, a physicist working at the Naval Ordnance Laboratory, repeated the experiment and again obtained successful results, but they too were not published.

One cannot avoid the question what physics would look like today if Sternglass had published or managed to publish his results. One must however remember that the first indications of cold fusion also emerged surprisingly early but did not receive any attention, and that cold fusion researchers were for decades labelled as next to criminals. Maybe the extreme conservatism following the revolution in theoretical physics during the first decades of the previous century would have prevented his work from receiving the attention that it deserved.

The experiments of Don Carlo Borghi

The Italian priest-physicist Don Carlo Borghi, in collaboration with experimentalists from the University of Recife, Brazil, claimed in the late 1960s to have achieved the laboratory synthesis of neutrons from protons and electrons. C. Borghi, C. Giori, and A. Dall'Olio published in 1993 an article entitled "Experimental evidence of emission of neutrons from cold hydrogen plasma" in Yad. Fiz. 56 and Phys. At. Nucl. 56 (7).

Don Borghi's experiment was conducted in a cylindrical metallic chamber (called a "klystron") filled with partially ionized hydrogen gas at a fraction of 1 bar pressure, traversed by an electric arc with about 500 V and 10 mA as well as by microwaves with 10^10 Hz frequency. Note that the energies of the electrons would be below 0.5 keV and thus non-relativistic. On the cylindrical exterior of the chamber the experimentalists placed various materials suitable to become radioactive when subjected to a neutron flux (such as gold, silver and others). Following exposures of the order of weeks, the experimentalists reported nuclear transmutations due to a claimed neutron flux of the order of 10^4 cps, apparently confirmed by beta emissions not present in the original material.

Don Borghi's claim remained unnoticed for decades due to its incompatibility with the prevailing view about weak interactions. The process e-+p→ n+ν is forbidden by conservation of energy unless the total cm kinetic energy of the proton and the electron is larger than Δ E= mn-mp-me=0.78 MeV. This requires highly relativistic electrons. Also the cross section for the reaction proceeding by exchange of a W boson is extremely small at low energies (about 10^-20 barn; 1 barn = 10^-28 m^2 represents the natural scale for cross sections in nuclear physics). Some new physics must be involved if the effect is real. The situation is strongly reminiscent of cold fusion (or low energy nuclear reactions (LENR)), which many mainstream nuclear physicists still regard as pseudoscience.
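The kinematic threshold quoted above follows directly from the standard mass values; a quick check (the ∼0.5 keV arc-electron energy is the figure mentioned in the description of Borghi's experiment):

```python
# Kinematic threshold for e- + p -> n + nu: the cm kinetic energy must
# exceed Delta_E = m_n - m_p - m_e (neutrino mass neglected).
M_N = 939.5654   # neutron mass, MeV
M_P = 938.2721   # proton mass, MeV
M_E = 0.5110     # electron mass, MeV

delta_E = M_N - M_P - M_E     # MeV; ~0.782 MeV, as quoted in the text

E_arc = 0.5e-3                # MeV; ~0.5 keV electrons in Borghi's arc
shortfall = delta_E / E_arc   # how far below threshold the arc electrons are

print(f"Delta_E   = {delta_E:.3f} MeV")
print(f"shortfall = {shortfall:.0f}x")   # arc electrons ~1500x below threshold
```

The arc electrons are more than three orders of magnitude below threshold, which is why the reaction is kinematically forbidden without new physics or a large effective electron mass.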

Santilli's experiments

Ruggero Santilli (see this) replicated the experiments of Don Borghi. Both in the experiments of Don Carlo Borghi and those of Santilli, delayed neutron synthesis was sometimes observed. Santilli analyzes several alternative proposals explaining the anomaly and suggests that a new spin-zero bound state of electron and proton, with rest mass below the sum of the proton and electron masses, absorbed by nuclei which then decay radioactively, could explain the anomaly. The energy needed to overcome the kinematic barrier could come from the energy liberated by the electric arc. The problem of the model is that it has no connection with the standard model.

According to Santilli:

" A first series of measurements was initiated with Klystron I on July 28,2006, at 2 p.m. Following flushing of air, the klystron was filled up with commercial grale hydrogen at 25 psi pressure. We first used detector PM1703GN to verify that the background radiations were solely consisting of photon counts of 5-7 μR/h without any neutron count; we delivered a DC electric arc at 27 V and 30 A (namely with power much bigger than that of the arc used in Don Borghi's tests...), at about 0.125" gap for about 3 s; we waited for one hour until the electrodes had cooled down, and then placed detector PM1703GN against the PVC cylinder. This resulted in the detection of photons at the rate of 10 - 15 μR/hr expected from the residual excitation of the tips of the electrodes, but no neutron count at all.

However, about three hours following the test, detector PM1703GN entered into sonic and vibration alarms, specifically, for neutron detections off the instrument maximum of 99 cps at about 5' distance from the klystron while no anomalous photon emission was measured. The detector was moved outside the laboratory and the neutron counts returned to zero. The detector was then returned to the laboratory and we were surprised to see it entering again into sonic and vibrational alarms at about 5' away from the arc chamber with the neutron count off scale without appreciable detection of photons, at which point the laboratory was evacuated for safety.

After waiting for 30 minutes (double neutron's lifetime), we were surprised to see detector PM1703GN go off scale again in neutron counts at a distance of 10' from the experimental set up, and the laboratory was closed for the day."

TGD based model

The basic problems to be solved are the following.

  1. What is the role of the current arc and other triggering impulses (such as the microwave radiation or the pressure surge mentioned by Santilli): do they provide energy or do they have some other role?
  2. Neutron production is kinematically impossible if weak interactions mediate it. Even if it were kinematically possible, weak interaction rates are quite too slow. The creation of intermediate states via interactions other than weak would solve both problems. If weak interactions are involved in the creation of the intermediate states, how can their rates be so high?
  3. What causes the strange delays in the production in some cases but not always? Why is hydrogen gas preferred?
The effect brings strongly to mind cold fusion, for which TGD proposes a model (see this) in terms of the generation of dark nuclei with non-standard value heff=n× h of Planck constant, formed from dark proton sequences at flux tubes. The binding energy for these states is supposed to be much lower than for the ordinary nuclei, and eventually these nuclei would decay to ordinary nuclei in collisions with metallic targets attracting positively charged magnetic flux tubes. The energy liberated would be essentially the ordinary nuclear binding energy. Note that the creation of dark proton sequences does not require weak interactions, so that the basic objections are circumvented.


Could this model explain the anomalous neutron production and its strange features?

  1. Why would an electric arc, pressure surge, or microwave radiation be needed? Dark phases are formed at quantum criticality (see this) and give rise to long range correlations via quantum entanglement made possible by large heff=n× h. The presence of an electric arc occurring as dielectric breakdown is indeed a critical phenomenon.

    Already Tesla discovered strange phenomena in his studies of arc discharges, but his discoveries were forgotten by the mainstream. The TGD explanation (see this) could be the same for Tesla's findings, for cold fusion (see this), for the Pollack effect (see this), and for the anomalous production of neutrons. Even electrolysis would involve the Pollack effect and new physics in an essential manner.

    Also energy feed might be involved. Quite generally, in TGD inspired quantum biology the generation of dark states requires energy feed, and the role of metabolic energy is to excite dark states. For instance, dark atoms have smaller binding energy, and the energies of cyclotron states increase with heff/h. Thus part of the microwave photons could be dark and have much higher energy than otherwise.

    Could the production of dark proton sequences at magnetic flux tubes be all that is needed so that the possible dark variant of the reaction e-+p→ n+ν would not be needed at all?

  2. If also weak bosons appear as dark variants, their Compton length is scaled up accordingly, and in scales shorter than the Compton length they behave effectively as massless particles, so that weak interactions would become as strong as electromagnetic interactions. This would make possible the decay of dark proton sequences at magnetic flux tubes to beta-stable dark isotopes via p→ n+e++ν. Neutrons would be produced in the decays of the dark nuclei to ordinary nuclei liberating nuclear binding energy. Note however that TGD also allows one to consider p-adically scaled variants of weak bosons with a much smaller mass scale, possibly important in biology, and one cannot exclude them from consideration.
  3. The reaction e-+p→ n+ν is not necessary in the model. One can however ask whether there could exist a mechanism making the dark reaction e-+p→ n+ν kinematically possible. If the scale of dark nuclear binding energy is strongly reduced, also p→ n+e++ν in dark nuclei becomes kinematically impossible (in ordinary nuclei the nuclear binding energy makes n effectively lighter than p).

    The TGD based model for nuclei as strings of nucleons (see this and this) connected by neutral or charged (possibly colored) meson-like bonds, with quark and antiquark at their ends, could resolve this problem. One could have exotic nuclei in which a proton plus a negatively charged bond behaves effectively like a neutron. Dark weak interactions would take place for neutral bonds between protons and reduce the charge of the bond from q=0 to q= -1, transforming p to an effective n. This was assumed in the model of dark nuclei and also in the model of ordinary nuclei, and it predicts a large number of exotic states. One can of course ask whether nuclear neutrons are actually pairs of a proton and a negatively charged bond.

  4. What about the delays in neutron production occurring in some cases? Why not always? When there is a delay, the dark nuclei could have rotated around the magnetic flux tubes of the magnetic body (MB) of the system before entering the metal target, giving rise to a delayed production.
  5. Why would hydrogen be preferred? Why, for instance, would deuterons and heavier isotopes containing neutrons not form dark nucleon sequences at magnetic flux tubes? Why would the probability for the transformation of, say, D=pn to its dark variant be very small?

    If the binding energy of dark nuclei per nucleon is several orders of magnitude smaller than for ordinary nuclei, the explanation is obvious: the ordinary nuclear binding energy is much higher than the dark binding energy, so that only sequences of dark protons can form dark nuclei. The first guess (see this) was that the binding energy is analogous to Coulomb energy and thus inversely proportional to the size scale of the dark nucleus, scaling like h/heff. One can however ask why D with ordinary size could not serve as a sub-unit.

For details see the chapter Cold Fusion Again or the article Anomalous neutron production from an arc current in gaseous hydrogen.

Non-local production of photon pairs as support for heff/h=n hypothesis

Again a new anomaly! Photon pairs have been created by a new mechanism. Photons emerge at different points! See this.

Could this give support for the TGD based general model of an elementary particle as a string-like object (flux tube) with the first end (wormhole contact) carrying the quantum numbers - in the case of a gauge boson, a fermion and an antifermion at opposite throats of the contact? The second end would carry a neutrino-right-handed-neutrino pair neutralizing the possible weak isospin. This would give only local decays. Also emissions of photons from a charged particle would be local.

Could the bosonic particle be a mixture of two states? For the first state the flux tube would have fermion and antifermion at the same end of the flux tube: only local decays. For the second state the fermion and antifermion would reside at the ends of the flux tube, at throats associated with different wormhole contacts. This second state would give rise to non-local two-photon emissions. Mesons of hadron physics would correspond to this kind of states, and in old-fashioned hadron physics one speaks about photon-vector meson mixing in the description of photon-hadron interactions.

If the Planck constant heff/h=n of the emitting particle is large, the distance between the photon emissions would be long. The non-local decays could make the exotic decays visible and allow one to deduce the value of n! This would however require the transformation of the emitted dark photons to ordinary ones (the same would happen when dark photons transform to biophotons).

Can one say anything about the length of the flux tube? A magnetic flux tube contains a fermionic string. The length of this string is of the order of the Compton length and of the order of the p-adic length scale.

What about the photon itself - could it have non-local fermion-antifermion decays based on the same mechanism? It is not clear what the length of the photonic string is. The photon is massless, so there are no obvious scales! One identification of the length would be as the wavelength, defining also the p-adic length scale.

To sum up: the non-local decays and emissions could lend strong support for both the flux tube identification of particles and the hierarchy of Planck constants. It might even be possible to measure the value of n associated with the quantum critical state by detecting decays of this kind.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

For details see the chapter Quantum criticality and dark matter.

Hierarchy of Planck constants, space-time surfaces as covering spaces, and adelic physics

From the beginning it was clear that heff/h=n corresponds to the number of sheets of a covering space of some kind. First the covering was assigned with causal diamonds. Later I assigned it with space-time surfaces, but the details of the covering remained unclear. The final identification emerged only at the beginning of 2017.

Number theoretical universality (NTU) leads to the notion of adelic space-time surface (monadic manifold) involving a discretization in an extension of rationals defining a particular level in the hierarchy of adeles defining an evolutionary hierarchy. The first formulation was proposed here and a more elegant formulation here.

The key constraint is NTU for the adelic space-time containing sheets in the real sector and various p-adic sectors, which are extensions of p-adic number fields induced by an extension of rationals, which can also contain powers of a root of e inducing a finite-dimensional extension of p-adic numbers (e^p is an ordinary p-adic number in Qp).

One identifies the numbers in the extension of rationals as common to all number fields and demands that the imbedding space has a discretization in an extension of rationals in the sense that the preferred coordinates of the imbedding space implied by isometries belong to the extension of rationals for the points of the number theoretic discretization. This implies that the versions of isometries with group parameters in the extension of rationals act as discrete versions of symmetries. The correspondence between real and p-adic variants of the imbedding space is extremely discontinuous for a given adelic imbedding space (there is a hierarchy of them with levels characterized by extensions of rationals). Space-time surfaces typically contain a rather small set of points in the extension (x^n+y^n=z^n contains no rational points for n>2!). Hence one expects that a discretization with a finite cutoff length at the space-time level could be enough for the relatively low space-time dimension D=4.

After that one assigns in the real sector an open set to each point of the discretization, and these open sets define a manifold covering. In the p-adic sector one can assign the 8th Cartesian power of ordinary p-adic numbers to each point of the number theoretic discretization. This gives both the discretization and a smooth local manifold structure. What is important is that the Galois group of the extension acts on these discretizations, and one obtains from a given discretization a covering space with the number of sheets equal to a factor of the order of the Galois group, typically the order itself.

heff/h=n was identified from the beginning as the dimension of the poly-sheeted covering assignable to the space-time surface. The number n of sheets would naturally be a factor of the order of the Galois group, implying that heff/h=n is bound to increase during number theoretic evolution, so that the algebraic complexity increases. Note that WCW decomposes into sectors corresponding to the extensions of rationals, and the dimension of the extension is bound to increase in the long run by localizations to various sectors in self measurements (see this). The dark matter hierarchy represents number theoretical/adelic physics and therefore now has a rather rigorous mathematical justification. It is however good to recall that the heff/h=n hypothesis emerged from an experimental anomaly: radiation at ELF frequencies had quantal effects on the vertebrate brain, impossible in standard quantum theory since the energies E=hf of the photons are ridiculously small as compared to the thermal energy.
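How "ridiculously small" the ELF photon energies are can be made concrete. A minimal estimate, assuming an EEG-range frequency of 10 Hz and physiological temperature 310 K (both my illustrative choices), shows the size of n = heff/h needed for E = heff f to exceed the thermal energy:

```python
# How large must n = heff/h be for an ELF photon to beat thermal energy?
H = 4.135667e-15     # Planck constant in eV*s
K_B = 8.617333e-5    # Boltzmann constant in eV/K

f = 10.0             # Hz, EEG alpha-band frequency (illustrative)
T = 310.0            # K, physiological temperature (illustrative)

E_photon = H * f              # ordinary photon energy: ~4e-14 eV
E_thermal = K_B * T           # thermal energy scale: ~0.027 eV
n_min = E_thermal / E_photon  # heff/h needed for E = heff*f > kT

print(f"E(hf)  = {E_photon:.1e} eV")
print(f"kT     = {E_thermal:.3f} eV")
print(f"n_min  = {n_min:.1e}")   # ~6e11
```

An ordinary 10 Hz photon carries roughly twelve orders of magnitude less energy than kT, so values of n of order 10^12 would be required for such radiation to have non-thermal quantal effects.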

Indeed, since n is a positive integer, evolution is analogous to diffusion on a half-line, and n unavoidably increases in the long run, just as a particle diffuses farther away from the origin (by looking at what gradually happens near a paper basket one understands what this means). The increase of n implies the increase of maximal negentropy and thus of negentropy. Negentropy Maximization Principle (NMP) follows from adelic physics alone, and there is no need to postulate it separately. Things get better in the long run although we do not live in the best possible world, as Leibniz, who first proposed the notion of monad, claimed!

For details see the chapter Quantum criticality and dark matter.

Time crystals, macroscopic quantum coherence, and adelic physics

Time crystals (see this) were proposed by Frank Wilczek in 2012. The idea is that there is a periodic collective motion so that one can see the system as an analog of a 3-D crystal with time appearing as a fourth lattice dimension. One can learn more about real life time crystals here.

The first crystal was created by Moore et al (see this) and involved magnetization. By adding a periodic driving force it was possible to generate spin flips inducing a collective spin flip as a kind of domino effect. The surprise was that the period was twice the original period, and small changes of the driving frequency did not affect the period. One had something more than a forced oscillation - a genuine time crystal. The period of the driving force - the Floquet period - was 74-75 μs, and the system was measured for N=100 Floquet periods, or about 7.4-7.5 milliseconds (1 ms happens to be of the same order of magnitude as the duration of a nerve pulse). I failed to find a comment about the size of the system. With quantum biological intuition I would guess something like the size of a large neuron: about 100 micrometers.

Second law does not favor time crystals. The time in which single particle motions are thermalized is expected to be rather short. In the case of condensed matter systems the time scale would not be much longer than the inverse of the rate for a typical atomic transition. The rate for the 2P → 1S transition of hydrogen atom estimated here gives a general idea. The decay rate is proportional to ω3d2, where ω= Δ E/hbar is the frequency corresponding to the energy difference between the states and d is the dipole moment, of order e a0, a0 the Bohr radius. The average lifetime as the inverse of the decay rate is 1.6 ns and is expected to give a general order of magnitude estimate.
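The 1.6 ns figure can be reproduced from the standard electric dipole formula Γ = ω³d²/(3π ε0 ħ c³); the exact hydrogen matrix element |⟨1s|r|2p⟩|² = (2^15/3^10) a0² is textbook material, not specific to this text:

```python
import math

# Lifetime of the hydrogen 2P state from the electric dipole formula
# Gamma = omega^3 * d^2 / (3*pi*eps0*hbar*c^3).
EPS0 = 8.8541878e-12       # vacuum permittivity, F/m
HBAR = 1.0545718e-34       # reduced Planck constant, J*s
C = 2.99792458e8           # speed of light, m/s
E_CHARGE = 1.60217663e-19  # elementary charge, C
A0 = 5.2917721e-11         # Bohr radius, m

E_LYA = 10.199 * E_CHARGE  # 2P -> 1S energy difference, J
omega = E_LYA / HBAR       # transition angular frequency, rad/s

d2 = (2**15 / 3**10) * (E_CHARGE * A0)**2   # |<1s| e*r |2p>|^2

gamma = omega**3 * d2 / (3 * math.pi * EPS0 * HBAR * C**3)
tau = 1.0 / gamma          # lifetime in seconds

print(f"tau = {tau*1e9:.2f} ns")   # ~1.6 ns, as quoted in the text
```

The computed lifetime agrees with the 1.6 ns quoted above, confirming it as the natural thermalization scale for single-atom dynamics.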

The proposal is that the systems in question emerge in non-equilibrium thermodynamics, which indeed predicts a master-slave hierarchy of time and length scales with masters providing the slowly changing background in which slaves are forced to move. I am not enough of a specialist to express any strong opinions about the thermodynamical explanation.

What does TGD say about the situation?

  1. So called Anderson localization (see this) is believed to accompany time crystals. In the TGD framework this translates to the fusion of the 3-surfaces corresponding to particles into a single large 3-surface consisting of particle 3-surfaces glued together by magnetic flux tubes. One can say that a relative localization of the particles occurs and they more or less lose their relative translational degrees of freedom. This effect occurs always when bound states are formed and would happen already for hydrogen atom.

    The TGD vision would actually solve a fundamental problem of QED caused by the assumption that proton and electron behave as independent point-like particles: QED predicts a lot of non-existing quantum states since the Bethe-Salpeter equation assumes degrees of freedom which do not actually exist. Single particle descriptions (Schrödinger equation and Dirac equation), treating proton and electron geometrically as effectively a single particle with reduced mass (rather than independent particles), give an excellent description, whereas QED, which was thought to be something more precise, fails. Quite generally, bound states are not properly understood in QFTs. The color confinement problem is a second example of this: usually it is believed that the failure is solely due to the fact that the color interaction is strong, but the real reason might be much deeper.

  2. In the TGD Universe time crystals would be many-particle systems having a collection of 3-surfaces connected by magnetic flux tubes (a tensor network in terms of condensed matter complexity theory). The magnetic flux tubes would carry dark matter in the TGD sense, having heff/h=n increasing the quantal scales - both spatial and temporal - so that one could have time crystals in long scales.

    Biology could provide basic examples. For instance, EEG resonance frequencies could be associated with time crystals assignable to the magnetic body of the brain carrying dark matter with large heff/h=n - so large that the dark photon energy E=hefff would correspond to an energy above thermal energy. If bio-photons result from phase transitions heff/h=n→ 1, the energy would be in the visible-UV range. These frequencies would in turn drive the visible matter in the brain and force it to oscillate coherently.

  3. The time crystals claimed by Monroe and Lukin to have been created in the laboratory demand a feed of energy (see this), unlike the time crystals proposed by Wilczek. The finding is consistent with the TGD based model. In TGD the generation of a large heff phase demands energy, the reason being that the energies of states increase with heff. For instance, atomic binding energies decrease as 1/heff2. In quantum biology this requires feeding of metabolic energy. Also now the interpretation would be analogous.
  4. The standard physics view would rely on non-equilibrium thermodynamics, whereas the TGD view about time crystals relies on dark matter and the hierarchy of Planck constants, in turn implied by adelic physics suggested to provide a coherent description fusing real physics as physics of matter and various p-adic physics as physics of cognition.

    Number theoretical universality (NTU) leads to the notion of adelic space-time surface (monadic manifold) involving a discretization in an extension of rationals defining a particular level in the hierarchy of adeles defining an evolutionary hierarchy. heff/h=n has been identified from the beginning as the dimension of the poly-sheeted covering assignable to the space-time surface. The action of the Galois group of the extension indeed gives rise to a covering space. The number n of sheets would be the order of the Galois group, implying that heff/h=n is bound to increase during evolution, so that the complexity increases.

    Indeed, since n is a positive integer, evolution is analogous to diffusion on a half-line: n unavoidably increases in the long run just as a diffusing particle drifts farther from the origin (by looking at what gradually happens near a paper basket one understands what this means). The increase of n implies the increase of the maximal negentropy and thus of negentropy. Negentropy Maximization Principle (NMP) follows from adelic physics alone and there is no need to postulate it separately. Things get better in the long run, although we do not live in the best possible world, contrary to what Leibniz - who first proposed the notion of monad - believed!
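The bio-photon estimate in point 2 can be put in a few lines of arithmetic (a sketch; the 10 Hz alpha-band EEG frequency and the 2 eV visible photon energy are illustrative values assumed here, not numbers fixed by the text):

```python
# How large must n = heff/h be for a dark photon at an EEG frequency
# to carry the energy of a visible bio-photon, E = heff * f = n * h * f?
h_eV_s = 4.135667e-15   # Planck constant in eV*s
f_eeg = 10.0            # assumed alpha-band EEG frequency, Hz
E_visible = 2.0         # assumed visible-light photon energy, eV

n = E_visible / (h_eV_s * f_eeg)
print(f"n = heff/h ~ {n:.1e}")   # of order 10^13
```

The huge value of n, of order 10^13, is what makes it possible for an extremely low frequency to carry an above-thermal energy quantum.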

For details see the chapter Quantum criticality and dark matter.

Why metabolism and what happens in bio-catalysis?

The TGD view about dark matter also gives a strong grasp of metabolism and bio-catalysis - the key elements of biology.

Why is metabolic energy needed?

The simplest and at the same time most difficult question that an innocent student can ask in a biology class is: "Why must we eat?". Or in more physics oriented language: "Why must we get metabolic energy?". The teacher's answer might be that we do not eat to get energy but to get order. The stuff that we eat contains ordered energy: we eat order. But order in standard physics is lack of entropy, lack of disorder. The student could get nosy and argue that excretion produces the same outcome as eating but is not enough for survival.

We could go to a deeper level and ask why metabolic energy is needed in biochemistry. Suppose we do this in the TGD Universe with dark matter identified as phases characterized by heff/h=n.

  1. Why would metabolic energy be needed? The intuitive answer is that evolution requires it and that evolution corresponds to the increase of n=heff/h. To see the answer to the question, notice that the energy scale for the bound states of an atom is proportional to 1/h^2, and for a dark atom to 1/heff^2 ∝ 1/n^2 (do not confuse this n with the integer n labelling the states of the hydrogen atom!).
  2. Dark atoms have smaller binding energies and their creation by a phase transition increasing the value of n demands a feed of energy - metabolic energy! If the metabolic energy feed stops, n is gradually reduced. System gets tired, loses consciousness, and eventually dies.

    What is remarkable is that the scale of atomic binding energies decreases with n only in dimension D=3. In other dimensions it increases, and in D=4 one cannot even speak of bound states! This can be easily found by a study of the Schrödinger equation for the analog of the hydrogen atom in various dimensions. Life based on metabolism seems to make sense only in spatial dimension D=3. Note however that there are also other quantum states than atomic states, with a different dependence of energy on heff.
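A minimal sketch of the 1/n^2 scaling in point 1 (hydrogen ground state; treating the ordinary atom as n=1 and the target dark value n=3 as illustrative choices, not values fixed by the text):

```python
# Binding energy of a dark hydrogen atom scales as E(n) = E0 / n^2,
# with n = heff/h and the ordinary atom taken as n = 1 for simplicity.
E0 = 13.6  # eV, hydrogen ground-state binding energy

def binding_energy(n: int) -> float:
    return E0 / n**2

# Metabolic energy needed to lift the atom from n=1 to the dark state n=3:
delta = binding_energy(1) - binding_energy(3)
print(f"energy cost of the n: 1 -> 3 transition: {delta:.2f} eV")
```

The dark state is more weakly bound, so the difference must be fed in from outside; when n drops back, the same energy is liberated.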

Conditions on bio-catalysis

Bio-catalysis is a key mechanism of biology and its extreme efficacy remains to be understood. Enzymes are proteins and ribozymes are RNA sequences acting as biocatalysts.

What does catalysis demand?

  1. Catalyst and reactants must find each other. How this could happen is very difficult to understand in standard biochemistry, in which living matter is seen as a soup of biomolecules. I have already considered the mechanisms making it possible for the reactants to find each other. For instance, in the translation of mRNA to protein, tRNA molecules must find their way to mRNA at the ribosome. The proposal is that reconnection of U-shaped magnetic flux tubes generates a pair of flux tubes connecting the mRNA and tRNA molecules, and that a reduction of the value of heff=n×h shortens the flux tubes and takes care of this step. This applies also to DNA transcription and DNA replication, and to bio-chemical reactions in general.
  2. The catalyst must provide energy for the reactants (their number is typically two) to overcome the potential wall making the reaction rate very slow for energies around thermal energy. The TGD based model for the hydrino atom - the state with larger binding energy than the hydrogen atom claimed by Randell Mills - suggests a solution. Some hydrogen atom in the catalyst goes from a (dark) hydrogen atom state to a hydrino-like state (a state with smaller heff/h) and liberates the excess binding energy, kicking either reactant over the potential wall so that the reaction can proceed. After the reaction the catalyst returns to the normal state and absorbs the binding energy back.
  3. In the reaction volume catalyst and reactants must be guided to correct places. The simplest model of catalysis relies on the lock-and-key mechanism. The generalized Chladni mechanism forcing the reactants to a two-dimensional closed nodal surface is a natural candidate to consider. There are also additional conditions. For instance, the reactants must have correct orientation, and this could be forced by the interaction with the em field of the massless extremal (ME) involved with the Chladni mechanism.
  4. One must also have coherence of chemical reactions, meaning that the reaction can occur in a large volume - say in different cell interiors - simultaneously. Here the magnetic body (MB) would induce the coherence by using MEs. The Chladni mechanism might explain this if there is interference of the forces caused by periodic standing waves, themselves represented as pairs of MEs.

Phase transition reducing the value of heff/h=n as a basic step in bio-catalysis

The hydrogen atom allows also large heff/h=n variants with n>6, with the scale of the energy spectrum behaving as (6/n)^2 if n=6 holds true for visible matter. The reduction of n as the flux tube contracts would liberate binding energy, which could be used to promote the catalysis.

The notion of high energy phosphate bond is a somewhat mysterious concept. There are claims that there is no such bond. I have spent a considerable amount of time pondering this problem. Could phosphate contain a (dark) hydrogen atom able to go to a state with a smaller value of heff/h and liberate the excess binding energy? Could the phosphorylation of the acceptor molecule transfer this dark atom associated with the phosphate of ATP to the acceptor molecule? Could the mysterious high energy phosphate bond correspond to the dark atom state? Metabolic energy would be needed to transform ADP to ATP and would generate the dark atom.

Could solar light kick atoms into dark states and in this manner store metabolic energy? Could nutrients carry these dark atoms? Could this energy be liberated as the dark atoms return to ordinary states and be used to drive protons against the potential gradient through ATP synthase - analogous to the turbine of a power plant - transforming ADP to ATP and reproducing the dark atom and thus the "high energy phosphate bond" in ATP? Can one see metabolism as a transfer of dark atoms? Could possible negentropic entanglement disappear and emerge again after ADP→ATP?

Here it is essential that the energies of the hydrogen atom depend on hbareff=n×hbar as hbareff^m with m=-2<0. A hydrogen-like atom in dimension D has a Coulomb potential behaving as 1/r^(D-2) by the Gauss law, and the Schrödinger equation predicts for D≠4 that the energies satisfy E ∝ (heff/h)^m, m=2+4/(D-4). For D=4 the formula breaks down since in this case the dependence on hbar is not given by a power law. m is negative only for D=3, where one has m=-2. Hence D=3 would be the unique dimension allowing the hydrino-like states making possible bio-catalysis and life in the proposed scenario.
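The exponent m=2+4/(D-4) quoted above can be tabulated directly; only D=3 yields a negative value (a sketch; D=4 is skipped since the power law breaks down there):

```python
from fractions import Fraction

def m(D: int) -> Fraction:
    """Exponent in E ~ heff^m for a 1/r^(D-2) Coulomb potential in D dimensions."""
    return 2 + Fraction(4, D - 4)

dims = (1, 2, 3, 5, 6)
for D in dims:
    print(f"D={D}: m = {m(D)}")

negative = [D for D in dims if m(D) < 0]
print("dimensions with m < 0:", negative)  # only D=3, where m = -2
```

For D=1 one gets m=2/3, for D=2 exactly m=0, and for D>4 the exponent is positive, so binding energies shrink with growing heff only in three spatial dimensions.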

It is also essential that the flux tubes are radial flux tubes in the Coulomb field of the charged particle. This makes sense in many-sheeted space-time: the electron would be associated with a pair formed by a flux tube and the 3-D atom, so that only part of the electric flux would interact with the electron touching both space-time sheets. This would give the analog of the Schrödinger equation in a Coulomb potential restricted to the interior of the flux tube. Dimensional analysis for the 1-D Schrödinger equation with Coulomb potential would give also in this case the 1/n^2 dependence. The same applies to states localized to 2-D sheets with a charged ion at the center. These kinds of states bring in mind the Rydberg states of the ordinary atom with a large value of the principal quantum number.

The condition that the dark binding energy is above the thermal energy gives the condition n ≤ 32 on the value of heff/h=n. The size scale of the largest allowed dark atom would be about 100 nm, roughly 10 times the thickness of the cell membrane.
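The bound can be reproduced to within rounding under explicit assumptions (a sketch: thermal energy taken as kT/2 at body temperature T=310 K, and the dark atom size scaled as n^2 times the Bohr radius; the size comes out at roughly 50 nm, the same order of magnitude as the quoted 100 nm):

```python
import math

E0 = 13.6        # eV, hydrogen ground-state binding energy
a0 = 0.0529      # nm, Bohr radius
k_B = 8.617e-5   # eV/K, Boltzmann constant
T = 310.0        # K, body temperature (assumed)

E_thermal = k_B * T / 2                 # ~0.013 eV per degree of freedom
# Require E0 / n^2 >= E_thermal  =>  n <= sqrt(E0 / E_thermal)
n_max = math.isqrt(int(E0 / E_thermal))
print("largest n with dark binding energy above thermal energy:", n_max)
print(f"size of the largest dark atom: ~{n_max**2 * a0:.0f} nm")
```

The exact cutoff depends on whether one uses kT or kT/2 and on the reference temperature, so the result should be read as n ≲ 32 rather than a sharp bound.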

For details see the chapter Quantum criticality and dark matter.

NMP and self

The preparation of an article about the number theoretic aspects of TGD forced me to go through various related ideas and led to a considerable integration of them. In this note ideas related directly to consciousness and cognition are discussed.

  1. The adelic approach strongly suggests the reduction of NMP to number theoretic physics, somewhat like the second law reduces to probability theory. The dimension of the extension of rationals, characterizing the hierarchy level of physics and defining an observable measured in state function reductions, is positive and can only increase in the statistical sense. Therefore the maximal value of entanglement negentropy increases as new entangling number theoretic degrees of freedom emerge. heff/h=n, identifiable as the order of the Galois group of the extension or of a factor of it, characterizes the number of these degrees of freedom for a given space-time surface as the number of its sheets.
  2. State function reduction has hitherto been assumed to correspond always to a measurement of the density matrix, which can be seen as a reaction of the subsystem to its environment. This makes perfect sense at the space-time level. Higher level measurements occur however at the level of WCW and correspond to a localization to some sector of WCW, determining for instance the quantization axes of various quantum numbers. Even the measurement of heff/h=n would measure the dimension of the Galois group and force a localization to an extension with a Galois group of this dimension. These measurements cannot correspond to measurements of a density matrix, since different WCW sectors cannot entangle by WCW locality. This finding will be discussed in the following.

Evolution of NMP

The view about Negentropy Maximization Principle (NMP) has co-evolved with the notion of self and I have considered many variants of NMP.

  1. The original formulation of NMP was in positive energy ontology and made the same predictions as standard quantum measurement theory. The new element was that the density matrix of the sub-system defines the fundamental observable and the system goes to its eigenstate in state function reduction. As found, the localizations to WCW sectors define what might be called self-measurements, identifiable as active volitions rather than reactions.
  2. In p-adic physics one can assign to rational and even algebraic entanglement probabilities a number theoretical entanglement negentropy (NEN) satisfying the same basic axioms as the ordinary Shannon entropy but having negative values, and therefore having an interpretation as information. The definition of the (real valued) p-adic negentropy reads as Sp= -∑k Pk log(|Pk|p), where |.|p denotes the p-adic norm. The news is that Np= -Sp can be positive, and is positive for rational entanglement probabilities. The real entanglement entropy S is always non-negative.

    NMP would force the generation of negentropic entanglement (NE) and stabilize it. The NE resources of the Universe - one might call them Akashic records - would steadily increase.

  3. A decisive step of progress was the realization that NTU forces all states in adelic physics to have entanglement coefficients in some extension of rationals inducing a finite-D extension of p-adic numbers. The same entanglement can be characterized by the real entropy S and the p-adic negentropies Np, which can be positive. One can define also the total p-adic negentropy N= ∑p Np over all p, and the total negentropy Ntot=N-S.

    For rational entanglement probabilities it is easy to demonstrate that a generalization of the adelic theorem holds true: Ntot=N-S=0. NMP based on Ntot rather than N would therefore say nothing about rational entanglement. For extensions of rationals it is easy to find that N-S>0 is possible if the entanglement probabilities are of the form Xi/n with |Xi|p=1 and n an integer. Should one identify the total negentropy as the difference Ntot=N-S or as Ntot=N?

    Irrespective of the answer, large p-adic negentropy seems to force large real entropy: this nicely correlates with the paradoxical finding that living systems tend to be entropic although one would expect just the opposite. This relates in a very interesting manner to the work of the physicist Jeremy England. The negentropy would be cognitive negentropy and not visible for ordinary physics.

  4. The latest step in the evolution of ideas about NMP was the question whether NMP follows from number theory alone, just as the second law follows from probability theory! This irritates the theoretician's ego but is a victory for the theory. The dimension n of the extension is a positive integer and cannot but grow in the statistical sense in evolution! Since the maximal value of negentropy (defined as N-S) is expected to increase with n, negentropy must increase in the long run.
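The p-adic negentropy formula of point 2 and the adelic identity Ntot=N-S=0 of point 3 can be verified numerically for rational probabilities (a sketch; only the primes dividing the numerators or denominators of the Pk contribute to N):

```python
import math
from fractions import Fraction

def p_adic_norm(x: Fraction, p: int) -> Fraction:
    """|x|_p = p^(-v), where v is the power of p appearing in x."""
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return Fraction(p) ** (-v)

P = [Fraction(1, 6), Fraction(1, 3), Fraction(1, 2)]   # rational probabilities
assert sum(P) == 1

S = -sum(float(pk) * math.log(float(pk)) for pk in P)   # real Shannon entropy
# N_p = -S_p = sum_k P_k log |P_k|_p; the total N sums over the relevant primes
N = sum(float(pk) * math.log(float(p_adic_norm(pk, p)))
        for p in (2, 3) for pk in P)
print(f"S = {S:.6f}, N = {N:.6f}, N - S = {N - S:.1e}")
```

The vanishing of N-S is the entropy version of the adelic product formula ∏v |x|v = 1 for rational x; for genuine extensions of rationals the balance can tilt so that N-S>0.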

Number theoretic entanglement can be stable

Number theoretical Shannon entropy can serve as a measure for the genuine information assignable to a pair of entangled systems. Entanglement with coefficients in the extension is always negentropic if the entanglement negentropy comes from the p-adic sectors only. It can be negentropic if negentropy is defined as the difference of the p-adic negentropy and the real entropy.

The diagonalized density matrix need not belong to the algebraic extension, since the probabilities defining its diagonal elements are eigenvalues of the density matrix obtained as roots of an N:th order polynomial, which in the generic case requires an N-dimensional algebraic extension of rationals. One can argue that since diagonalization is not possible, also the state function reduction selecting one of the eigenstates is impossible unless a phase transition increasing the dimension of the algebraic extension used occurs simultaneously. This kind of NE could give rise to cognitive entanglement.

There is also a special kind of NE, which can result if one requires that the density matrix serves as a universal observable in state function reduction. The outcome of the reduction must be an eigen space of the density matrix, which is a projector to this subspace acting as an identity matrix inside it. This kind of NE allows all unitarily related bases as eigenstate bases (the unitary transformations must belong to the algebraic extension). This kind of NE could serve as a correlate for "enlightened" states of consciousness. Schrödinger's cat would be in this kind of state, stably in a superposition of dead and alive, and any state basis obtained by a unitary rotation from this basis would be equally good. One can say that there are no discriminations in this state, and this is what is claimed about "enlightened" states too.

The vision about number theoretical evolution suggests that NMP forces the generation of NE resources as NE assignable to the "passive" boundary of the CD, for which no changes occur during the sequence of state function reductions defining self. It would define the unchanging self as a negentropy resource, which could be regarded as a kind of Akashic records. During the next "re-incarnation", after the first reduction to the opposite boundary of the CD, the NE associated with the reduced state would serve as the new Akashic records for the time reversed self. If NMP reduces to the statistical increase of heff/h=n, the conscious information content of the Universe increases in the statistical sense. In the best possible world of SNMP it would increase steadily.

Does NMP reduce to number theory?

The heretic question that emerged quite recently is whether NMP is actually needed at all! Is NMP a separate principle, or could NMP be reduced to mere number theory? Consider first the possibility that NMP is not needed at all as a separate principle.

  1. The value of heff/h=n should increase in the evolution by phase transitions increasing the dimension of the extension of rationals. heff/h=n has been identified as the number of sheets of some kind of covering space. The Galois group of the extension acts on number theoretic discretizations of the monadic surface and the orbit defines a covering space. Suppose n is the number of sheets of this covering and thus the dimension of the Galois group of the extension of rationals, or of a factor of it.
  2. It has already been noticed that the "big" state function reductions giving rise to death and reincarnation of self could correspond to a measurement of n=heff/h implied by the measurement of the extension of the rationals defining the adeles. The statistical increase of n follows automatically and implies a statistical increase of the maximal entanglement negentropy.

    The resulting world would not be the best possible one, unlike for a strong form of NMP demanding that negentropy increases in "big" state function reductions. n can also decrease temporarily, and such decreases seem to be needed. In the TGD inspired model of bio-catalysis the phase transition reducing the value of n for the magnetic flux tubes connecting reacting bio-molecules allows them to find each other in the molecular soup. This would be crucial for understanding processes like DNA replication and transcription.

  3. State function reduction corresponding to the measurement of the density matrix could occur to an eigenstate/eigenspace of the density matrix only if the corresponding eigenvalue and eigenstate/eigenspace are expressible using numbers in the extension of rationals defining the adele considered. In the generic case these numbers belong to an N-dimensional extension of the original extension. This can make the entanglement stable with respect to measurements of the density matrix.

    A phase transition to an extension of the extension containing these numbers would be required to make the reduction possible. A step in number theoretic evolution would occur. Also an entanglement of the measured state pairs with those of the measuring system in an extension containing the extension of the extension would make the reduction possible. Negentropy could be reduced, but the higher-D extension would provide the potential for more negentropic entanglement, and NMP would hold true in the statistical sense.

  4. If one has a higher-D eigen space of the density matrix, the p-adic negentropy is largest for the entire subspace, and the sum of the real and p-adic negentropies vanishes for all of the sub-spaces. For negentropy identified as total p-adic negentropy, SNMP would select the entire sub-space and NMP would indeed say something explicit about negentropy.

Or is NMP needed as a separate principle?

Hitherto I have postulated NMP as a separate principle. The strong form of NMP (SNMP) states that negentropy does not decrease in "big" state function reductions corresponding to death and re-incarnations of self.

One can however argue that SNMP is not realistic. SNMP would force the Universe to be the best possible one, and this does not seem to be the case. Also ethically responsible free will would be very restricted, since self would be forced always to do the best deed, that is, to maximally increase the negentropy serving as the information resources of the Universe. Giving up a separate NMP altogether would allow to have also "Good" and "Evil".

This forces to consider what I have christened the weak form of NMP (WNMP). Instead of the maximal dimension corresponding to an N-dimensional projector, self can choose also a lower-dimensional sub-space, and a 1-D sub-space corresponds to the vanishing entanglement and negentropy assumed in standard quantum measurement theory. As a matter of fact, this can also lead to a larger negentropy gain, since the negentropy depends strongly on the largest power of p in the dimension of the resulting eigen sub-space of the density matrix. This could apply also to the purely number theoretical reduction of NMP.

WNMP suggests how to understand the notions of Good and Evil. The various choices in the state function reduction would correspond to a Boolean algebra, which suggests an interpretation in terms of what might be called emotional intelligence. It also turns out that one can understand how the p-adic length scale hypothesis - actually its generalization - emerges from WNMP.

  1. One can start from ordinary quantum entanglement. It corresponds to a superposition of pairs of states. One state in the pair corresponds to the internal state of the self and the second to a state of the external world or the biological body of self. In negentropic quantum entanglement each state is replaced with a pair of sub-spaces of the state spaces of self and external world. The dimension of the sub-space depends on which pair is in question. In state function reduction one of these pairs is selected and the deed is done. How to make some of these deeds good and some bad? Recall that WNMP allows only the possibility to generate NE but does not force it. WNMP would be like God allowing the possibility to do good but not forcing good deeds.

    Self can choose any sub-space of the subspace defined by the k≤N-dimensional projector, and a 1-D subspace corresponds to the standard quantum measurement. For k=1 the state function reduction leads to vanishing negentropy and to a separation of self and the target of the action. Negentropy does not increase in this action and self is isolated from the target: a kind of price for sin.

    For the maximal dimension of this sub-space the negentropy gain is maximal. This deed would be good, and by the proposed criterion NE corresponds to a conscious experience with positive emotional coloring. Interestingly, there are 2^k-1 possible choices, which is almost the dimension of the Boolean algebra consisting of k independent bits. The excluded option corresponds to the 0-dimensional sub-space - the empty set in the set theoretic realization of Boolean algebra. This could relate directly to the fermionic oscillator operators defining a basis of Boolean algebra - here the Fock vacuum would be the excluded state. The deed in this sense would be a choice of how loving the attention directed to the system of the external world is.

  2. A map of different choices of k-dimensional sub-spaces to k-fermion states is suggestive. The realization of logic in terms of emotions of different degrees of positivity would be mapped to many-fermion states - perhaps zero energy states with vanishing total fermion number. State function reductions to k-dimensional spaces would be mapped to k-fermion states: quantum jumps to quantum states!

    The problem brings in mind quantum classical correspondence in quantum measurement theory. The direction of the pointer of the measurement apparatus (in a very metaphorical sense) corresponds to the outcome of state function reduction, which is now a 1-D subspace. For an ordinary measurement the pointer has k positions. Now it must have 2^k-1 positions. To the discrete space of k pointer positions one must assign the fermionic Clifford algebra of second quantized fermionic oscillator operators. The hierarchy of Planck constants and dark matter suggests the realization: replace the pointer with its k-sheeted space-time covering and consider zero energy states made of pairs of k-fermion states at the sheets of the covering. Dark matter would therefore be necessary for cognition. The role of the fermions would be to "mark" the k space-time sheets of the covering.
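The 2^k-1 counting used for the pointer positions is simply the number of non-empty subsets of k fermionic "bits", i.e. the Boolean algebra with the empty set (the excluded Fock vacuum) removed. A minimal sketch:

```python
from itertools import combinations

def pointer_positions(k: int):
    """All non-empty subsets of k 'bits': the possible outcomes of a
    negentropic measurement with up to k-dimensional projectors."""
    return [s for r in range(1, k + 1) for s in combinations(range(k), r)]

k = 3
positions = pointer_positions(k)
print(len(positions))            # 2^k - 1 = 7
# An ordinary measurement would have only the k rank-1 outcomes:
rank_one = [s for s in positions if len(s) == 1]
print(len(rank_one))             # k = 3
```

The gap between k ordinary outcomes and 2^k-1 negentropic ones is what motivates the Boolean-algebra interpretation in the text.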

The cautious conclusion is that NMP as a separate principle is not necessary: it follows in the statistical sense from the unavoidable increase of n=heff/h identified as the dimension of the extension of rationals defining the adeles, if this extension or at least the dimension of its Galois group is observable.

For details see the chapter Negentropy Maximization Principle or the article Re-examination of the basic notions of TGD inspired theory of consciousness.

WCW and the notion of intentional free will

The preparation of an article about the number theoretic aspects of TGD forced me to go through various related ideas and led to a considerable integration of them. In this note ideas related directly to consciousness and cognition are discussed.

The notion of self can be seen as a generalization of the poorly defined notion of observer in quantum physics. In the following I take the role of the skeptic trying to be as critical as possible.

The original definition of self was as a subsystem able to remain unentangled under the state function reductions associated with subsequent quantum jumps. The density matrix was assumed to define the universal observable. Note that a density matrix which is a power series of a product of matrices representing commuting observables has in the generic case eigenstates which are simultaneous eigenstates of all the observables. A second aspect of self was assumed to be the integration of subsequent quantum jumps to a coherent whole giving rise to the experienced flow of time.

The precise identification of self allowing to understand both of these aspects turned out to be a difficult problem. I became aware of the solution of the problem in terms of zero energy ontology (ZEO) only rather recently (2014).

  1. Self corresponds to a sequence of quantum jumps integrating to a single unit as in the original proposal, but these quantum jumps correspond to state function reductions to a fixed boundary of the causal diamond (CD) leaving the corresponding parts of the zero energy states invariant - "small" state function reductions. The parts of the zero energy states at the second boundary of the CD change, and even the position of the tip of the opposite boundary changes: one actually has a wave function over the positions of the second boundary (CD sizes, roughly) and this wave function changes. In positive energy ontology these repeated state function reductions would have no effect on the state (Zeno effect), but in the TGD framework a change occurs at the second boundary and gives rise to the experienced flow of time, its arrow, and self: self is a generalized Zeno effect.
  2. The first quantum jump to the opposite boundary corresponds to the act of "free will" or the birth of the re-incarnated self. Hence the act of "free will" changes the arrow of psychological time at some level of the hierarchy of CDs. The first reduction to the opposite boundary of the CD means the "death" of self and the "re-incarnation" of the time-reversed self at the opposite boundary, at which the temporal distance between the tips of the CD increases in the opposite direction. The sequence of selves and time reversed selves is analogous to a cosmic expansion for the CD. The repeated birth and death of mental images could correspond to this sequence at the level of sub-selves.
  3. This allows to understand the relationship between subjective and geometric time, and how the arrow and flow of clock time (psychological time) emerge. The average distance between the tips of the CD increases as long as state function reductions occur repeatedly at the fixed boundary: the situation is analogous to diffusion. The localization of the contents of conscious experience to the boundary of the CD gives rise to the illusion that the universe is 3-dimensional. The possibility of memories, made possible by the hierarchy of CDs, demonstrates that this is not the case. Self is simply the sequence of state function reductions at the same boundary of the CD remaining fixed, and the lifetime of self is the total growth of the average temporal distance between the tips of the CD.
One can identify several rather abstract state function reductions selecting a sector of WCW.
  1. There are quantum measurements inducing localization in the moduli space of CDs with the passive boundary and the states at it fixed. In particular, a localization in the moduli characterizing the Lorentz transform of the upper tip of the CD would be measured. The measured moduli characterize also the analog of the symplectic form in M^4 strongly suggested by the twistor lift of TGD - that is, the rest system (time axis) and the spin quantization axis. Of course, also other kinds of reductions are possible.
  2. Also a localization to an extension of rationals defining the adeles should occur. Could the value of n=heff/h be an observable? The value of n for a given space-time surface at the active boundary of the CD could be identified as the order of the smallest Galois group containing all the Galois groups assignable to the 3-surfaces at the boundary. The superposition of space-time surfaces would not be an eigenstate of n at the active boundary unless localization occurs. It is not obvious whether this is consistent with a fixed value of n at the passive boundary.

    The measured value of n could be larger or smaller than the value of n at the passive boundary of the CD, but in the statistical sense n would increase by the analogy with diffusion on the half line defined by the non-negative integers. The distance from the origin unavoidably increases in the statistical sense. This would imply evolution as an increase of the maximal value of negentropy and the generation of quantum coherence in increasingly longer scales.

  3. A further abstract choice corresponds to the replacement of the roles of the active and passive boundaries of the CD, changing the arrow of clock time and corresponding to the death of self and re-incarnation as a time-reversed self.
Can one assume that these measurements reduce to measurements of the density matrix of either entangled system, as assumed in the earlier formulation of NMP, or should one allow both options? This question actually applies to all quantum measurements and leads to fundamental philosophical questions unavoidable in all consciousness theories.
  1. Do all measurements involve entanglement between the moduli or extensions of two CDs, reduced in the measurement of the density matrix? Non-diagonal entanglement would allow final states which are not eigenstates of the moduli or of n: this looks strange. This could also lead to an infinite regress, since it seems that one must assume an endless hierarchy of entangled CDs so that the reduction sequence would proceed from top to bottom. It looks more natural to regard a single CD as a sub-Universe.

    For instance, if a selection of the quantization axes of color hypercharge and isospin (localization in the twistor space of CP2) is involved, one would have an outcome corresponding to a quantum superposition of measurements with different color quantization axes!

    Going philosophical, one can also argue that the measurement of the density matrix is only a reaction to the environment and does not allow intentional free will.

  2. Can one assume that a mere localization in the moduli space or for the extension of rationals (producing an eigenstate of n) takes place for a fixed CD - a kind of self measurement possible even for an unentangled system? If there is entanglement in these degrees of freedom between two systems (say CDs), it would be reduced in these self measurements, but the outcome would not be an eigenstate of the density matrix. An interpretation as a realization of intention would be appropriate.
  3. If one allows both options, the interpretation would be that state function reduction as a measurement of the density matrix is only a reaction to the environment, whereas self-measurement represents a realization of intention.
  4. Self measurements would occur at a higher level, say as a selection of quantization axes, a localization in the moduli space of CD, or a selection of the extension of rationals. A possible general rule is that measurements at the space-time level are reactions realized as measurements of the density matrix, whereas a selection of a sector of WCW would be an intentional action. This is because, formally, the quantum states at the level of WCW are single-particle states as modes of a classical WCW spinor field. Entanglement between different sectors of WCW is not possible.
  5. If the selections of sectors of WCW at the active boundary of CD commute with the observables whose eigenstates appear at the passive boundary (briefly, passive observables) - meaning that time reversal commutes with them - they can occur repeatedly during the reduction sequence, and self as a generalized Zeno effect makes sense.

    If the selections of WCW sectors at the active boundary do not commute with the passive observables, then volition as a choice of a sector of WCW must change the arrow of time. Libet's findings show that conscious choice induces neural activity a fraction of a second before the conscious choice. This would imply the correspondences: "big" measurement changing the arrow of time - self-measurement at the level of WCW - intentional action, and "small" measurement - measurement at the space-time level - reaction.

    Self as a generalized Zeno effect makes sense only if there are active observables commuting with the passive observables. If the passive observables form a maximal set, new active observables commuting with them must emerge. An increase of the size of the extension of rationals might generate them by expanding the state space, so that self would survive only as long as it evolves. Self would die and re-incarnate when it could not generate any new observables, commuting with those assignable to the active boundary, to be measured. From personal experience I can say that ageing is basically the loss of the ability to make new choices. When all possible choices are made and all observables are measured or self-measured, it is time to start again.

    Otherwise there would be only a single unitary time evolution followed by a reduction to the opposite boundary. This makes sense only if the sequence of "big" reductions for sub-selves can give rise to the time flow experienced by self: the birth and death of mental images would give rise to the flow of time of self.

The overall conclusion is that the notion of WCW is necessary to understand intentional free will. One must distinguish between measurements at the WCW level as localizations, which do not involve a measurement of the density matrix, and measurements at the space-time level reducible to measurements of the density matrix (taking the density matrix to be a function of a product of commuting observables, one can measure all these observables simultaneously by measuring the density matrix). WCW localizations correspond to intentional actions - say a decision fixing the quantization axis for spin - and space-time reductions correspond to state function reductions at the level of matter. By reading Krishnamurti I learned that eastern philosophies make a sharp distinction between behavior as mere reactivity and behavior as intentional action, which is not reaction. Furthermore, death and reincarnation happen when self has made all choices.

For details see the chapter Negentropy Maximization Principle or the article Re-examination of the basic notions of TGD inspired theory of consciousness.

Anomalies of water as evidence for dark matter in TGD sense

The motivation for this brief comment came from a popular article telling that a new phase of water has been discovered in the temperature range 50-60 °C (see this). Also Gerald Pollack (see this) has introduced what he calls the fourth phase of water. For instance, in this phase water consists of hexagonal layers with effective H1.5O stoichiometry and the phase carries a high negative charge. This phase plays a key role in TGD based quantum biology. These two fourth phases of water could relate to each other if there exists a deeper mechanism explaining both of them and the various anomalies of water.

Martin Chaplin (see this) has an extensive web page about the various properties of water. The physics of water is full of anomalous features, and therefore the page is a treasure trove for anyone ready to give up the reductionistic dogma. The site discusses the structure, thermodynamics, and chemistry of water. Even academically dangerous topics such as water memory and homeopathy are discussed.

One learns from this site that the physics of water involves numerous anomalies (see this). The structural, dynamic, and thermodynamic anomalies form nested regions in the density-temperature plane. For liquid water at the atmospheric pressure of 1 bar the anomalies appear in the temperature interval 0-100 °C.

Hydrogen bonding creating a cohesion between water molecules distinguishes water from other substances. Hydrogen bonds induce the clustering of water molecules in liquid water. Hydrogen bonding is also highly relevant for the phase diagram of H2O coding for the various thermodynamical properties of water (see this). In biochemistry hydrogen bonding is involved with hydration. Bio-molecules - say amino-acids - are classified into hydrophobic, hydrophilic, and amphiphilic ones, and this characterization determines to a high extent the behavior of the molecule in a liquid water environment. Protein folding represents one example of this.

The anomalies are often thought to reduce to hydrogen bonding. Whether this is the case is not obvious to me, and this is why I find water such a fascinating substance.

TGD indeed suggests that water decomposes into ordinary water and dark water consisting of phases with effective Planck constant heff=n× h residing at magnetic flux tubes. Hydrogen bonds would be associated with short and rigid flux tubes, but for larger values of n the flux tubes would be longer by a factor n and have a string tension behaving as 1/n, so that they would be softer and could be loopy. The portion of water molecules connected by flux tubes carrying dark matter could be identified as dark water, and the rest would be ordinary water. This model allows one to understand various anomalies. The anomalies are largest at the physiological temperature 37 °C, which conforms with the vision about the role of dark matter and dark water in living matter, since the fraction of dark water would be highest at this temperature. The anomalies discussed are the density anomalies, the anomalies of specific heat and compressibility, and the Mpemba effect. I discussed these anomalies already a decade ago, but the recent view about dark matter allows much more detailed modelling.

For details see the chapter Dark Nuclear Physics and Condensed Matter or the article The anomalies of water as evidence for the existence of dark matter in TGD sense.

About number theoretic aspects of NMP

There is something in NMP that I still do not understand: every time I begin to explain what NMP is I have this unpleasant gut feeling. I have the habit of making a fresh start every time rather than pretending that everything is crystal clear. I have indeed considered very many variants of NMP. In the following I will consider two variants of NMP. The second variant reduces to pure number theory in the adelic framework inspired by the number theoretic vision. It is certainly the simplest one since it says nothing explicit about negentropy. It says essentially the same as the "strong form of NMP" when the reduction occurs to an eigenspace of the density matrix.

I will not consider zero energy ontology (ZEO) related aspects and the aspects related to the hierarchy of subsystems and selves since I dare regard these as "engineering" aspects.

What NMP should say?


  1. NMP takes in some sense the role of God, and the basic question is whether we live in the best possible world or not. Theologians ask why God allows sin. I ask whether NMP always demands an increase of negentropy or whether it also allows a reduction of negentropy - and why. Could NMP lead to an increase of negentropy only in the statistical sense - evolution? Could it only give the potential for gaining a larger negentropy?

    These questions have turned out to be highly non-trivial. My personal experience is that we do not live in the best possible world, and this experience plus simplicity motivates the proposal to be discussed.

  2. Is NMP a separate principle or could NMP be reduced to mere number theory? For the latter option state function reduction would occur to an eigenstate/eigenspace of the density matrix only if the corresponding eigenvalue and eigenstate/eigenspace are expressible using numbers in the extension of rationals defining the adele considered. A phase transition to an extension of the extension containing these coefficients would be required to make the reduction possible. A step in number theoretic evolution would occur. Also an entanglement of the measured state pairs with those of a measuring system in an extension containing the extension of the extension would make the reduction possible. Negentropy would be reduced, but the higher-D extension would provide the potential for more negentropic entanglement. I will consider this option in the following.
  3. If one has a higher-D eigenspace of the density matrix, the p-adic negentropy is largest for the entire subspace, and the sum of the real and p-adic negentropies vanishes for all of them. For negentropy identified as the total p-adic negentropy the strong form of NMP would select the entire sub-space, and NMP would indeed say something explicit about negentropy.

The notion of entanglement negentropy

  1. Number theoretic universality demands that the density matrix and entanglement coefficients are numbers in an algebraic extension of rationals extended by adding a root of e. The induced p-adic extensions are finite-D, and one obtains the adele assigned to the extension of rationals. Real physics is replaced by adelic physics.
  2. The same entanglement coefficients in the extension of rationals can be seen as numbers in both the real and the various p-adic sectors. In the real sector one can define the real entropy and in the various p-adic sectors the (real valued) p-adic negentropies.
  3. Question: should one define the total entanglement negentropy as
    1. the sum of the p-adic negentropies, or
    2. the difference of the sum of the p-adic negentropies and the real entropy? For rational entanglement probabilities the real entropy equals the sum of the p-adic negentropies, so that the total negentropy would vanish. For extensions this negentropy would be positive under natural additional conditions, as shown earlier.
    Both options can be considered.
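The stated equality - for rational probabilities the real entropy equals the sum of the p-adic negentropies, so that the difference in option (b) vanishes - is the adelic product formula |x|·Π_p |x|_p = 1 applied inside the Shannon formula. A minimal Python check (the helper names are mine, not from the text):

```python
from fractions import Fraction
from math import log

def padic_valuation(q, p):
    """v_p of a positive rational q, so that |q|_p = p**(-v_p)."""
    v, m, n = 0, q.numerator, q.denominator
    while m % p == 0: m //= p; v += 1
    while n % p == 0: n //= p; v -= 1
    return v

def padic_negentropy(probs, p):
    """Shannon formula with log p_k replaced by -log |p_k|_p."""
    return -sum(float(q) * padic_valuation(q, p) * log(p) for q in probs)

def primes_involved(probs):
    """All primes dividing some numerator or denominator of the probabilities."""
    ps, d = set(), 2
    rest = [m for q in probs for m in (q.numerator, q.denominator)]
    while any(m > 1 for m in rest):
        if any(m % d == 0 for m in rest):
            ps.add(d)
            rest = [m // d if m % d == 0 else m for m in rest]
        else:
            d += 1
    return sorted(ps)

probs = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)]
real_entropy = -sum(float(q) * log(float(q)) for q in probs)
total_padic  = sum(padic_negentropy(probs, p) for p in primes_involved(probs))
print(real_entropy, total_padic)   # both ≈ 1.0114: option (b) gives zero
```

For algebraic (non-rational) entanglement probabilities the product formula no longer forces this cancellation, which is what leaves room for a positive total negentropy in extensions.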

State function reduction as universal measurement interaction between any two systems

  1. The basic vision is that state function reductions occur all the time for all kinds of matter and involve a measurement of the density matrix ρ characterizing the entanglement of the system with its environment, leading to a sub-space for which the states have the same eigenvalue of the density matrix. What this measurement really is, is not at all clear.
  2. The measurement of the density matrix means a diagonalization of the density matrix and a selection of an eigenstate or eigenspace. Diagonalization is possible without going outside the extension only if the entanglement probabilities and the coefficients of the states belong to the original extension defining the adele. This need not be the case!

    More precisely, the eigenvalues of the density matrix as roots of an N:th order polynomial with coefficients in the extension belong in general to an N-D extension of the extension. The same holds true for the coefficients of the eigenstates in the original basis. Consider as an example the eigenvalues and eigenstates of a rational valued N× N entanglement matrix: the eigenvalues are roots of a polynomial of degree N and in general algebraic numbers.

    Question: Is state function reduction number theoretically forbidden in the generic case? Could entanglement be stable purely number theoretically? Could NMP reduce to just this number theoretic principle, saying nothing explicit about negentropy? Could a phase transition increasing the dimension of the extension but keeping the entanglement coefficients unaffected make the reduction possible? Could entanglement with an external system in a higher-D extension - an intelligent observer - make the reduction possible?

  3. There is a further delicacy involved. The eigenspace of the density matrix can be N-dimensional if the density matrix has an N-fold degenerate eigenvalue with all N entanglement probabilities identical. For a unitary entanglement matrix the density matrix is indeed proportional to the N×N unit matrix. This kind of NE is stable also algebraically if the coefficients of the eigenstates do not belong to the extension. If they do belong to it, the question is whether NMP allows a reduction to a subspace of an eigenspace or whether only the entire subspace is allowed.

    The total negentropy, identified as the sum of the real and p-adic negentropies, would vanish for any eigenspace and would not distinguish between the sub-spaces. The identification of negentropy as the p-adic negentropy would distinguish between the sub-spaces, and NMP in its strong form would not allow a reduction to sub-spaces. Number theoretic NMP would thus also say something about negentropy.

    I have also considered the possibility of a weak NMP. Any subspace could be selected and negentropy would be reduced. The worst thing to do in this case would be a selection of a 1-D subspace: entanglement would be totally lost and the system would be totally isolated from the rest of the world. I have proposed that this possibility corresponds to the fact that we do not seem to live in the best possible world.
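The number theoretic obstruction in item 2 above can be made concrete already for N=2: the eigenvalues of a rational 2×2 density matrix solve a quadratic with rational coefficients, and they stay rational only when the discriminant is a rational square. A small Python illustration (the matrix entries are an arbitrary example of mine, not from the text):

```python
from fractions import Fraction
from math import isqrt

# A rational symmetric "density matrix" rho = [[a, b], [b, 1-a]] with trace 1.
a, b = Fraction(2, 3), Fraction(1, 4)
trace, det = Fraction(1), a * (1 - a) - b * b

# Eigenvalues solve x**2 - trace*x + det = 0; they are rational
# exactly when the discriminant is the square of a rational.
disc = trace * trace - 4 * det

def is_rational_square(q):
    """True iff the non-negative rational q is a square in Q."""
    m, n = q.numerator, q.denominator
    return isqrt(m) ** 2 == m and isqrt(n) ** 2 == n

print(disc, is_rational_square(disc))   # 13/36 False: the eigenvalues leave Q
```

Here the eigenvalues (1 ± √13/6)/2 lie in the quadratic extension Q(√13), so a reduction respecting number theoretic universality would first require extending the number field, exactly as argued above.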

NMP as a purely number theoretic constraint?

Let us consider the possibility that NMP reduces to the number theoretic condition tending to stabilize generic entanglement.

  1. The density matrix characterizing entanglement with the environment is a universal observable. Reduction can occur to an eigenspace of the density matrix. For rational entanglement probabilities the total negentropy would vanish, so that NMP formulated in terms of negentropy cannot say anything about the situation. This suggests that NMP quite generally does not refer directly to negentropy.
  2. The condition that the eigenstates and eigenvalues are in the extension of rationals defining the adelic physics poses a restriction: the reduction could occur only if these numbers are in the original extension. Also rational entanglement would be stable in the generic case, and a phase transition to a higher algebraic extension would be required for state function reduction to occur. Standard quantum measurement theory would be obtained when the coefficients of the eigenstates and the entanglement probabilities are in the original extension.
  3. If this is not the case, a phase transition to an extension of the extension containing the needed N-D extension could save the situation. This would be a step in number theoretic evolution. The reduction would lead to a reduction of negentropy but would give the potential for gaining a larger entanglement negentropy. Evolution would proceed through catastrophes giving the potential for more negentropic entanglement! This seems to be the case!

    Alternatively, the state pairs of the system + complement could be entangled with an observer in an extension of rationals containing the needed N-D extension of the extension, and a state function reduction possible for the observer would induce the reduction in the original system. This would mean a fusion with a self at a higher level of the evolutionary hierarchy - a kind of enlightenment. This would give an active role to the intelligent observer (with intelligence characterized by the dimension of the extension of rationals). The intelligent observer would reduce the negentropy, and thus NMP would not hold true universally.

    Since a higher-D extension allows a higher negentropy and in the generic case NE is stable, one might hope that NMP holds true statistically (for rationals the total negentropy as the sum of the real and total p-adic negentropies vanishes).

    The Universe would evolve rather than being a paradise: the number theoretic NMP would allow a temporary reduction of negentropy but provide the potential for a larger negentropy, and the increase of negentropy in the statistical sense is highly suggestive. To me this option looks like the simplest and most realistic one.

  4. If negentropy is identified as the total p-adic negentropy rather than the sum of the real and p-adic negentropies, the strong form of NMP says something explicit about negentropy: the reduction would take place to the entire subspace having the largest p-adic negentropy.

For background see the chapter Negentropy Maximization Principle or the article About number theoretic aspects of NMP.
