What's new in


Note: Newest contributions are at the top!

Year 2013

Negentropic entanglement, NMP, braiding and TQC

Negentropic entanglement, for which the number theoretic entropy characterized by a p-adic prime is negative so that the entanglement carries information, plays a key role in the TGD inspired theory of consciousness and in quantum biology.

  1. The key feature of negentropic entanglement is that the density matrix is proportional to the unit matrix, so that the assumption that state function reduction corresponds to a measurement of the density matrix does not imply reduction to a one-dimensional sub-space. This special kind of degenerate density matrix emerges naturally for the hierarchy heff=nh interpreted in terms of a hierarchy of dark matter phases. I have already earlier considered explicit realizations of negentropic entanglement assuming that the entanglement matrix is invariant under the group of unitary or orthogonal transformations (also subgroups of the unitary group can be considered - say the symplectic group). One can however consider much more general options, and this leads to a connection with topological quantum computation (TQC).

  2. An entanglement matrix E equal to a 1/√n factor times a unitary matrix U (as a special case an orthogonal matrix O) defines a density matrix ρ=UU†/n= Id_n/n, which is group invariant. One has negentropic entanglement (NE) respected by state function reduction if Negentropy Maximization Principle (NMP) is assumed. This would give a huge number of negentropically entangled states providing a representation for some unitary group or its subgroup (such as the symplectic group). In principle any unitary representation of any Lie group would allow a representation in terms of NE.

  3. In the vision of physics as generalized number theory, a natural condition is that the matrix elements of E belong to the algebraic extension of p-adic numbers used, so that discrete algebraic subgroups of the unitary or orthogonal group are selected. This realizes the evolutionary hierarchy as a hierarchy of p-adic number fields and their algebraic extensions, and one can imagine that the evolution of cognition proceeds by the generation of negentropically entangled systems with increasing algebraic dimension, reflected as an increase of the largest prime power dividing n and defining the p-adic prime in question.

  4. One fascinating implication is the ability of the TGD Universe to emulate itself like a Turing machine: the unitary S-matrix codes for scattering amplitudes and therefore for physics, and a negentropically entangled subsystem could represent a sub-matrix of the S-matrix as rules representing "the laws of physics" in the approximation that the world corresponds to an n-dimensional Hilbert space. Also the limit n → ∞ makes sense, especially so in the p-adic context, where real infinity can correspond to a finite number in the sense of the p-adic norm. Here also dimensions n given as products of powers of infinite primes can be formally considered.
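Point 2 above is easy to check numerically. The following sketch (assuming numpy; the dimension n=4 is an arbitrary illustrative choice) draws a random unitary U, forms E = U/√n and verifies that the resulting density matrix is proportional to the unit matrix:

```python
import numpy as np

n = 4
# Draw a random unitary U via QR decomposition of a complex Gaussian matrix.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

# Entanglement matrix E = U / sqrt(n): all entries have modulus-squared 1/n.
E = U / np.sqrt(n)

# Reduced density matrix of either subsystem: rho = E E^dagger.
rho = E @ E.conj().T

assert np.allclose(rho, np.eye(n) / n)  # proportional to the unit matrix
```

Since every eigenvalue of ρ equals 1/n, a measurement of the density matrix gives no preferred 1-D sub-space to reduce to, which is the stability property exploited above.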

One can consider various restrictions on E.

  1. In the 2-particle case the stronger condition that E is group invariant implies that the unitary matrix is the identity matrix apart from an overall phase factor: U= exp(iφ)Id. In the orthogonal case the phase factor is ±1. For n-particle NE one can consider group invariant states by using the n-dimensional permutation tensor ε_{i1...in}.
  2. One can give up the group invariance of E and consider only the weaker condition that particle exchange is represented as transposition of the entanglement matrix: Cij → Cji. Symmetry/antisymmetry under particle exchange would correspond to Cji=ε Cij, ε=±1. This would give OO^T= O^2=Id in the orthogonal case and UU*= Id in the unitary case.

    In the unitary case particle exchange could also be identified as hermitian conjugation Cij → Cji*, and one would also now have U^2=Id. Euclidian gamma matrices γ_i define unitary and hermitian generators of a Clifford algebra having dimension 2^{2m} for n=2m and n=2m+1. It is relatively easy to verify that the squares of the completely anti-symmetrized products of k gamma matrices representing the exterior algebra, normalized by the factor 1/(k!)^{1/2}, are equal to the unit matrix. For k=n the antisymmetrized product gives essentially the permutation symbol times the product ∏_k γ_k. In this manner one can construct entanglement matrices representing negentropic bi-partite entanglement.

  3. The possibility of taking tensor products ε_{ij..k} γ_i⊗ γ_j⊗..⊗ γ_k of k gamma matrices means that one also has a co-product of gamma matrices. What is interesting is that quantum groups important in topological quantum computation, as well as the Yangian algebra associated with the twistor Grassmann approach to scattering amplitudes, possess a co-algebra structure. TGD also leads to the proposal that this structure plays a central role in the construction of scattering amplitudes. Physically the co-product is the time reversal of the product representing fusion of particles.
  4. One can go even further. In 2-dimensional QFTs braid statistics replaces ordinary statistics. The natural question is what braid statistics could correspond to at the level of NE. The braiding matrix is unitary, so that it defines NE. Braiding as a flow replaces the particle exchange and lifts the permutation group to the braid group serving as its infinite covering. The allowed unitary matrices representing braiding in the tensor product are constructed using the braiding matrix R representing the exchange of two braid strands. The well-known Yang-Baxter equation for R, defined in the tensor product as an invertible element (see this), expresses the associativity of the braiding operation. Concretely it states that the two braidings leading from 123 to 321 produce the same result. Entanglement matrices constructed from R as the basic operation would correspond to unitary matrices providing a representation for braids, and each braid would give rise to one particular NE.

    This would give a direct connection with TQC, for which the entanglement matrix defines a density matrix proportional to the n× n unit matrix: R defines the basic gate (see this). Braids would provide a concrete representation for NE giving rise to "Akashic records". I have indeed proposed the interpretation of braidings as fundamental memory representations long before the vision about Akashic records. This kind of entanglement matrix need not represent only time-like entanglement but can also be associated with space-like entanglement. The connection with braiding matrices supports the view that magnetic flux tubes are carriers of negentropically entangled matter and also suggests that this kind of entanglement between - say - DNA and nuclear or cell membrane gives rise to TQC.
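As a concrete illustration of the last two points, one can verify the braid relation for a unitary R-matrix numerically. The matrix below is the standard unitary "Bell" solution of the braid relation familiar from the TQC literature; it is used here only as an example, not as an R-matrix singled out by TGD:

```python
import numpy as np

# A unitary braiding matrix acting on C^2 (x) C^2 (a "Bell" gate):
# a known solution of the braid relation, used only for illustration.
R = np.array([[ 1, 0, 0, 1],
              [ 0, 1, -1, 0],
              [ 0, 1, 1, 0],
              [-1, 0, 0, 1]]) / np.sqrt(2)

I2 = np.eye(2)
R12 = np.kron(R, I2)   # R acting on strands 1 and 2 of three strands
R23 = np.kron(I2, R)   # R acting on strands 2 and 3

# Braid relation (Yang-Baxter equation without spectral parameter):
# both sides realize the same braiding taking 123 to 321.
assert np.allclose(R12 @ R23 @ R12, R23 @ R12 @ R23)

# R is unitary, so any braid word yields a unitary entanglement matrix.
assert np.allclose(R @ R.conj().T, np.eye(4))
```

Because every braid word built from R and its inverse is unitary, each braid indeed defines an entanglement matrix of the type discussed above.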

Some comments concerning the covering space degrees of freedom associated with heff=nh versus the ordinary degrees of freedom are in order.
  1. Negentropic entanglement with n entangled states would correspond naturally to heff=nh and is assigned with "many-particle" states, which can be localized to the sheets of the covering, but one cannot exclude similar entanglement in other degrees of freedom. Group invariance leaves only group singlets, and states which are not singlets are allowed only in special cases. For instance, for SU(2) the state |j,m> = |1,0> represented as a 2-particle state of two spin 1/2 particles is negentropically entangled, whereas the states |j,m> = |1,±1> are pure.
  2. Negentropic entanglement associated with heff=nh could factorize as a tensor product from other degrees of freedom. Negentropic entanglement would be localised to the covering space degrees of freedom, but there would be entropic entanglement in the ordinary degrees of freedom - say spin. The large value of heff would however scale up the quantum coherence time and length also in the ordinary degrees of freedom. For the entanglement matrix this would correspond to a direct sum proportional to unitary matrices, so that also the density matrix would be a direct sum of matrices p_n E_n= p_n Id_n/n, ∑ p_n=1, corresponding to various values of the "other quantum numbers", and state function reduction could take place to any subspace in the decomposition. Also more general entanglement matrices, for which the dimensions of the direct summands vary, are possible.
  3. One can argue that NMP does not allow halting of quantum computation. The counter argument would be that halting is not needed if it is indeed possible to deduce the structure of the negentropically entangled state by an interaction-free quantum measurement replacing the state function reduction with "externalised" state function reduction. One could speak of interaction-free TQC. This TQC would be the reading of "Akashic records". NE should be able to induce a conscious experience about the outcome of TQC, which in the ordinary framework is represented by the reduction probabilities for the various possible outcomes.

    One could also counter argue that NMP allows the transfer of NE from the system so that TQC halts. NMP allows this if some other system receives at least the negentropy contained by the NE. The interpretation would be as the increase of information obtained by a conscious observer about the outcome of the halted quantum computation. I am not able to imagine how this could happen in detail.
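The direct-sum form of the density matrix described in point 2 of the list above can be written out explicitly. In the sketch below the block dimensions and probabilities are arbitrary illustrative choices:

```python
import numpy as np

# Illustrative dimensions of the direct summands and their probabilities p_n.
dims = [2, 3, 4]
probs = [0.5, 0.3, 0.2]          # sum of p_n equals 1

# Density matrix as a direct sum of blocks p_n * Id_n / n.
rho = np.zeros((sum(dims), sum(dims)))
i = 0
for p, n in zip(probs, dims):
    rho[i:i + n, i:i + n] = p * np.eye(n) / n
    i += n

assert np.isclose(np.trace(rho), 1.0)
# Each block alone is proportional to a unit matrix, so state function
# reduction can select any summand and still leave negentropic entanglement
# inside it.
```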

For details and background see the section "Updates since 2012" of chapter "Negentropy Maximization Principle" and the article "Negentropic entanglement, NMP, braiding and topological quantum computation".

NMP and intelligence

Alexander Wissner-Gross, a physicist at Harvard University and the Massachusetts Institute of Technology, and Cameron Freer, a mathematician at the University of Hawaii at Manoa, have developed a theory that they say describes many intelligent or cognitive behaviors, such as upright walking and tool use (see this and this). The basic idea of the theory is that an intelligent system collects information about a large number of histories and preserves it. Thermodynamically this means large entropy, so that the evolution of intelligence would, rather paradoxically, be the evolution of highly entropic systems. According to the standard view about Shannon entropy, the transformation of entropy to information (or the reduction of entropy to zero) requires a process selecting one of the instances of a thermal ensemble with a large number of degenerate states, and one can wonder what this selection process is. This sounds almost like a paradox unless one accepts the existence of this process. I have considered the core of this almost-paradox in the TGD framework already earlier.

According to the popular article (see this) the model does not require explicit specification of intelligent behavior, and the intelligent behavior relies on "causal entropic forces" (here one can counter-argue that the selection process is necessary if one wants information gain). The theory requires that the system is able to collect information and predict future histories very quickly.

The prediction of future histories is one of the basic characteristics of life in the TGD Universe, made possible by zero energy ontology (ZEO) predicting that the thermodynamical arrow of geometric time is opposite for the quantum jumps reducing the zero energy state at the upper and lower boundaries of the causal diamond (CD) respectively. This prediction means quite a dramatic deviation from standard thermodynamics, but is consistent with the notion of syntropy introduced by the Italian theoretical physicist Fantappie more than half a century ago, as well as with the reversed time arrow of dissipation appearing often in living matter.

The hierarchy of Planck constants makes possible negentropic entanglement and genuine information represented as negentropic entanglement, in which the superposed state pairs have an interpretation as instances ai↔ bi of a rule A↔ B: apart from a possible phase the entanglement coefficients have the same value 1/√n, where n=heff/h defines the value of the effective Planck constant and the dimension of the effective covering of the imbedding space. This picture generalizes also to the case of multipartite entanglement, but predicts similar entanglement for all divisions of the system into two parts. There are however still some questions which are not completely settled and leave some room for imagination.

  1. Negentropic entanglement is possible in the discrete degrees of freedom assignable to the n-fold covering of the imbedding space, which allow a formal description of the situation. For heff/h=n one can introduce SU(n) as a dynamical symmetry group and require that n-particle states are singlets under SU(n). This gives rise to n-particle states constructed by contracting the n-dimensional permutation symbol with states assignable to the factors of the covering. The spin-statistics connection might produce problems - at least it is non-trivial - since one possible interpretation is that the states carry fractional quantum numbers - in particular fractional fermion number and charges.

    These states generalize the notion of N-atom proposed earlier as the emergence of symbols and "sex" at the molecular level (see this). "Molecular sex" means that all states can be seen as composites of two states with opposite fractional SU(n) quantum numbers (this decomposition need not be unique!). This brings to mind the monogamy theorem for ordinary entanglement stating that maximal entanglement implies this kind of decomposition into two parts.

  2. Is negentropic entanglement possible only in the new covering degrees of freedom, or is it possible also in the more familiar angular momentum, electroweak, and color degrees of freedom?
    1. One can imagine that also states that are singlets with respect to the rotation group SO(3) and its covering SU(2) (2-particle singlet states constructed from two spin 1 states and the spin singlet constructed from two fermions) could carry negentropic entanglement. The latter states are especially interesting biologically.
    2. In the TGD framework all space-time surfaces can be seen locally as at least 2-fold coverings of M4, since boundary conditions do not seem to allow 3-surfaces with spatial boundaries, so that the finiteness of the space-time sheet requires a covering structure in M4. This forces one to ask whether this double covering could provide a geometric correlate for fermionic spin 1/2, as suggested by quantum classical correspondence taken to the extreme. Fermions are indeed fundamental particles in the TGD framework, and it would be nice if also 2-sheeted coverings would define fundamental building bricks of space-time.
    3. The color group SU(3), for which singlets can be constructed from color triplets, can also be considered. I have even been wondering whether quark color could actually correspond to a 3-fold or 6-fold (color isospin corresponds to SU(2)) covering, so that quarks would be dark leptons corresponding to n=3 coverings of CP2 and to fractionization of hypercharge and electromagnetic charge. The motivation came from the inclusions of hyper-finite factors of type II1 labelled by integer n≥ 3. If this were the case, then only the second H-chirality would be realized and leptonic spinors would be enough. What this would mean from the point of view of separate B and L conservation remains an open and interesting question. This kind of picture would allow one to consider an extremely simple genesis of matter from right-handed neutrinos only (see this).

      There are two objections against this naive picture. First, the fractionization associated with heff should be the same for all quantum numbers, so that different fractionizations for color isospin and color hypercharge do not seem to be possible. One can of course ask whether the different quantum numbers could be fractionized independently and what this could mean geometrically. Second, a really lethal-looking objection is that fractional quark charges involve also a shift of the em charge, so that the neutrino does not remain neutral: it becomes a counterpart of the u quark.

Negentropy Maximization Principle (NMP) also resolves the above mentioned almost-paradox relating entropy to intelligence. I have proposed an analogous principle, but relying on the generation of negentropic entanglement and replacing entropy with a number theoretic negentropy obeying a modification of the Shannon formula involving the p-adic norm in the logarithm log(|p|_p) of the probability. The formula makes sense for probabilities which are rational or belong to an algebraic extension of rational numbers, and requires that the system is in the intersection of the real and p-adic worlds. Dark matter with an integer value of Planck constant heff=nh predicts rational entanglement probabilities: their values are simply p_i=1/n, since the entanglement coefficients define a diagonal matrix proportional to the unit matrix. Negentropic entanglement makes sense also for n-particle systems.

Negentropic entanglement therefore always corresponds to an n× n density matrix proportional to the unit matrix: this means maximal entanglement and maximal number theoretic entanglement negentropy for two entangled systems with n entangled states. n corresponds to the Planck constant heff= n×h, so that a connection with the hierarchy of Planck constants is also obtained. The p-adic prime is the one whose power is the largest prime power divisor of n. Individually the negentropically entangled systems would be very entropic, since there would be n energy-degenerate states with the same Boltzmann weight. Negentropic entanglement changes the situation: thermodynamics of course does not apply anymore. Hence TGD produces the same prediction as the thermodynamical model but avoids the almost-paradox.
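A minimal sketch of the number theoretic entropy for the uniform probabilities p_i = 1/n makes the sign flip explicit (the function names below are my own, chosen for illustration):

```python
import math

def p_adic_valuation(n, p):
    """Largest k such that p**k divides n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def number_theoretic_entropy(n, p):
    """S_p = -sum_i p_i * log(|p_i|_p) for uniform probabilities p_i = 1/n.

    Since |1/n|_p = p**v_p(n), this reduces to S_p = -v_p(n) * log(p):
    non-positive, i.e. the entanglement carries information whenever
    p divides n (negentropy -S_p = log of the power of p dividing n).
    """
    return -p_adic_valuation(n, p) * math.log(p)

n = 12  # 12 = 2^2 * 3
print(number_theoretic_entropy(n, 2))  # -log(4), about -1.386
print(number_theoretic_entropy(n, 3))  # -log(3), about -1.099
# Negentropy is largest for p = 2, whose power 2^2 = 4 is the largest
# prime power dividing n, in accordance with the text above.
```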

For details and background see the section "Updates since 2012" of chapter "Negentropy Maximization Principle".

More precise formulation of NMP

Negentropy Maximization Principle (NMP) is assumed to be the variational principle telling what can happen in quantum jump: it says that the information content of conscious experience for the entire system is maximized. In zero energy ontology (ZEO) the definition of NMP is far from trivial, and the recent progress - as I believe - in the understanding of the structure of quantum jump forces one to check carefully the details related to NMP. A very intimate connection between quantum criticality, life as something in the intersection of realities and p-adicities, the hierarchy of effective values of Planck constant, negentropic entanglement, and the p-adic view about cognition emerges. One ends up also with an argument for why the p-adic sector is necessary if one wants to speak about conscious information.

The anatomy of quantum jump in zero energy ontology (ZEO)

Zero energy ontology emerged around 2005 and has had profound consequences for the understanding of quantum TGD. The basic implication is that state function reductions occur at the opposite light-like boundaries of causal diamonds (CDs) forming a hierarchy, and produce zero energy states with opposite arrows of imbedding space time. Also concerning the identification of quantum jump as a moment of consciousness, ZEO encourages rather far reaching conclusions. In ZEO the only difference between motor action and sensory representation on one hand, and intention and cognitive representation on the other hand, is that the arrows of imbedding space time are opposite for them. Furthermore, sensory perception followed by motor action corresponds to a basic structure in the sequence of state function reductions, and it seems that these processes occur fractally for CDs of various size scales.

  1. State function reduction can be performed at either boundary of CD but not at both simultaneously. State function reduction at either boundary is equivalent to state preparation giving rise to a state with well defined quantum numbers (particle numbers, charges, four-momentum, etc...) at this boundary of CD. At the other boundary single particle quantum numbers are not well defined, although the total conserved quantum numbers at the boundaries are opposite by the zero energy property for every pair of positive and negative energy states in the superposition. State pairs with different total energy, fermion number, etc. at the other boundary are possible: for instance, coherent states of a super-conductor, for which fermion number is ill defined, are possible in zero energy ontology and do not break the super-selection rules.
  2. The basic objects coding for physics are the U-matrix, M-matrices and S-matrix. M-matrices correspond to orthogonal rows of the unitary U-matrix between zero energy states, and are expressible as products of a hermitian square root of a density matrix and a unitary S-matrix, which more or less corresponds to the ordinary S-matrix. One can say that quantum theory is formally a square root of thermodynamics. The thermodynamics in question would however relate more naturally to NMP than to the second law, which at the ensemble level and for ordinary entanglement can be seen as a consequence of NMP.

    The non-triviality of the M-matrix requires that for a given state reduced at, say, the "lower" boundary of CD there is an entire distribution of states at the "upper" boundary (a given initial state can lead to a continuum of final states). Even more, all size scales of CDs are possible, since only the position of the "lower" boundary of CD is localized in the quantum jump, whereas the location of the upper boundary can vary, so that one has a distribution over CDs with different size scales and over their Lorentz boosts and translates.

  3. The quantum arrow of time follows from the asymmetry between the positive and negative energy parts of the state: one is prepared and the other corresponds to the superposition of the final states resulting when interactions are turned on. What is remarkable is that the arrow of time, at least at the imbedding space level, changes direction when the quantum jump occurs to the opposite boundary.

    This brings strongly to mind the old proposal of Fantappie that in living matter the arrow of time is not fixed, and that entropy and its diametric opposite syntropy apply to the two arrows of imbedding space time. The arrow of subjective time assignable to the second law would hold true, but the increase of syntropy would be basically a reflection of the second law, since only the arrow of geometric time at the imbedding space level has changed sign. The arrow of geometric time at the space-time level, which a conscious observer experiences directly, could be always the same if quantum classical correspondence holds true in the sense that the arrow of time for zero energy states corresponds to the arrow of time for preferred extremals. The failure of strict determinism, making possible phenomena analogous to multifurcations, makes this possible.

  4. This picture differs radically from the standard view, and if quantum jump represents a fundamental algorithm, this variation of the arrow of geometric time from quantum jump to quantum jump should manifest itself in the functioning of brain and living organisms. The basic building brick in the functioning of the brain is the formation of a sensory representation followed by motor action. These processes look very much like temporal mirror images of each other, just as the state function reductions to the opposite boundaries of CD do. The fundamental process could correspond to a sequence of these two kinds of state function reductions for opposite boundaries of CDs, maybe occurring independently for CDs of different size scales in a "many-particle" state defined by a union of CDs.
How could the formation of cognitive and sensory representations relate to quantum jump?
  1. ZEO allows quantum jumps between different number fields, so that p-adic cognitive representations can be formed and intentional actions realized. How these quantum jumps are realized at the level of generalized Feynman diagrams is a non-trivial question: one possibility suggested by the notion of adele, combining reals and various p-adic number fields to a larger structure, is that the lines and vertices of generalized Feynman diagrams can correspond to different number fields.

    The formation of a cognitive representation could correspond to a quantum jump in which a real space-time sheet identified as a preferred extremal is mapped to its p-adic counterpart, or to a superposition of them with the property that the discretized versions of all p-adic counterparts are identical. In the latter case the chart map of the real preferred extremal would be quantal and correspond to a delocalized state in WCW. The p-adic chart mappings would not take place automatically but with probabilities determined by the number theoretically universal U-matrix.

  2. A similar consideration applies to intentional actions realized as real chart maps of a p-adically realized intention. The natural interpretation of the process is as a time reversal of the cognitive map. The cognitive map would be generated from a real sensory representation, and intentional action would transform the time reversed cognitive map to a real "motor" action identifiable as the time reversal of sensory perception. This would occur in various length scales in a fractal manner.
  3. The formation of superpositions of preferred extremals associated with discrete p-adic chart maps of real preferred extremals could be interpreted as an abstraction process. A similar abstraction could take place also in the mapping of a p-adic space-time surface to a superposition of real preferred extremals representing intentional action. The U-matrix should give also the probability amplitudes for these processes, and the intuitive idea is that the larger the number of common rational and algebraic points of the real and p-adic surfaces, the higher the probability: the first guess is that the amplitude is proportional to the number of common points. On the other hand, a large number of common points means high measurement resolution, so that the number of different surfaces in the superposition tends to be smaller.
  4. One should not make any unnecessary assumptions about the order of the various kinds of quantum jumps. For the most general option, real-to-p-adic and p-adic-to-real quantum jumps can follow any quantum jump, and state function reductions to the opposite boundaries of CD can also occur at any time in any length scale. Also the resolution scale assignable to the cognitive representation should be determined probabilistically. Quantal probabilities should therefore apply to all aspects of quantum jump, and no ad hoc assumptions should be made. Very probably internal consistency allows only very few alternative scenarios. The assumption that the cascade beginning from a given CD continues downwards until it stops due to the emergence of negentropic entanglement looks like a rather natural constraint.

What happens in single state function reduction?

State function reduction is a measurement of the density matrix. The condition that a measurement of the density matrix takes place implies standard measurement theory in both the real and p-adic sectors: the system ends up in an eigen-space of the density matrix. NMP is a stronger principle on the real side and implies state function reduction to a 1-D subspace - an eigenstate of the density matrix.

The resulting N-dimensional eigen-space however has rational entanglement probabilities p=1/N, so that one can say that it lies in the intersection of realities and p-adicities. If the number theoretic variant of entanglement entropy is used as a measure for the amount of entropy carried by the entanglement, rather than by either entangled system, the state carries genuine information and is stable with respect to NMP if the p-adic prime p divides N. NMP allows only a single p-adic prime for the real → p-adic transition: the power of this prime is the largest power of prime appearing in the prime decomposition of N. Degeneracy means also criticality, so that ordinary quantum measurement theory for the density matrix favors criticality and NMP fixes the p-adic prime uniquely.
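The unique choice of the p-adic prime can be sketched as a small function: NMP picks the prime whose power is the largest prime-power divisor of N, since with uniform probabilities 1/N that choice maximizes the number-theoretic negentropy v_p(N)·log(p). The function name is my own:

```python
import math

def nmp_prime(N):
    """Prime p whose power p^k is the largest prime-power divisor of N,
    i.e. the prime maximizing the negentropy k * log(p) for p_i = 1/N."""
    best_p, best_neg = None, -1.0
    n, p = N, 2
    while p * p <= n:
        if n % p == 0:
            k = 0
            while n % p == 0:
                n //= p
                k += 1
            neg = k * math.log(p)   # negentropy contribution of prime p
            if neg > best_neg:
                best_p, best_neg = p, neg
        p += 1
    if n > 1:                       # a single prime factor remains
        if math.log(n) > best_neg:
            best_p = n
    return best_p

print(nmp_prime(12))   # 2, since 2^2 = 4 > 3
print(nmp_prime(18))   # 3, since 3^2 = 9 > 2
```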

If one - contrary to the above conclusion - assumes that NMP holds true in the entire p-adic sector, NMP gives rise in the p-adic sector to a reduction of the negentropy in state function reduction if the original situation is negentropic and the eigen-spaces of the density matrix are 1-dimensional. This situation is avoided if one assumes that the state function reduction cascade in the real or genuinely p-adic sector occurs first (without NMP) and therefore gives rise to N-dimensional eigen-spaces. The state is negentropic and stable if the p-adic prime p divides N. Negentropy is generated.

The real state can be transformed to a p-adic one in a quantum jump (defining a cognitive map) if the entanglement coefficients are rational, or belong to an algebraic extension of p-adic numbers in the case that algebraic extensions are allowed (number theoretic evolution gradually generates them). The density matrix can be expressed as a sum of projection operators multiplied by the probabilities for the projection to the corresponding sub-spaces. After the state function reduction cascade the probabilities are rational numbers of the form p=1/N.

Number theoretic entanglement entropy also allows one to avoid some objections related to fermionic and bosonic statistics. Fermionic and bosonic statistics require complete anti-symmetrization/symmetrization. This implies entanglement which cannot be reduced away. By looking at a symmetrized or antisymmetrized 2-particle state consisting of spin 1/2 fermions as the simplest example, one finds that the density matrix for either particle is simply proportional to the unit 2× 2 matrix. This is stable under NMP based on number theoretic negentropy. One expects that the same result holds true in the general case. The interpretation would be that particle symmetrization/antisymmetrization carries negentropy.
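The statement about the antisymmetrized 2-fermion state can be checked directly; a minimal sketch assuming numpy:

```python
import numpy as np

# Antisymmetrized spin state of two spin 1/2 fermions (the singlet):
# (|01> - |10>) / sqrt(2) in the basis |00>, |01>, |10>, |11>.
psi = np.zeros(4)
psi[1], psi[2] = 1 / np.sqrt(2), -1 / np.sqrt(2)

# Full density matrix, reshaped to indices (i, j, i', j') of the two spins.
rho = np.outer(psi, psi).reshape(2, 2, 2, 2)

# Partial trace over the second spin: rho_1[i, k] = sum_j rho[i, j, k, j].
rho_1 = np.einsum('ijkj->ik', rho)

assert np.allclose(rho_1, np.eye(2) / 2)  # proportional to the 2x2 unit matrix
```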

The degeneracy of the density matrix is of course not a generic phenomenon, and one can argue that it corresponds to some very special kind of physics. The identification of the space-time correlates for the hierarchy of effective values hbar_eff=n×hbar of Planck constant as n-furcations of the space-time sheet strongly suggests the identification of this physics in terms of this hierarchy. Hence quantum criticality, the essence of life as something in the rational intersection of realities and p-adicities, the hierarchy of effective values of hbar, negentropic quantum entanglement, and the possibility to make real-p-adic transitions and thus cognition and intentionality would be very intimately related. This is a highly satisfactory outcome, since these ideas have been rather loosely related hitherto.

What happens in quantum jump?

Suppose that everything can be reduced to what happens for a given CD characterized by a scale. There are at least two questions to be answered.

  1. There are two processes involved: state function reduction, and the quantum jump transforming a real state to a p-adic state (matter to cognition) and vice versa (intention to action). Do these transitions occur independently or not? Does the ordering of the processes matter? The proposed view about state function reduction strongly suggests that the p-adic ↔ real transition (if possible at all) can occur at any time without affecting the outcome of the state function reduction.
  2. The state function reduction cascade in turn consists of two different kinds of state function reductions. The M-matrix characterizing the zero energy state is a product of a square root of a density matrix and a unitary S-matrix, and the first step means the measurement of the projection operator. It defines a density matrix for both the upper and lower boundary of CD, and these density matrices are essentially the same.
    1. At the first step a measurement of the density matrix between the positive and negative energy parts of the quantum state takes place for CD. One can regard both the lower and the upper boundary as an eigenstate of the density matrix in the absence of negentropic entanglement. The measurement is thus completely symmetric with respect to the boundaries of CD. In the real sector this leads to a 1-D eigen-space of the density matrix if NMP holds true. In the intersection of the real and p-adic sectors this need not be the case if the eigenvalues of the density matrix are degenerate. The zero energy state becomes stable against further state function reductions! The interactions with the external world can of course destroy the stability sooner or later. An interesting question is whether the so-called higher states of consciousness relate to this kind of states.
    2. If the first step gave rise to a 1-D eigen-space of the density matrix, a state function reduction cascade takes place at either the upper or the lower boundary of CD, proceeding from long to short scales. A given step divides a sub-system into two, and the sub-system-complement pair producing the maximum negentropy gain is subjected to the quantum measurement. The cascade stops at a given subsystem if the resulting eigen-space is 1-D or carries negentropic entanglement (the p-adic prime p divides the dimension N of the eigen-space in the intersection of reality and p-adicity).
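The stopping criterion of the cascade can be made concrete in terms of number-theoretic entanglement entropy. For an N-fold degenerate eigen-space the entanglement probabilities are P_i = 1/N, and with the p-adic norm the entropy S_p = -Σ P_i log(|P_i|_p) is negative exactly when p divides N. A minimal sketch (the function names are mine, not from the text):

```python
from math import log

def p_adic_valuation(n: int, p: int) -> int:
    """Largest k such that p^k divides n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def number_theoretic_entropy(N: int, p: int) -> float:
    """p-adic entanglement entropy for an N-fold degenerate eigen-space,
    i.e. probabilities P_i = 1/N: S_p = -sum_i P_i log(|P_i|_p) = -v_p(N) log(p).
    Negative (negentropic) exactly when p divides N."""
    return -p_adic_valuation(N, p) * log(p)

def cascade_stops(N: int, p: int) -> bool:
    """The reduction cascade halts when the eigen-space is 1-D or negentropic."""
    return N == 1 or number_theoretic_entropy(N, p) < 0

print(number_theoretic_entropy(8, 2))  # -3 log 2 < 0: negentropic, cascade stops
print(number_theoretic_entropy(5, 2))  # 0.0: 2 does not divide 5, cascade continues
```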

For details and background see the section "Updates since 2012" of chapter "Negentropy Maximization Principle" of "TGD Inspired Theory of consciousness".

The TGD variant of the model of Widom and Larsen for cold fusion

Widom and Larsen (for articles see the Widom Larsen LENR Theory Portal) have proposed a theory of cold fusion (LENR), which claims to predict correctly the various isotope ratios observed in cold fusion and accompanying nuclear transmutations. The ability to predict correctly the isotope ratios suggests that the model is on the right track. A further finding is that the predicted isotope ratios correspond to those appearing in Nature, which suggests that LENR is perhaps more important than hot fusion in the solar interior as far as nuclear abundances are concerned. TGD leads to the same proposal, and the Lithium anomaly could be understood as one implication of LENR (see this). The basic step of the reaction would rely on weak interactions: the proton of a hydrogen atom would transform to a neutron by capturing an electron, thereby overcoming the Coulomb barrier.

Challenges of the model

The model has to meet several challenges.

  1. The electron capture reaction p+e → n+ν is not possible for an ordinary atom since the neutron-proton mass difference of 1.3 MeV is larger than the electron mass of 0.5 MeV (the electron has too small a kinetic energy). The proposal is that strong electric fields at the catalyst surface imply renormalization effects for the plasmon phase at the surface of the catalyst, increasing the electron mass so that it has a width of a few MeV (see this). Physically this would mean that strong em radiation helps to overcome the kinematical threshold for the reaction. This assumption can be criticized: the claim is that the mass renormalization is much smaller than claimed by Widom and Larsen.
  2. The second problem is that weak interactions are indeed very weak. The rate is proportional to 1/m_W^4, m_W ∼ 100 GeV, whereas for the exchange of a photon with energy E it would be proportional to 1/E^4. For E ∼ 1 keV the ratio of the rates would be of the order of (E/m_W)^4 ∼ 10^-32!

    This problem could be circumvented if the transition from proton to neutron occurs coherently for a large enough surface patch. This would give a rate proportional to N^2, where N is the number of electrons involved. Another mechanism hoped to give a high enough reaction rate is based on the assumption that the neutron created by the capture process has ultra-low momentum. This is the case if the mass renormalization of the electron is such that the energies of the neutrons produced in the reaction are just above the kinematical threshold. Note however that this reduces the electron capture cross section. The argument is that the absorption rate for a neutron by a target nucleus is by very general arguments proportional to 1/v_n, v_n being the velocity of the neutron. Together these two mechanisms are hoped to give a high enough rate for cold fusion.

  3. The model must also explain why gamma radiation is not observed and why neutrons are produced much less than expected. Concerning gamma rays one must assume that the heavy electrons of the plasmon phase assigned to the surface of the catalyst absorb the gamma rays and re-emit them as infrared light, dumped to the environment as heat. Ordinary electrons cannot absorb gamma rays but heavy electrons can (see this), and the claim is that they do transform gamma rays to infrared photons. If the neutrons created in LENR have ultra-low energies, their capture cross sections are enormous and the claim is that they do not get out of the system.

    The assumption that the electron mass is renormalized so that the capture reaction can occur, but only very near threshold so that the resulting neutrons are ultra-slow, has been criticized (see this).
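The numbers quoted above are simple arithmetic and can be checked directly; a sketch using the inputs stated in the text (neutron-proton mass difference 1.3 MeV, m_W ∼ 100 GeV, E ∼ 1 keV, N ∼ 10^6 coherent electrons):

```python
# Threshold: p + e -> n + nu needs the electron to supply the n-p mass difference
m_n_minus_m_p = 1.3  # MeV
m_e = 0.5            # MeV
print(m_e < m_n_minus_m_p)  # True: capture is kinematically forbidden for an ordinary atom

# Weak vs electromagnetic rate: rate ~ 1/m^4 for the exchanged boson
m_W = 100e9  # eV
E = 1e3      # eV
print(f"{(E / m_W) ** 4:.0e}")  # 1e-32 suppression relative to photon exchange

# Coherence over N electrons scales the rate by N^2
N = 1e6
print(f"{N ** 2:.0e}")  # 1e+12 enhancement
```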

TGD variant of the model

TGD allows one to consider two basic approaches to LENR.

  1. Option I involves only dark nucleons and dark quarks. In this case, one can imagine that the large Compton length of the dark proton - at least of the order of atomic scale - implies that it overlaps the target nucleus, which can see the negatively charged d quark of the proton, so that instead of a Coulomb wall one has a Coulomb well.
  2. Option II involves both dark weak bosons and possibly also dark nucleons and dark electrons. The TGD inspired model for living matter - in particular, the model for the cell membrane involving also a Z0 membrane potential in the case of sensory receptor neurons (see this) - favors the model involving dark weak bosons, nucleons, and even electrons. Chiral selection for biomolecules is extremely difficult to understand in the standard model but could be understood in terms of a weak length scale of the order of the atomic length scale at least: below this scale dark weak bosons would be effectively massless and weak interactions would be as strong as em interactions. The model for electrolysis based on plasmoids identified as primitive life forms also supports this option. The presence of dark electrons is suggested by Tesla's cold currents and by the model of the cell membrane.

    This option is fixed quantitatively by the condition that the Compton length of dark weak bosons is of the order of the atomic size scale at least. The ratio of the corresponding p-adic size scales is of order 10^7 and therefore one has heff ∼ 10^14. The condition heff/h = 2^k guarantees that the phase transition reducing heff to h - increasing the p-adic prime p by a factor of about 2^k and the p-adic length scale by 2^(k/2) - does not change the size scale of the space-time sheet and liberates the cyclotron magnetic energy E_n(1-2^-k) ≈ E_n.
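The quantitative condition can be spelled out: if the p-adic length scale goes as the square root of the prime and heff scales like p, a length scale ratio of 10^7 corresponds to heff/h ∼ 10^14. A sketch of this arithmetic under those assumptions (the value of k is my own arithmetic, not stated in the text):

```python
from math import log2

# Ratio of the weak and atomic p-adic length scales (stated in the text)
scale_ratio = 1e7

# p-adic length scale ~ sqrt(p) and heff ~ p, so matching the size scales
# requires heff/h ~ scale_ratio^2
n = scale_ratio ** 2
print(f"{n:.0e}")  # 1e+14

# heff/h = 2^k: the nearest power of two (my own arithmetic)
k = round(log2(n))
print(k)  # 47, since 2^47 is approximately 1.4e14

# The heff -> h transition liberates the fraction (1 - 2^-k) of the
# cyclotron energy E_n, i.e. essentially all of it
print(1 - 2.0 ** (-k))  # approximately 1.0
```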

Consider next Option II by requiring that the Coulomb wall is overcome via the transformation of proton to neutron. This would guarantee correct isotope ratios for nuclear transmutations. There are two options to consider, depending on whether the W boson is exchanged a) between the proton and the nucleus (this option is not possible in the standard model) or b) between the electron and the proton (the model of Widom and Larsen relying on the critical massivation of the electron).
  1. Option II.1. Proton transforms to neutron by exchanging W boson with the target nucleus.
    1. In this case kinematics poses no obvious constraints on the process. There are two options depending on whether the neutron of the target nucleus or a quark in a neutral color bond receives the W boson.
    2. If the electron and proton are dark with heff/h = n = 2^k in the range [10^12, 10^14], the situation can change since the W boson has its usual mass from the point of view of the electron and proton. The hbar^4/m_W^4 factor in the differential cross section for 2-to-2 scattering by W exchange is scaled up by n^4 (see the appendix of [bthe/Iztykson]), so that effectively m_W would be of order 10 keV for ordinary hbar.
    3. One can argue that in the volume defined by the proton Compton length λ_p ≈ 2^-11 λ_e ∈ [1.2, 12] nm one has a superposition of amplitudes for the absorption of the dark proton by a nucleus. If there are N nuclei in this volume, the rate is proportional to N^2. One can expect at most N ∈ [10^3, 10^6] target nuclei in this volume. This would give a factor in the range 10^9-10^12.
  2. Option II.2: Electron capture by the proton is the Widom-Larsen candidate for the reaction in question. As noticed, this process cannot occur unless one assumes that the mass of the electron is renormalized to a value larger by a few MeV. If dark electrons are heavier than ordinary ones, the process could be mediated by W boson exchange, and if the electron and proton have their normal sizes the process occurs with the same rate as em processes.

    If the electron and proton are dark with heff/h = n ∈ [10^12, 10^14], the situation can change since the W boson has its usual mass from the point of view of the electron and proton. The 2-to-2 cross section is proportional to hbar^4 and is scaled up by n^4. On the other hand, the naive expectation is that |Ψ(0)|^2 ∝ m_e^3/hbar_eff^3 for the electron is scaled down by n^-3, so that the rate is increased by a factor of order n ∈ [10^12, 10^14] (the electron Compton length is of order cell size instead of Angstrom!) from its ordinary value. This is not enough.

    On the other hand, one can argue that in the volume defined by the proton Compton size one has a superposition of amplitudes for the absorption of the electron. If there are N dark electrons in this volume, the rate is proportional to N^2. One can expect at most 10^6 dark electrons in a volume of scale 10 nm, so that this could give a factor 10^12. Together this would give an amplification factor 10^26 for the weak rate, so that it would be only two orders of magnitude smaller than the rate for massless weak bosons.
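The bookkeeping of Option II.2 can be tallied explicitly: the cross section gains n^4 from hbar_eff, the electron wave function at the origin loses n^-3, and coherence over N dark electrons contributes N^2. A sketch with the numbers used in the text:

```python
n = 1e14  # heff/h for the dark electron and proton (upper end of the stated range)
N = 1e6   # dark electrons within the proton Compton volume

cross_section_gain = n ** 4    # rate ~ hbar^4 / m_W^4 scales as n^4
wavefunction_loss = n ** (-3)  # |Psi(0)|^2 ~ m_e^3 / hbar_eff^3 scales as n^-3
coherence_gain = N ** 2        # coherent absorption over N electrons

total = cross_section_gain * wavefunction_loss * coherence_gain
print(f"{total:.0e}")  # 1e+26, i.e. the factor n * N^2
```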

There are also other strange features to be understood.
  1. The absence of gamma radiation could be due to the fact that the produced gamma rays are dark. For heff/h ∈ [10^12, 10^14] the frequency of a 1 MeV dark gamma ray would correspond to that of a photon with energy [1, 0.1] μeV and thus to a radio wave photon with wavelength of order 1 m and frequency of order 3×10^8 Hz. In the Widom-Larsen model the photons would be infrared photons. The decay of a dark gamma ray to a bunch of ordinary radio wave photons should be observed as radio noise. Note that Gariaev has observed the transformation of laser light scattered from DNA to radio wave photons with frequencies down to at least 1 kHz.
  2. The absence of the neutrons could be understood if they are dark. The absorption cross section is proportional to hbar^3, giving a huge amplification factor in the range [10^9, 10^12]. This implies that they are absorbed by nuclei coherently in a volume of order 1.2-12 nm, so that an additional amplification factor N^2 ∈ [10^9, 10^12] would be obtained, and the capture rate is amplified by a factor in the range [10^18, 10^26]. Effectively this corresponds to the assumption of Widom and Larsen stating that the neutrons have ultra-low momentum.
The natural question is why heff is such that the resulting photon wavelength corresponds to an energy in the 10-100 keV scale. The explanation could relate to the predicted exotic nuclei obtained by replacing some of the neutral color bonds connecting nucleons with charged ones; the exchange of a weak boson would induce this replacement. Could the weak physics associated with heff/h ∈ [10^12, 10^14] be associated with dark color bonds? The reported annual variations of nuclear reaction rates correlating with the distance of the Earth from the Sun suggest that these variations are induced by solar X rays (see this).
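The dark gamma ray estimates in point 1 above follow from E = hf and λ = c/f; a sketch checking them with standard constants:

```python
# A 1 MeV dark gamma ray with heff/h = 1e12 corresponds, at ordinary h,
# to a photon of the same wavelength with a 1e12 times smaller energy
E_dark = 1e6  # eV
n = 1e12
E_ordinary = E_dark / n
print(E_ordinary)  # 1e-06 eV, i.e. 1 micro-eV

# Frequency and wavelength of a 1 micro-eV photon (standard constants)
h = 4.135667696e-15  # Planck constant, eV s
c = 2.998e8          # speed of light, m/s
f = E_ordinary / h
print(f"{f:.1e} Hz")     # 2.4e+08 Hz, of order 3e8 Hz as stated
print(f"{c / f:.1f} m")  # 1.2 m wavelength, of order 1 m
```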

For background see the chapter Nuclear Physics and Condensed Matter.

Could photosensitive emulsions make dark matter visible?

The article "Possible detection of tachyon monopoles in photographic emulsions" by Keith Fredericks describes in detail) very interesting observations by him and also by many other researchers about strange tracks in photographic emulsions induced by various (probably) non-biological mechanisms and also by the exposure to human hands (touching by fingertips) as in the experiments of Fredericks. That the photographic emulsion itself consists of organic matter (say gelatin) might be of significance.

The findings

The tracks have widths between 5 μm-110 μm (horizontal) and 5 μm-460 μm (vertical). Tracks of length up to at least 6.9 cm have been found. Tracks begin at some point and end abruptly. A given track can have both random and almost linear portions and regular periodic structures (figs. 11 and 12); tracks can appear in swarms (fig. 24), bundles (fig. 25), and correlated pairs (fig. 16); tracks can also split and recombine (fig. 32) (here and below "fig." refers to a figure of the article).

The tracks differ from the tracks of known particles: the constant width of the tracks implies that electrons are not in question. No delta rays (fast electrons from secondary ionization appearing as branches of the track), characteristic of ions, are present. Unlike alpha particle tracks, the tracks are not straight. In magnetic fields the tracks have parabolic portions, whereas an ordinary charged particle moves along a spiral. The magnetic field needed to cause a spiral structure for baryons would have to be two orders of magnitude higher than in the experiments.

For a particle physicist all these features - for instance the constant width - strongly suggest pre-existing structures becoming visible for some reason. The pre-existing structures could of course be completely standard structures present in the emulsion. If one is ready to accept that biology involves new physics, they could be something more interesting.

Also evidence for cold fusion is reported by the group of Urutskoev. There is evidence for cold fusion in living matter: the fact that the emulsion contains gelatin might relate to this. Here a dark matter based mechanism of cold fusion allowing protons to overcome the Coulomb wall is discussed. Either dark protons or dark nuclei with a much larger quantum size than usual would make this possible, and protons could end up in the dark nuclei along dark flux tubes. In TGD inspired biology dark protons (large heff) with a scaled-up Compton length of the order of atomic size are proposed to play a key role, since their states allow an interpretation in terms of the vertebrate genetic code.

Dark matter in the TGD based belief system corresponds to a hierarchy of phases of ordinary matter with an effective value heff of Planck constant coming as an integer multiple of the ordinary Planck constant. This makes possible macroscopic quantum phases consisting of dark matter. The flux tubes could carry magnetic monopole flux, but the magnetic charge would be topological (made possible by the non-trivial second homology of the CP2 factor of the 8-D imbedding space containing space-times as surfaces) rather than a Dirac type magnetic charge.

The TGD inspired identification of the tracks would be as images of magnetic flux tubes, or bundles of them, containing dark matter, which defines one of the basic new physics elements in TGD based quantum biology. One can imagine two options for the identification of the tracks as "tracks".

  1. The primary structures are in the photosensitive emulsion.

  2. The structures in photograph are photographs of dark matter in external world, say structures in human hands or human body or of dark matter at some magnetic body, say at the flux tubes of the magnetic body of the emulsion.

The fact that the tracks have been observed in experimental arrangements not involving exposure to human hands indeed suggests that the tracks represent photographs of parts of the magnetic body assignable to the emulsion. For this option the external source would serve only as a source of possibly dark photons.

This would imply a close analogy with the experiments of Peter Gariaev's group interpreted in TGD framework as photographing of the magnetic body of DNA sample (see this). Also here one has an external source of light: the light would be transformed to dark photons in DNA sample, scatter from the dark charged particles at the flux tubes of the magnetic body of DNA sample, and return back transforming to ordinary light and generating the image in the photosensitive emulsion.

A detailed TGD based proposal for the tracks is discussed in the chapter Dark Forces and Living Matter and in the article Could photosensitive emulsions make dark matter visible?.
