What's new in HYPERFINITE FACTORS, P-ADIC LENGTH SCALE HYPOTHESIS, AND DARK MATTER HIERARCHY
Note: Newest contributions are at the top!
Year 2016 
How could the transition to a superconductive state be induced by classical radiation?

Blog and Facebook discussions have turned out to be extremely useful, and quite often new details of the existing picture emerge from them. We have had interesting exchanges with Christoffer Heck in the comment section of the posting Are microtubules macroscopic quantum systems?, and this pleasant surprise occurred also now thanks to a question by Christoffer. Recall that Bandyopadhyay's team claims to have detected the analog of superconductivity when microtubules are subjected to AC voltage (see this). The transition to superconductivity would occur at certain critical frequencies. For references and the TGD inspired model see the article. The TGD proposal for bio-superconductivity, in particular that appearing in microtubules, is the same as that for high Tc superconductivity. Quantum criticality, large h_{eff}/h=n phases of Cooper pairs of electrons, and parallel magnetic flux tube pairs carrying the members of the Cooper pairs form the essential parts of the mechanism. S=0 (S=1) Cooper pairs appear when the magnetic fields at the parallel flux tubes have opposite (same) directions. Cooper pairs would be present already below the gap temperature, but the possible supercurrents could flow only in short loops formed by magnetic flux tubes in a ferromagnetic system. AC voltage at a critical frequency would somehow induce a transition to superconductivity in long length scales by inducing a phase transition of microtubules without helical symmetry to those with helical symmetry, and by fusing the conduction pathways with length of 13 tubulins to much longer ones by reconnection of the magnetic flux tubes parallel to the conduction pathways. The phonon mechanism responsible for the formation of Cooper pairs in ordinary superconductivity cannot however be involved with high Tc superconductivity, nor with bio-superconductivity: there is an upper bound of about 30 K for the critical temperature of BCS superconductors.
A few days ago I learned about high Tc superconductivity around 500 K for n-alkanes (see the blog posting), so the mechanism for high Tc is certainly different. The question of Christoffer was the following. Could microwave radiation, for which photon energies are around 10^{-5} eV for the ordinary value of Planck constant and correspond to the gap energy of BCS superconductivity, induce a phase transition to BCS superconductivity and maybe to microtubular superconductivity (if it exists at all)? This inspires the question of how precisely the AC voltage at critical frequencies could induce the transition to high Tc superconductivity and bio-superconductivity. Consider first what could happen in the transition to high Tc superconductivity.
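As a sanity check on the quoted order of magnitude, one can compute the ordinary-Planck-constant photon energy E = h×f at a few representative microwave frequencies; the frequencies below are my own illustrative choices, not taken from the posting:

```python
# Back-of-the-envelope check: photon energy E = h*f for the ordinary
# value of Planck constant lands around 10^-5 eV in the microwave range.
h = 6.62607015e-34    # Planck constant, J*s
eV = 1.602176634e-19  # J per eV

for f in (1e9, 2.45e9, 10e9):  # representative microwave frequencies, Hz
    E = h * f / eV
    print(f"f = {f:.3g} Hz -> E = {E:.2e} eV")
```

At 2.45 GHz (the familiar microwave-oven frequency) this gives about 1.0×10^{-5} eV, consistent with the energy scale quoted above.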
In TGD, classical radiation should also have large h_{eff}/h=n photonic counterparts with much larger energies E=h_{eff}×f in order to explain the quantal effects of ELF radiation in the EEG frequency range on the brain (see this). The general proposal is that h_{eff} equals what I have called the gravitational Planck constant hbar_{gr}=GMm/v_{0} (see this or this). This implies that dark cyclotron photons have a universal energy range with no dependence on the mass of the charged particle. Biophotons have energies in the visible and UV range, much above thermal energy, and would result in transitions transforming dark photons with large h_{eff} = h_{gr} to ordinary photons. One could argue that an AC field does not correspond to radiation. In the TGD framework this kind of electric field can be interpreted as an analog of a standing wave generated when a charged particle has contacts to parallel "massless extremals" representing classical radiation with the same frequency propagating in opposite directions. The net force experienced by the particle corresponds to a standing wave. Irradiation using classical fields would be a general mechanism for inducing bio-superconductivity. Superconductivity would be generated when it is needed. The findings of Blackman and other pioneers of bioelectromagnetism about quantal effects of ELF em fields on the vertebrate brain stimulated the idea about dark matter as phases with a non-standard value of Planck constant. Also these findings could be interpreted as the generation of a superconducting phase by this phase transition. For background see the chapter Super-Conductivity in Many-Sheeted Space-Time.
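The scaling E = h_{eff}×f can be illustrated numerically. Assuming, purely for illustration, an ELF frequency of 10 Hz and a target biophoton energy of 2 eV in the visible range (neither number is fixed by the text above), the required ratio n = h_{eff}/h comes out around 5×10^{13}:

```python
# Illustrative scaling E = h_eff * f = n * h * f, with n = h_eff/h.
# The 10 Hz ELF frequency and 2 eV target energy are assumptions
# chosen only to show the order of magnitude of n.
h = 6.62607015e-34
eV = 1.602176634e-19

f_ELF = 10.0         # Hz, EEG-range frequency (assumption)
E_target = 2.0 * eV  # visible-light photon energy (assumption)

E_ordinary = h * f_ELF       # ordinary-photon energy at 10 Hz
n = E_target / E_ordinary    # required h_eff/h
print(f"ordinary-photon energy at 10 Hz: {E_ordinary/eV:.2e} eV")
print(f"required h_eff/h to reach 2 eV:  {n:.2e}")
```

The ordinary 10 Hz photon carries only about 4×10^{-14} eV, which is why such enormous values of n are needed to lift ELF photons to biophoton energies.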
Room temperature superconductivity for alkanes

Superconductivity with a critical temperature of 231 °C for n-alkanes containing n=16 or more carbon atoms in the presence of graphite has been reported (see this). Alkanes (see this) can be linear (C_{n}H_{2n+2}) with the carbon backbone forming a snake-like structure, branched (C_{n}H_{2n+2}, n > 2) in which the carbon backbone splits in one or more directions, or cyclic (C_{n}H_{2n}) with the carbon backbone forming a loop. Methane CH_{4} is the simplest alkane. What makes the finding so remarkable is that alkanes serve as basic building bricks of organic molecules. For instance, cyclic alkanes modified by replacing some carbon and hydrogen atoms by other atoms or groups form the aromatic 5-cycles and 6-cycles serving as basic building bricks of DNA. I have proposed that aromatic cycles are superconducting and define kind of basic units of molecular consciousness, which in the case of DNA combine to a larger linear structure. Organic high Tc superconductivity is one of the basic predictions of quantum TGD. The mechanism of superconductivity would be based on Cooper pairs of dark electrons with non-standard value of Planck constant h_{eff}=n×h, implying quantum coherence in length scales scaled up by n (also bosonic ions and Cooper pairs of fermionic ions can be considered). The members of a dark Cooper pair would reside at parallel magnetic flux tubes carrying magnetic fields with the same or opposite direction: for opposite directions one would have S=0 and for the same direction S=1. The cyclotron energy of electrons, proportional to h_{eff}, would be scaled up, and this would scale up the binding energy of the Cooper pair and make superconductivity possible at temperatures even higher than room temperature (see this). This mechanism would explain the basic qualitative features of high Tc superconductivity in terms of quantum criticality.
Between the gap temperature and Tc one would have superconductivity in short length scales, and below Tc superconductivity in long length scales. These temperatures would correspond to quantum criticality at which large h_{eff} phases would emerge. What could be the role of graphite? The 2D hexagonal structure of graphite is expected to be important, as it is also in ordinary superconductivity: perhaps graphite provides the long flux tubes and n-alkanes provide the Cooper pairs at them. Either graphite, n-alkane as an organic compound, or both together could induce the quantum criticality. In living matter quantum criticality would be induced by a different mechanism: for instance, in microtubules it would be induced by AC current at critical frequencies. See the chapter Super-Conductivity in Many-Sheeted Space-Time and the article New findings about high-temperature superconductors.
A more precise interpretation of the gravitational Planck constant

The notion of gravitational Planck constant h_{gr}=GMm/v_{0} was introduced originally by Nottale. In TGD it was interpreted in terms of astrophysical quantum coherence. The interpretation was that h_{gr} characterizes a gravitational flux tube connecting masses M and m, and v_{0} is a velocity parameter, some characteristic velocity assignable to the system. It has become clear that a more precise formulation of the rather loose ideas about how the gravitational interaction is mediated by flux tubes is needed.
Could the flux sheet covering associated with M code the value of M, using Planck mass as the unit, as the number of sheets of the covering? One would have an N=M/M_{Pl} sheeted structure with each sheet carrying a Planckian flux. The fluxes experienced by the magnetic body of m would in turn consist of sheets, each resulting from the fusion of n_{m}= M_{Pl}v_{0}/m Planckian fluxes, so that the total number of sheets would be reduced to n= N/n_{m}= Mm/(M_{Pl}^{2}v_{0}) = GMm/v_{0} sheets (using G=1/M_{Pl}^{2} in natural units). Why should this kind of fusion of Planck fluxes to larger fluxes happen? Could quantum information theory provide clues here? And why is v_{0} involved? See the chapter Criticality and dark matter.
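The sheet counting above can be verified with arbitrary numbers in natural units (hbar=c=1, so G=1/M_Pl^2); the masses and velocity parameter below are dummy values chosen only to check that N/n_m reduces to GMm/v_0:

```python
# Consistency check of the sheet counting in natural units (hbar = c = 1).
M_Pl = 1.0                  # Planck mass as the unit
M, m, v0 = 7.0, 3.0, 0.5    # arbitrary test values, not physical inputs

N   = M / M_Pl              # sheets of the covering, one Planck flux each
n_m = M_Pl * v0 / m         # Planck fluxes fused into one sheet at the MB of m
n   = N / n_m               # resulting number of sheets

G = 1.0 / M_Pl**2           # G = 1/M_Pl^2 in natural units
assert abs(n - G * M * m / v0) < 1e-9   # n = GMm/v0 = hbar_gr/hbar
print(round(n))  # -> 42 for these test values
```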
Could the Pollack effect make the cell membrane a self-loading battery?

Elemer Rosinger had a Facebook link to an article telling about the Clarendon dry pile, a very long-lived battery providing energy for an electric clock (see this, this, and this). This clock, known also as the Oxford bell, has been ringing for 175 years now, and the article suggests that the longevity of the battery is not really understood. The bell is not actually ringing so loud that the human ear could hear it, but one can see in the video the motion of the small metal sphere between the oppositely charged electrodes of the battery. The principle of the clock is simple. The gravitational field of the Earth is also present. When the sphere touches the negative electrode, it receives a bunch of electrons, and it gives the bunch away as it touches the positive electrode, so that a current consisting of these bunches runs between the electrodes. The average current during the oscillation period of 2 seconds is a nanoampere, so that a nanocoulomb of charge is transferred during each period (a Coulomb corresponds to 6.242 × 10^{18} elementary charges (electrons)). The dry pile was discovered by the priest and physicist Giuseppe Zamboni in 1812. The pile consists of 2,000 pairs of discs of tin foil glued to paper impregnated with zinc sulphate and coated on the other side with manganese dioxide: 2,000 thin batteries in series. The operation of the battery gradually leads to the oxidation of the zinc and the loss of manganese dioxide, but the process takes place very slowly. One might actually wonder whether it takes place too slowly, so that some other source of energy than the electrostatic energy of the battery would keep the clock running. The Karpen pile is an analogous battery discovered by Vasily Karpen: it has now worked for 50 years. Cold fusion is associated with electrolysis. Could the functioning of this mystery clock involve cold fusion, taken seriously now even by the American Physical Society thanks to the work of the group of prof. Holmlid?
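The quoted numbers are easy to check: a nanoampere average current over the 2 second period transfers a couple of nanocoulombs, i.e. roughly 10^{10} electrons per swing of the sphere:

```python
# Check of the stated numbers: average current times period gives the
# charge per swing, and dividing by the elementary charge counts electrons.
e = 1.602176634e-19  # elementary charge, C (1/e = 6.242e18 charges/Coulomb)

I_avg  = 1e-9   # A, average current quoted in the post
period = 2.0    # s, oscillation period of the sphere

Q = I_avg * period   # charge transferred per period
print(f"charge per period: {Q:.1e} C = {Q/e:.2e} electrons")
```

This gives 2 nC per period, about 1.2×10^{10} electrons, consistent with "a nanocoulomb of charge per period" as an order of magnitude.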
Electrolytes have of course been "understood" for aeons. Ionization leads to charge separation, and a current flows in the resulting voltage. With a feeling of deep shame I must confess that I cannot understand how the ionization is possible in standard physics. This of course might be just my immense stupidity; every second year physics student would immediately tell me that this is "trivial", so trivial that he would not even bother to explain why. The electric field between the electrodes is immensely weak on the scale of molecules. How can it induce the ionization? Could ordinary electrolytes involve new physics, with cold fusion liberating energy? These are the questions which pop up in my stupid mind. Stubborn as I am in my delusions, I have proposed what this new physics might be, with inspiration coming from the strange experimental findings of Gerald Pollack, from cold fusion, and from my own view about dark matter as phases of ordinary matter with non-standard value h_{eff}=n×h of Planck constant. Continuing with my weird delusions I dare ask: Could cold fusion provide the energy for the "miracle" battery? To understand what might be involved one must first learn some basic concepts. I am trying to do the same.
See the chapter Cold fusion again. See also the article Could Pollack effect make cell membrane a self-loading battery?.
ER=EPR and TGD

The ER=EPR correspondence proposed by Leonard Susskind and Juan Maldacena in 2014 (see also this) has become the most fashionable fashion in theoretical physics. Even the idea that spacetime could emerge from ER=EPR has been proposed.

ER

The ER (Einstein-Rosen) bridge is in turn a purely classical notion associated with general relativity theory (GRT). The ER bridge is illustrated in terms of a fold of spacetime. Locally there are two sheets near to each other and connected by a wormhole: these sheets are actually parts of the same sheet. Along the bridge the distance between two systems can be very short; along the folded sheet it can be very long. This suggests some kind of classical nonlocality in the sense that the physics around the two throats of the wormhole can be strongly correlated: the nonlocality would be implied by topology. This is not in accordance with the view of classical physics in Minkowski spacetime.

EPR

The EPR (Einstein-Podolsky-Rosen) paradox states that it is possible to measure both the position and the momentum of two particles more accurately than the Heisenberg Uncertainty Principle allows, unless the measurement involves an instantaneous transfer of information between the particles, denied by special relativity. The conclusion of EPR was that quantum theory is incomplete and should be extended by introducing hidden variables. The argument was based on the classical physics view about microcausality. Later quantum entanglement became an established notion, and it became clear that no classical superluminal transfer of information is needed. If one accepts the basic rules of quantum measurement theory, in particular tensor products of distant systems, the EPR paradox disappears. Entanglement is of course a genuinely nonlocal phenomenon not encountered in classical physics, and one could wonder whether it might have a classical spacetime correlate after all. State function reduction becomes the problem and has remained the ugly duckling of quantum theory.
Unfortunately, this ugly duckling has become a taboo and is surrounded by a thick cloud of messy interpretations. Hence the situation is still far from settled. At the time EPR and ER were proposed, there was no idea about a possible connection between these two notions. Both involve unexpected nonlocality, and one might therefore ask whether there is a connection.

ER=EPR

In some sense ER=EPR could be seen as a kind of victory for Einstein. There could after all be a classical spacetime correlate for entanglement and for the manner in which state function reduction for a system induces state function reduction in a distant entangled system. It however seems that quantum theory does not allow a signal travelling along the wormhole connecting the entangled systems. What ER=EPR says is that maximal entanglement for blackholes is somehow dual to the Einstein-Rosen bridge (wormhole). Susskind and Maldacena even suggest that this picture generalizes to entanglement between any kind of systems and that even elementary particles are connected by Planckian wormholes. The next step has been to argue that entanglement is more fundamental than spacetime, and that spacetime would emerge. The attempts to realize the idea involve holography, and already this means the introduction of 2D surfaces in 3D space, so that the argument becomes circular. To my opinion the emergence of spacetime is doomed to remain one of the many fashions of theoretical physics, which last a few years and are then lost to the sands of time. These fashions reflect the deep crisis of theoretical physics, which has lasted for four decades, and are as such a good sign telling that people at least try. The motivation for the following TGD inspired arguments was one of the arguments against ER=EPR: ER=EPR does not conform with the linearity of quantum mechanics. The state pairs in the superposition defining an entangled state are unentangled (separable), and there should be no wormhole connecting the systems in this case.
In an entangled state there should be a wormhole. This makes sense only if the spacetime geometry couples to the quantum dynamics, so that one must give up the idea that one has Schrödinger amplitudes in a fixed background and linear superposition for them. This looks weird even in GRT spacetime.

Some background about TGD

Before discussing what ER=EPR corresponds to in TGD, a few words about quantum TGD are in order.
The counterpart of ER=EPR in the TGD framework

The TGD variant of ER=EPR has been part of TGD for two decades but has remained unnoticed since the superstring hegemony has dominated the theory landscape. There are still many profound ideas to be rediscovered, but their realization in the framework of GRT is practically impossible since they relate closely to the vision about spacetimes as 4-surfaces in M^{4}×CP_{2}. What does ER=EPR then correspond to in TGD?
See the chapter Negentropy Maximization Principle. See also the article ER=EPR and TGD. 
Cloning of maximally negentropic states is possible: DNA replication as cloning of this kind of states?

In a Facebook discussion with Bruno Marchal and Stephen King the notion of quantum cloning as copying of a quantum state popped up, and I ended up asking about approximate cloning and got a nice link, about which more below. From Wikipedia one learns some interesting facts about cloning. The no-cloning theorem states that the cloning of all states by a unitary time evolution of the tensor product system is not possible. It is however possible to clone an orthogonal basis of states. Does this have some deep meaning? As a response to my question I got a link to an article of Lamourex et al showing that the cloning of entanglement, to be distinguished from the cloning of a quantum state, is not possible in the general case: separability, the absence of entanglement, is not preserved. Approximate cloning necessarily generates some entanglement in this case, and the authors give a lower bound for the remaining entanglement in the case of an unentangled state pair. The cloning of a maximally entangled state is however possible. What makes this so interesting is that in the TGD framework maximally negentropic entanglement for rational entanglement probabilities corresponds to maximal entanglement: the entanglement probabilities form a matrix proportional to the unit matrix, and just this entanglement is favored by the Negentropy Maximization Principle (NMP). Could maximal entanglement be involved with, say, DNA replication? Could maximally negentropic entanglement for algebraic extensions of rationals allow cloning, so that DNA entanglement negentropy could be larger than entanglement entropy? What about entanglement probabilities in an algebraic extension of rationals? In this case the real number based entanglement entropy is not maximal, since the entanglement probabilities are different. What can one say about p-adic entanglement negentropies: are they still maximal under some reasonable conditions?
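The no-cloning statement quoted above (an orthogonal basis can be cloned by a unitary, a general superposition cannot) is easy to demonstrate with a CNOT gate acting as the would-be copier; this is a textbook illustration, not anything specific to the article discussed:

```python
import numpy as np

# CNOT copies the orthogonal basis states |0>, |1> into a blank qubit,
# but turns a superposition into a Bell state instead of a product copy.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

zero = np.array([1, 0], dtype=complex)
one  = np.array([0, 1], dtype=complex)

def try_clone(psi):
    """Attempt to copy psi into a blank |0> ancilla with CNOT."""
    return CNOT @ np.kron(psi, zero)

# Basis states clone perfectly:
for psi in (zero, one):
    assert np.allclose(try_clone(psi), np.kron(psi, psi))

# A superposition does not: CNOT produces the Bell state (|00>+|11>)/sqrt(2),
# not the product |+>|+>.
plus = (zero + one) / np.sqrt(2)
print(np.allclose(try_clone(plus), np.kron(plus, plus)))  # prints False
```

The failure on the superposition is exactly the linearity obstruction behind the no-cloning theorem: a unitary fixed by its action on the basis cannot also copy their superpositions.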
The logarithms involved depend on the p-adic norms of the probabilities, and in the generic case the norm is just an inverse power of p. Number theoretical universality suggests that the entanglement probabilities are of the form P_{i}= a_{i}/N with ∑ a_{i}= N, with the algebraic numbers a_{i} assumed to have unit p-adic norm. With this assumption the p-adic norms of P_{i} reduce to those of 1/N, as for maximal rational entanglement. If this is the case, the p-adic negentropy equals log(p^{k}), where p^{k} is the highest power of p dividing N. The total negentropy equals log(N) and is maximal, having the same value as for rational probabilities equal to 1/N. The real entanglement entropy is now however smaller than log(N), which would mean that the p-adic negentropy is larger than the real entropy, as conjectured earlier (see this). For rational entanglement probabilities the generation of entanglement negentropy (conscious information) during evolution would be accompanied by the generation of an equal entanglement entropy measuring the ignorance about what the negentropically entangled states representing selves are. This conforms with the observation of Jeremy England that living matter is an entropy producer (for a TGD inspired commentary see this). For algebraic extensions of rationals this entropy could however be smaller than the total negentropy. The second law follows as a shadow of NMP if the real entanglement entropy corresponds to the thermodynamical entropy. Algebraic evolution would allow the generation of conscious information faster than the environment is polluted, one might concretize! The higher the dimension of the algebraic extension of rationals, the larger the difference could be, and the future of the Universe might be brighter than one might expect by just looking around! Very consoling! One should however show that the above described situation can indeed be realized, as NMP strongly suggests, before opening a bottle of champagne.
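The counting in the rational case can be sketched in a few lines: for entanglement probabilities P_i = 1/N, each prime p whose highest power dividing N is p^k contributes a p-adic negentropy log(p^k), and these contributions sum to log(N); the choice N = 12 below is an arbitrary example:

```python
from math import log, isclose

def padic_valuation(n, p):
    """Return k such that p^k is the highest power of p dividing n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def prime_factors(n):
    """Distinct prime factors of n by trial division."""
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

N = 12  # example dimension; P_i = 1/12 for all i
total = 0.0
for p in prime_factors(N):        # primes 2 and 3
    k = padic_valuation(N, p)     # 2^2 and 3^1 divide 12
    total += k * log(p)           # p-adic negentropy contribution log(p^k)

print(isclose(total, log(N)))     # prints True: contributions sum to log(N)
```

The sum equals log(N) by unique factorization, so the total p-adic negentropy for P_i = 1/N indeed reaches the maximal value that the real Shannon entropy can attain.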
The impossibility of cloning entanglement in the general case makes the transfer of information as arbitrary entanglement impossible. Maximal entanglement, and maybe even negentropic entanglement maximal in the p-adic sectors, could however make possible communication without damaging the information at the source. Since conscious information is associated with the p-adic sectors responsible for cognition, one could even allow a modification of the entanglement probabilities, and thus of the real entanglement entropy, in the communication process, since the maximal p-adic negentropy depends only weakly on the entanglement probabilities. Negentropic entanglement (NE) is assigned with conscious experiences with positive emotional coloring: the experience of understanding, the experience of love, etc. There is an old Finnish saying which can be translated as "Shared joy is a double joy!". Could the cloning of NE make possible the generation of entanglement by a loving attitude, so that living entities would not be mere thieves trying to steal NE by killing and eating each other? For background see the chapter Negentropy Maximization Principle. See also the article Is the sum of p-adic negentropies equal to real entropy?.
Wigner's friend and Schrödinger's cat

I encountered in a Facebook discussion the Wigner's friend paradox (see this and this). Wigner leaves his friend in the laboratory together with Schrödinger's cat, and the friend measures the state of the cat: the outcome is "dead" or "alive". Wigner returns and learns from his friend what the state of the cat is. The question is: was the state of the cat fixed already earlier, or only when Wigner learned it from his friend? In the latter case the state of friend and cat would have been a superposition of pairs in which the cat was alive and the friend knew this, and in which the cat was dead and the friend knew this. The entanglement between the cat and the bottle of poison would have been transferred to entanglement between cat+bottle and Wigner's friend. Recall that this kind of information transfer occurs in quantum computation, and quantum teleportation allows the transfer of an arbitrary quantum state but destroys the original. The original purpose of Wigner was to demonstrate that consciousness is involved with the state function collapse. The TGD view is that the state function collapse can be seen as a moment of consciousness. Or more precisely, a self as a conscious entity corresponds to a repeated state function reduction sequence to the same boundary of a causal diamond (CD). One might say that self is a generalized Zeno effect in Zero Energy Ontology (ZEO). The first reduction to the opposite boundary of CD means the death of the self and its reincarnation at the opposite boundary as a time reversed self. The experienced flow of time corresponds to the shift, reduction by reduction, of the non-fixed boundary of CD farther from the fixed boundary; also the state at it changes. Thus subjective time as a sequence of reductions is mapped to clock time, identifiable as the temporal distance between the tips of the CD. The arrow of time is generated but changes in the death-reincarnation. In the TGD inspired theory of consciousness the intuitive answer to the question of Wigner looks obvious.
If the friend measured the state of the cat, it was indeed dead or alive already before Wigner arrived. What remains is the question of what it means for Wigner, the "ultimate observer", to learn about the state of the cat from his friend. The question is about what conscious communications are. Consider first the situation in the framework of standard quantum information theory.
The TGD inspired theory of consciousness predicts that during communication Wigner and his friend form a larger entangled system: this makes possible the sharing of meaning. Directed attention means that subject and object are entangled. The magnetic flux tubes connecting the two systems would serve as a correlate for the attention. This mechanism would be at work already at the level of molecular biology. Its analog would be the wormholes of the ER=EPR correspondence proposed by Maldacena and Susskind. Note that directed attention brings in mind the generation of the Bell entangled pair AB; it would also make possible quantum teleportation. Wigner's friend could also symbolize the "pointer of the measurement apparatus" constructed to detect whether cats are dead or alive. Consider this option first. If the pointer is a subsystem defining a subself of Wigner, it would represent a mental image of Wigner and there would be no paradox. If a qubit in the brain of Wigner's friend replaces the pointer of the measurement apparatus, then during communication Wigner and his friend form a larger entangled system experiencing this qubit. Perhaps this temporary fusion of selves allows one to answer the question of how common meaning is generated. Note that this would not require the quantum teleportation protocol but would allow it. For background see the chapter Negentropy Maximization Principle.

Eigenstates of Yangian co-algebra generators as a manner to generate maximal entanglement?

Negentropically entangled objects are key entities in the TGD inspired theory of consciousness and in the construction of tensor networks, and the challenge is to understand how these states could be constructed and what their properties could be. These states are diametrically opposite to unentangled eigenstates of single particle operators, usually elements of the Cartan algebra of a symmetry group. The entangled states should result as eigenstates of poly-local operators. Yangian algebras involve a hierarchy of poly-local operators, and twistorial considerations inspire the conjecture that the Yangian counterparts of the super-symplectic and other algebras, made poly-local with respect to partonic 2-surfaces or the endpoints of boundaries of string world sheets at them, act as symmetries of quantum TGD. Could Yangians allow one to understand maximal entanglement in terms of symmetries?
For details see the chapter Holography and Quantum Error Correcting Codes: TGD View or the article with the same title. 
Do magnetic monopoles exist?

LNC scientists report that they have discovered magnetic monopoles (see this and this). The claim that free monopoles have been discovered is to my opinion too strong, at least in the TGD framework. TGD allows monopole fluxes but no free monopoles. Wormhole throats however behave effectively like monopoles when looked at from either spacetime sheet, A or B. The first TGD explanation that comes to mind is in terms of 2-sheeted structures with wormhole contacts at the ends and monopole flux tubes connecting the wormhole throats at A and B, so that a closed monopole flux is the outcome. All elementary particles are predicted to be this kind of structures in the scale of the Compton length. The first wormhole throat carries the elementary particle quantum numbers, and the second throat carries a neutrino pair neutralizing the weak isospin, so that the weak interaction is finite ranged. The Compton length scales like h_{eff} and can be nanoscopic or even larger for large values of h_{eff}. Also for an abnormally large p-adic length scale, implying a different mass scale for the particle, the size scale increases. How to explain the observations? Throats with opposite apparent quantized magnetic charges at a given spacetime sheet should move effectively like independent particles (although connected by a flux tube) in opposite directions, giving rise to an effective monopole current accompanied by an opposite current at the other spacetime sheet. This is like having balls at the ends of very soft strings at the two sheets. One must assume that only the current at a single sheet is detected. It is mentioned that the ohmic component corresponds to effectively free monopoles (already having long flux tubes connecting the throats, with small magnetic string tension). In strong magnetic fields shorter pairs of monopoles are reported to become "ionised" and give rise to a current increasing exponentially as a function of the square root of the external magnetic field strength.
This could correspond to a phase transition increasing h_{eff} with no change in the particle mass. This would increase the length of the monopole flux tube, and the throats would be effectively free magnetic charges in a much longer Compton scale. In the case of elementary fermions the spacetime sheet at which the throat carrying the quantum numbers of the fermion resides would be the preferred one. The analog of color deconfinement comes to mind, and one cannot exclude the color force, since a non-vanishing Kähler field is necessarily accompanied by non-vanishing classical color gauge fields. Effectively free motion below the length scale of the wormhole contact would correspond to asymptotic freedom. Amusingly, one would have a zoomed up representation of the dynamics of colored objects! One can also consider an interpretation in terms of Kähler monopoles: the induced Kähler form corresponds to the classical electroweak U(1) field coupling to weak hypercharge, but asymptotic freedom need not fit with this interpretation. The induced gauge fields are however strongly constrained: the components of the color gauge fields are proportional to the Hamiltonians of color rotations and to the induced Kähler form. Hence it is difficult to draw any conclusions. See the chapter Criticality and dark matter.
Pear-shaped Barium nucleus as evidence for large parity breaking effects in nuclear scales
Pieces of evidence for nuclear physics anomalies continue to accumulate. Now there was a popular article telling about the discovery of large parity breaking in the nuclear physics scale. What has been observed is a pear-shaped ^{144}Ba nucleus, not invariant under spatial reflection. The arXiv article speaks only about the octupole moment of the Barium nucleus, difficult to explain using existing models. Therefore one must take the popular article, which manages to associate the impossibility of time travel with the unexpectedly large octupole moment, with some caution. As a matter of fact, pear-shapedness has been reported earlier for Radon-220 and Radium-224 nuclei by the ISOLDE collaboration working at CERN (see this and this). The popular article could have been formulated without any reference to time travel: the finding could be spectacular even without mentioning it. There are three basic discrete symmetries: C, P, T and their combinations. CPT is believed to be unbroken but C, P, CP and T are known to be broken in particle physics. In hadron and nuclear physics scales the breaking of parity symmetry P should be very small since weak bosons break it and mediate an interaction of very short range: this breaking has been observed. The possible big news is the following: the pear-shaped state of a heavy nucleus suggests that the breaking of P in nuclear physics is (much?) stronger than expected. Without parity breaking one would expect an ellipsoid with vanishing octupole moment but with non-vanishing quadrupole moment. This suggests parity breaking in an unexpectedly long length scale. This is not possible in the standard model where parity breaking is large only in the weak scale, which is roughly 1/1000 of the nuclear scale, and the fourth power of this factor reduces weak parity breaking effects in the nuclear scale. Does this finding force us to forget the plans for next summer's time travel? 
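The suppression estimate quoted above is easy to check with one line of arithmetic; a minimal sketch, taking the rough 1/1000 scale ratio stated in the text at face value:

```python
# Back-of-the-envelope check of the suppression quoted in the text:
# the weak length scale is roughly 1/1000 of the nuclear length scale,
# and the fourth power of this ratio suppresses weak parity-breaking
# effects in the nuclear scale.
weak_to_nuclear = 1e-3            # rough ratio of weak to nuclear scale
suppression = weak_to_nuclear ** 4
print(f"parity-breaking suppression ~ {suppression:.0e}")  # ~ 1e-12
```

This is why an octupole deformation in the nuclear scale, if really due to parity breaking, is so unexpected in the standard model.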
If parity breaking is large, one expects from the conservation of CPT also a large compensating breaking of CT. This might relate to the matter-antimatter asymmetry of the observed Universe; I cannot relate it to time travel since the very idea of time travel in its standard form does not make much sense to me. In the TGD framework one can imagine two explanations involving large parity breaking in unexpectedly long scales. In fact, in living matter chiral selection represents a mysteriously large parity breaking effect and the proposed mechanisms could be behind it.
See the chapter Nuclear string hypothesis. 
Lightnings, dark matter, and leptopion hypothesis again
Lightnings have been found to involve phenomena difficult to understand in the framework of standard physics. Very high energy photons, even gamma rays, and electrons and positrons with energies in the gamma ray range have been observed. I learned recently about an even more mysterious looking discovery (see this). Physicist Joseph Dwyer from the University of New Hampshire and lightning scientists from the University of California at Santa Cruz and Florida Tech describe this discovery in a paper to be published in the Journal of Plasma Physics. In August 2009, Dwyer and colleagues were aboard a National Center for Atmospheric Research Gulfstream V when it inadvertently flew into an extremely violent thunderstorm and, it turned out, through a large cloud of positrons, the antimatter opposites of electrons, that should not have been there. One would have expected the positrons to have been produced in pair creation by highly energetic gamma rays with energies above 2 × 0.5 MeV, but no gamma rays were detected. This looks rather mysterious from the standard physics point of view. There are also earlier strange discoveries related to lightnings.

Magnetic body, biophotons, and prediction of scaled variant of EEG
The model for quantum biology relies on the notions of magnetic body (MB) and dark matter as a hierarchy of phases with h_{eff} = n×h, with biophotons identified as decay products of dark photons. The assumption h_{gr} ∝ m makes the model highly predictive since the cyclotron energies would be independent of the mass of the ion.
The problem is the following. If one wants the biophoton spectrum to be in the visible-UV range, assuming that biophotons correspond to cyclotron photons, one must reduce the value of r = h_{gr}B_{end}/(m v_{0}) for the Earth-particle system by a factor of order k = 2× 10^{4}. r does not depend on the mass of the charged particle. One can replace B_{end} with some other, considerably weaker magnetic field. One can also increase the value of v_{0}.
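The mass-independence of the cyclotron energies is easy to verify numerically. A minimal sketch, assuming h_{gr} = GMm/v_{0} and the standard cyclotron frequency; the numerical values of M, B and v_{0} below are illustrative placeholders, not values fixed by the text:

```python
# Sketch of the mass-independence of dark cyclotron energies, assuming
# h_gr = G*M*m/v0 (gravitational Planck constant) and the standard
# cyclotron frequency f_c = q*B/(2*pi*m).  M, B and v0 are placeholder
# values chosen only for illustration.
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M = 5.972e24           # kg, Earth mass (placeholder choice of central mass)
B = 2e-4               # T, a field of order B_end = 0.2 Gauss
v0 = 1e-4 * 2.998e8    # m/s, placeholder velocity parameter
e = 1.602e-19          # C

def cyclotron_energy(m_ion, q_ion):
    """E = h_gr * f_c; the ion mass m cancels out of the product."""
    h_gr = G * M * m_ion / v0
    f_c = q_ion * B / (2 * math.pi * m_ion)
    return h_gr * f_c

m_p = 1.673e-27                       # kg, proton mass
E_proton = cyclotron_energy(m_p, e)
E_ca = cyclotron_energy(40 * m_p, e)  # Ca-40 ion with unit charge
print(E_proton, E_ca)                 # equal: m cancels in h_gr * f_c
```

The cancellation also shows why r = h_{gr}B_{end}/(m v_{0}) does not depend on the mass of the charged particle.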
See the new chapter Quantum criticality and dark matter or the article Nonlocality in quantum theory, in biology and neuroscience, and in remote mental interactions: TGD perspective. 
Comparing TGD view about quantum biology with McFadden's views
McFadden has a very original view about quantum biology: I wrote about his work for the first time years ago, much before the emergence of ZEO, of the recent view about self as a generalized Zeno effect, and of the understanding of the role of the magnetic body containing dark matter (see this). The pleasant surprise was that I now understand McFadden's views much better from the TGD viewpoint.
The NE (negentropic entanglement) between dark codons could also have a useful function: it could determine physically the gene as a union of disjoint mutually entangled portions of DNA. Genes are known to be highly dynamical units, and splicing after transcription selects the portions of the transcript translated to protein. The codons in the complement of the real transcript are called introns and are spliced out from the mRNA (see this). What could be the physical criterion telling whether a given codon belongs to an exonic or intronic portion of DNA? A possible criterion distinguishing between exons and introns is that exons have NE between themselves whereas introns have no entanglement with exons (also introns could have NE between themselves). Introns would not be useless trash since the division into exonic and intronic regions would be dynamical. The interpretation in terms of TGD inspired theory of consciousness is that the exons correspond to a single self. An updated nuclear string variant is summarized, and its connection with the model of harmony is discussed, in the chapter Nuclear string model and in the article About physical representations of genetic code in terms of dark nuclear strings. 
Is biocatalysis a shadow of dark biocatalysis based on a generalization of the genetic code?
Protein catalysis and reaction pathways look extremely complex (see this) as compared to replication, transcription, translation, and DNA repair. Could simplicity emerge if biomolecules are identified as chemical shadows of objects formed from dark nuclear strings consisting of dark nucleon triplets, with their dynamics a shadow of dark stringy dynamics very much analogous to text processing? What if biocatalysis is induced by dark catalysis based on reconnection as a recognition mechanism? What if contractions and expansions of U-shaped flux tubes by h_{eff} changing phase transitions take care that the reactants find each other and change conformations, as in the case of the opening of the DNA double strand? What if codes allow only dark nucleons with the same dark nuclear spin and flux tube spin to be connected by a pair of flux tubes? This speculation might make sense! The recognition of reactants is one part of the catalytic action. It has been found in in vitro RNA selection experiments that RNA sequences are produced having a high frequency for the codons which code for the amino acid that these RNA molecules recognize (see this). This is just what the proposal predicts! The genetic codes DNA to RNA as a 64→ 64 map, RNA to tRNA as a 64→ 40 map, and tRNA to amino acids as a 40→ 20 map are certainly not enough. One can however consider also additional codes allowed by projections of (4⊕ 2_{1}⊕ 2_{2}) ⊗ (5⊕ 3 (⊕ 1)) to lower-dimensional subspaces defined by projections preserving spins. One could also visualize biomolecules as collections of pieces of text attaching to each other along conjugate texts. The properties of catalysts and reactants would also depend on which texts are "visible" to the catalysts. Could the most important biomolecules participating in biochemical reactions (proteins, nucleic acids, carbohydrates, lipids, primary and secondary metabolites, and natural products, see this) have dark counterparts in these subspaces? 
The selection of bioactive molecules is one of the big mysteries of biology. The model for the chemical pathway leading to the selection of purines as nucleotides (see this) assumes that the predecessor of the purine molecule can bind to a dark proton without transforming it to an ordinary proton. A possible explanation is that the binding energy of the resulting bound state is higher for the dark proton than for the ordinary one. Minimization of the bound state energy could be a completely general criterion dictating which bioactive molecules can pair with dark protons. The selection of bioactive molecules would not be random after all, although it looks so. The proposal for DNA-nuclear/cell membrane as a topological quantum computer, with quantum computations coded by the braiding of magnetic flux tubes connecting nucleotides to the lipids, led to the idea that the flux tubes begin at O= bonds (see this). An updated nuclear string variant is summarized, and its connection with the model of harmony is discussed, in the chapter Nuclear string model and in the article About physical representations of genetic code in terms of dark nuclear strings. 
Are sound-like bubbles whizzing around in DNA essential to life?
I got a link to a very interesting article about sound waves in DNA (see this). The article tells about THz delocalized modes claimed to propagate back and forth along the DNA double strand somewhat like bullets. These modes involve a collective motion of many atoms. They are interpreted as a change in the stiffness of the DNA double strand leading to the splitting of hydrogen bonds, in turn leading to a splitting into single strands. The resulting gap, known as a transcriptional bubble, propagates along the double strand. I do not know how sound the interpretation as a sound wave is. It has been proposed that sound waves along DNA give rise to the bubble. The local physical properties of the DNA double strand, such as helical structure and elasticity, affect the propagation of the waves. Specific local sequences are proposed to favor a resonance with low frequency vibrational modes, promoting the temporary splitting of the DNA double strand. Inside the bubble the bases are exposed to the surrounding solvent, which has two effects. Bubbles expose the nucleic acid to reactions of the bases with mutagens in the environment, and so-called molecular intercalators may insert themselves between the strands of DNA. On the other hand, bubbles allow proteins known as helicases to attach to DNA to stabilize the bubble, followed by the splitting of the strands to start the transcription and replication processes. The splitting would occur at certain portions of the DNA double strand. For this reason, it is believed that DNA directs its own transcription. The problem is that the strong interactions with the surrounding water are expected to damp the sound wave very rapidly. The authors study the situation experimentally and report that propagating bubbles indeed exist for frequencies in the few-THz region. Therefore the damping does not seem to be effective. How is this possible? 
As an innocent layman I also wonder how this kind of mechanism can be selective: it would seem that the bullet-like sound wave initiates transcription at many positions along DNA. The transcription should be localized to a region assignable to a single gene. What could guarantee this? Can TGD say anything interesting about the mechanism behind transcription and replication?

Could dark DNA, RNA, tRNA and amino acids correspond to different charge states of codons?
If dark codons correspond to dark nucleon triplets, as assumed in the following considerations, there are 4 basic types of dark nucleon triplets: ppp, ppn, pnn, nnn. Also dark nucleons could represent codons as uuu, uud, udd, ddd: the following discussion generalizes as such to this case. If strong isospin/em charge decouples from spin, the spin content is the same independently of the nucleon content. One can consider the possibility of charge neutralization by the charges assignable to the color flux tubes, but this is not necessary. In any case, one would have 4 types of nucleon triplets depending on the values of the total charges. Could the different dark nucleon total charges correspond to DNA, RNA, tRNA and amino acids? Already the group representation content (perhaps correlating with quark charges) could allow one to distinguish between DNA, RNA, tRNA, and amino acids. For amino acids one would have only 4× 5 with ordinary statistics and color singlets. For DNA and RNA one would have the full multiplet including also color non-singlets, and for tRNA one could consider (4⊕ 2_{1}⊕ 2_{2})× 5 containing 40 states. 31 is the minimum number of tRNAs for the realization of the genetic code. The number of tRNA molecules is known to be between 30 and 40 in bacterial cells. The number is larger in animal cells but this could be due to different chemical representations of dark tRNA codons. If the net charge of the dark codon distinguishes between DNA, RNA, tRNA, and amino acid sequences, the natural hypothesis to be tested is that dark ppp, ppn, pnn, and nnn sequences are accompanied by DNA, RNA, tRNA, and amino acid sequences respectively. The dark beta decays of dark protons, proposed to play an essential role in the model of cold fusion, could transform dark protons to dark neutrons. Peptide backbones are neutral so that the dark nnn sequence could also be absent, but the dark nnn option is more natural if the general vision is accepted. Is this picture consistent with what is known about the charges of DNA, RNA, tRNA, and amino acids?

About physical representations of genetic code in terms of dark nuclear strings
The standard view about evolution as a random process suggests that the genetic code is a pure accident. My own view is that something as fundamental as life cannot be based on pure randomness. TGD has led to several proposals for the genetic code, its emergence, and its various realizations, based on purely mathematical considerations or inspired by physical ideas. One can argue that the genetic code is realized in several manners, just as bits can be represented in very many manners. Two especially interesting proposals have emerged. The first one is based on a geometric model of music harmony involving icosahedral and tetrahedral geometries. The second one, having two variants, is based on dark nuclear strings. Both models predict correctly the numbers of DNA codons coding for a given amino acid. An updated nuclear string variant is summarized, and its connection with the model of harmony is discussed, in the chapter Nuclear String Model and in the article About physical representations of genetic code in terms of dark nuclear strings. 
Is the sum of p-adic negentropies equal to real entropy?
I ended up almost by accident with a fascinating and almost trivial theorem. An adelic theorem for information would state that conscious information, represented as the sum of p-adic negentropies (entropies, which are negative), is equal to real entropy. The more conscious information, the larger the chaos in the environment, as everyone can verify by just looking around;) This looks bad! Luckily, it turned out that this statement is true for rational probabilities only. For algebraic extensions it cannot be true, as is easy to see. That negentropic entanglement is possible only for algebraic extensions of rationals conforms with the vision that algebraic extensions of rationals characterize the evolutionary hierarchy. The rationals represent the lowest level, at which conscious information either vanishes or, if equal to the p-adic contribution to negentropy, is accompanied by an equally large real entropy. It is not completely obvious that the notion of p-adic negentropy indeed makes sense for algebraic extensions of rationals. A possible problem is caused by the fact that the decomposition of an algebraic integer to primes is not unique. A simple argument however strongly suggests that the various p-adic norms of the factors do not depend on the factorization. Also a formula for the difference of the total p-adic negentropy and the real entropy is deduced.
p-Adic contribution to negentropy equals real entropy for rational probabilities but not for algebraic probabilities
The following argument shows that the p-adic negentropy equals the real entropy for rational probabilities.
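The theorem is easy to verify numerically for rational probabilities. A minimal sketch using only the standard library; the probabilities P_i = M_i/N below are an arbitrary example, and only primes dividing a numerator or denominator contribute:

```python
# Numerical check, for rational probabilities P_i = M_i/N, that the sum
# of p-adic entanglement entropies over primes equals minus the real
# Shannon entropy, i.e. total p-adic negentropy = real entropy.
from fractions import Fraction
from math import log

def padic_valuation(n, p):
    """Exponent of the prime p in the integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def padic_norm(q, p):
    """|q|_p = p^(v_p(denominator) - v_p(numerator)) for rational q."""
    return float(p) ** (padic_valuation(q.denominator, p)
                        - padic_valuation(q.numerator, p))

def primes_up_to(n):
    return [p for p in range(2, n + 1)
            if all(p % d for d in range(2, int(p**0.5) + 1))]

probs = [Fraction(1, 6), Fraction(2, 6), Fraction(3, 6)]  # M_i/N, N = 6
S_real = -sum(float(P) * log(float(P)) for P in probs)
S_padic_total = sum(
    -sum(float(P) * log(padic_norm(P, p)) for P in probs)
    for p in primes_up_to(6))
print(S_real, S_padic_total)  # S_padic_total = -S_real
```

The underlying reason is the adelic product formula: for any rational q the product of |q|_p over all primes p equals 1/|q|, so summing the p-adic entropies reproduces minus the Shannon entropy.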
Formula for the difference of total p-adic negentropy and real entanglement entropy
In the following some nontrivial details related to the definition of p-adic norms for the rationals in an algebraic extension of rationals are discussed. The induced p-adic norm N_{p}(x) for an n-dimensional extension of Q is defined via the determinant det(x) of the linear map defined by multiplication with x. det(x) is a rational number. The corresponding p-adic norm is defined as the n:th root N_{p}(det(x))^{1/n} of the ordinary p-adic norm. The root guarantees that the norm coincides with the ordinary p-adic norm for ordinary p-adic integers. One must then perform a factorization to algebraic primes. Below an argument is given that although the factorization to primes is not always unique, the product of p-adic norms for a given algebraic rational, defined as a ratio of algebraic integers, is unique. Can one write an explicit formula for the difference of the total p-adic entanglement negentropy (positive) and the real entanglement entropy using prime factorization in a finite-dimensional algebraic extension (note that for algebraic numbers defining an infinite-dimensional extension of rationals the factorization does not even exist since one can write a=a^{1/2}a^{1/2}=...)? This requires that the total p-adic entropy is uniquely defined. There is a possible problem due to the non-uniqueness of the prime factorization.
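The determinant definition of the induced norm can be illustrated in the simplest case, a quadratic extension Q(sqrt(2)). The sketch below illustrates only the definition N_{p}(x) = |det(x)|_{p}^{1/n}, not the algebraic-prime factorization discussed in the text; for x = a + b·sqrt(2) multiplication by x has determinant a² - 2b²:

```python
# Sketch of the induced p-adic norm on the quadratic extension Q(sqrt(2)):
# N_p(x) = |det(x)|_p^(1/n), where det(x) is the determinant of the linear
# map "multiply by x".  For x = a + b*sqrt(2) (basis {1, sqrt(2)}) the
# matrix is [[a, 2b], [b, a]], so det(x) = a^2 - 2*b^2, and n = 2.
from fractions import Fraction

def padic_valuation(n, p):
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def padic_norm_rational(q, p):
    return float(p) ** (padic_valuation(q.denominator, p)
                        - padic_valuation(q.numerator, p))

def induced_padic_norm(a, b, p, n=2):
    """N_p(a + b*sqrt(2)) = |a^2 - 2*b^2|_p^(1/n)."""
    det = Fraction(a) ** 2 - 2 * Fraction(b) ** 2
    return padic_norm_rational(det, p) ** (1.0 / n)

# For an ordinary rational x = a (b = 0) the n:th root makes the induced
# norm reduce to the ordinary p-adic norm |a|_p:
print(induced_padic_norm(8, 0, 2))  # |8|_2 = 1/8
print(induced_padic_norm(1, 1, 7))  # |1 - 2|_7^(1/2) = 1.0
```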

X boson as evidence for nuclear string model
Anomalies seem to be popping up everywhere, also in nuclear physics, and I have been busily explaining them in the framework provided by TGD. The latest nuclear physics anomaly that I have encountered was discovered in a Hungarian physics laboratory in the decays of the excited state ^{8}Be* of the unstable isotope ^{8}Be (4 protons and 4 neutrons) to the ground state ^{8}Be (see this). For a theoretical interpretation of the finding in terms of a fifth force mediated by a spin 1 boson see this. The anomaly manifests itself as a bump in the distribution of e^{+}e^{-} pairs in the transitions ^{8}Be*→ ^{8}Be at a certain angle between the electrons. The theoretical interpretation is in terms of the production of a spin 1 boson, christened X, identified as a carrier of a fifth force with range of about 12 fm, the nuclear length scale. The attribute 6.8σ tells that the probability that the finding is a statistical fluctuation is about 10^{-12}: already 5 sigma is regarded as the criterion for discovery. The assumption about the vector boson character looks at first well-motivated: the experimental constraints on the rate to gamma pairs eliminate the interpretation as a pseudoscalar boson whereas spin 1 bosons do not have these decays. In the standard reductionistic spirit it is assumed that X couples to p and n and that the coupling is the sum of the direct couplings to the u and d quarks making up the proton and neutron. The comparison with the experimental constraints forces the coupling to the proton to be very small: this is called protophobia. Perhaps it signifies something that many of the exotic particles introduced to explain some bump during the last years are assumed to suffer from various kinds of phobias. The assumption that X couples directly to quarks and therefore to nucleons is of course well-motivated in the standard nuclear physics framework relying on reductionism. 
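The significance figure can be checked directly from the Gaussian tail; a minimal sketch comparing the 6.8σ claim with the 5σ discovery threshold:

```python
# One-sided Gaussian tail probabilities for the significances quoted in
# the text: 6.8 sigma for the 8Be anomaly vs. the 5 sigma discovery
# threshold of particle physics.
import math

def one_sided_p(sigma):
    """P(X > sigma) for a standard normal variable X."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

p_68 = one_sided_p(6.8)
p_50 = one_sided_p(5.0)
print(f"6.8 sigma: p ~ {p_68:.1e}")  # of order 10^-12, as quoted
print(f"5.0 sigma: p ~ {p_50:.1e}")  # of order 10^-7
```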
Two observations, and the problems created by them
The TGD inspired interpretation based on the nuclear string model rests on two observations and the potential problems created by them.
There is however a problem: the estimate for Γ(π, e^{+}e^{-}) obtained by p-adically scaling the model, based on decay via a virtual gamma pair decaying to an e^{+}e^{-} pair, is by a factor 1/88 too low. One can consider the possibility that the dependence of f_{π} on the p-adic length scale is not the naively expected one, but this is not an attractive option. The increase of Planck constant seems to worsen the situation. Dark variants of weak bosons appear in an important role in both cold fusion and the TGD inspired model for chiral selection. They are effectively massless below the scaled-up Compton scale of weak bosons so that weak interactions become strong. Since the pion couples to the axial current, the decay to e^{+}e^{-} could proceed via annihilation to a Z^{0} boson decaying to an e^{+}e^{-} pair. The estimate for Γ(π(113), e^{+}e^{-}) is in the middle of the allowed range. The success suggests that the couplings of mesons to p-adically scaled-down weak bosons could describe the semileptonic decays of hadrons and explain the somewhat mysterious origin of CVC and PCAC.
Effective action approach
One must construct the effective action for the process using a relativistic approach and Poincare invariance.
Model for color bonds of nuclear strings
One should also construct a model for the color bonds connecting nucleons to nuclear strings.
Model for ^{8}Be* → ^{8}Be + X
With these ingredients one can construct a model for the decay ^{8}Be* → ^{8}Be + X.
For details see the article X boson as evidence for nuclear string model or the chapter Nuclear string model. 
Badly behaving photons and spacetime as 4-surface
There was a very interesting popular article with the title Light Behaving Badly: Strange Beams Reveal Hitch in Quantum Mechanics. The article told about a discovery made by a group of physicists at Trinity College Dublin in Ireland in the study of helical light beams with conical geometry. These light beams are hollow and have the axis of the helix as a symmetry axis. The surprising finding was that according to various experimental criteria one can say that the photons have spin S = +/- 1/2 with respect to rotations around the axis of the helix. The first guess would be that this is due to the fact that the rotational symmetry for the spiral conical beam is broken to axial rotational symmetry around the beam axis. This makes the situation 2-dimensional. In D=2 one can have braid statistics allowing fractional angular momentum for rotations around a hole, now the hollow interior of the beam. One can however counter-argue that photons with half odd integer braid spin should obey Fermi statistics. This would mean that only one photon with a fixed spin is possible in the beam. Something seems to go wrong with the naive argument: the exchange of photons does not seem to correspond to a 2π rotation as a homotopy, which would be the topological manner to state the problem. The authors of the article suggest that besides the ordinary conserved angular momentum one can identify also a second conserved angular momentum like operator.
In the TGD framework this question relates interestingly to the assumption that spacetime is a 4-surface in M^{4}× CP_{2}. Could X^{4} and M^{4} correspond to the two loci for the action of rotations? One can indeed have two kinds of photons. Photons can correspond to spacetime sheets in M^{4}× CP_{2} or they can correspond to spacetime sheets topologically condensed to a spacetime surface X^{4}⊂ M^{4}× CP_{2}. For the first option one would have the ordinary quantization of angular momentum in M^{4}. For the second option one would have quantization of X^{4} angular momentum, which in the units of M^{4} angular momentum could correspond to half-integer or even more general quantization.
For details see the article Badly behaving photons and spacetime as 4-surface or the chapter Criticality and dark matter. 
Could the replication of mirror DNA teach something about chiral selection?
I received a link to a very interesting popular article from which I learned that short strands of mirror DNA and mirror RNA, known as aptamers, have been produced commercially for decades, a total surprise to me. Aptamers bind to targets like proteins and block their activity, and this ability can be utilized for medical purposes. Now researchers at Tsinghua University of Beijing have been able to create a mirror variant of an enzyme, DNA polymerase, catalyzing the transcription of mirror DNA to mirror RNA and also the replication of mirror DNA. What is needed are the DNA strand to be replicated or transcribed, the mirror DNA nucleotides, and a short primer strand, since the DNA polymerase starts to work only if the primer is present. This is like recalling a poem only after hearing the first few words. The commonly used DNA polymerase containing about 600 amino acids is too long to be built up as a right-handed version, and the researchers used a much shorter one, the polymerase of African swine fever virus having only 174 amino acids. The replication turned out to be very slow. A primer of 12 nucleotides was extended to a strand of 18 nucleotides in about 4 hours: 3/2 nucleotides per hour. The extension to a strand of 56 nucleotides took 36 hours, making 44/36 = 11/9 nucleotides per hour. DNA and its mirror image coexisted peacefully in a solution. One explanation for the absence of mirror life is that the replication and transcription of the mirror form was so slow that it lost the fight for survival. A second explanation is that the emergence of mirror forms of DNA polymerase and other enzymes was less probable. Can one learn anything from this?
The crucial finding is that the states of the dark proton regarded as part of a dark nuclear string can be mapped naturally to DNA, RNA, tRNA, and amino acid molecules and that the vertebrate genetic code can be reproduced naturally. This suggests that the genetic code is realized at the level of dark nuclear physics and induces its chemical variant. More generally, biochemistry would be a kind of shadow of dark matter physics. A model for dark proton sequences and their helical pairing is proposed and estimates for the parity conserving and breaking parts of the Z^{0} interaction potential are deduced. For details see the article Could the replication of mirror DNA teach something about chiral selection? or the chapter Criticality and dark matter of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy". 
One step further in the understanding of the origins of life
I learned about a very interesting discovery related to the problem of understanding how the basic building bricks of life might have emerged. RNA (DNA) has the nucleotides A, G, C, U (T) as basic building bricks. The first deep question is how the nucleotides A, G, C, U, and T emerged.
See the chapter Quantum criticality and dark matter or the article One step further in the understanding the origins of life.

Phase transition temperatures of 405-725 K in superfluid ultra-dense hydrogen clusters on metal surfaces
I received from Jouni a very helpful comment to an earlier blog posting telling about the work of Prof. Leif Holmlid related to cold fusion and comparing Holmlid's model with the TGD inspired model (see also the article). This helped to find a new article by Holmlid and Kotzias with the title "Phase transition temperatures of 405-725 K in superfluid ultra-dense hydrogen clusters on metal surfaces", published towards the end of April and providing very valuable information about the ultra-dense phase of hydrogen/deuterium that he postulates to be crucial for cold fusion (see this). The postulated ultra-dense phase would have properties surprisingly similar to the phase postulated to be formed by dark magnetic flux tubes carrying dark proton sequences generating dark beta-stable nuclei by dark weak interactions. My original intuition was that this phase is not ultra-dense but has a density nearer to the ordinary condensed matter density. The density however depends on the value of Planck constant, and with a Planck constant of order m_{p}/m_{e} ≈ 0.9 × 2^{11} ≈ 1836 times the ordinary one, one obtains the density reported by Holmlid, so that the models become surprisingly similar. The earlier representations were mostly based on the assumption that the distance between dark protons is in the Angstrom range rather than the picometer range and thus by a factor of about 32 longer. The modification of the model is straightforward: one prediction is that radiation with energy scale of 1-10 keV should accompany the formation of dark nuclei. In fact, there are also similarities about which I did not know!

NMP and adelic physics
In a given p-adic sector the entanglement entropy (EE) is defined by replacing the logarithms of probabilities in Shannon's formula by the logarithms of their p-adic norms. The resulting entropy satisfies the same axioms as ordinary entropy but makes sense only for probabilities which are rational valued or belong to an algebraic extension of rationals. The algebraic extension corresponds to the evolutionary level of the system, and the algebraic complexity of the extension serves as a measure for the evolutionary level. p-Adically also extensions determined by roots of e can be considered. What is so remarkable is that the number theoretic entropy can be negative. A simple example allows one to get an idea about what is involved. If the entanglement probabilities are rational numbers P_{i}=M_{i}/N, ∑_{i} M_{i}=N, then the primes appearing as factors of N correspond to negative contributions to the number theoretic entanglement entropy and thus to information. The factors of M_{i} correspond to positive contributions. For maximal entanglement with P_{i}=1/N the EE is negative. The interpretation is that the entangled state represents quantally a concept or a rule as a superposition of its instances defined by the state pairs in the superposition. An identity matrix means that one can choose the state basis in an arbitrary manner, and the interpretation could be in terms of an "enlightened" state of consciousness characterized by "absence of distinctions". In the general case the basis is unique. Metabolism is a central concept in biology and neuroscience. Usually metabolism is understood as a transfer of ordered energy and various chemical metabolites to the system. In TGD metabolism could be basically just a transfer of NE (negentropic entanglement) from nutrients to the organism. Living systems would be fighting for NE to stay alive (NMP is merciless!) and stealing of NE would be the fundamental crime. 
TGD has been plagued by a long-standing interpretational problem: can one apply the notion of number theoretic entropy in the real context or not? If this is possible at all, under what conditions is this the case? How does one know that the entanglement probabilities are not transcendental, as they would be in the generic case? There is also a second problem: p-adic Hilbert space is not a well-defined notion since the sum of p-adic probabilities, defined as moduli squared for the coefficients of the superposition of orthonormal states, can vanish and one obtains zero norm states. These problems disappear if the reduction occurs in the intersection of reality and p-adicities, since there the Hilbert spaces have some algebraic number field as coefficient field. By SH (strong form of holography) the 2-D states provide all the information needed to construct quantum physics, in particular quantum measurement theory.
One can also ask whether the other mathematical feats performed by idiot savants could be understood in terms of their ability to directly experience, "see", the prime composition (adelic decomposition) of an integer or even a rational. This could for instance allow them to "see" whether an integer is, say, a 3rd power of some smaller integer: all prime exponents in it would be multiples of 3. If the person is able to generate an NE for which the probabilities P_{i}=M_{i}/N are, apart from normalization, equal to given integers M_{i}, ∑ M_{i}=N, then they could be able to "see" the prime compositions of M_{i} and N. For instance, they could "see" whether both M_{i} and N are 3rd powers of some integer and just by going through trials find the integers satisfying this condition. For details see the chapter Negentropy Maximization Principle or the article TGD Inspired Comments about Integrated Information Theory of Consciousness.
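"Seeing" whether an integer is a 3rd power indeed reduces to inspecting the exponents of its prime factorization; a minimal sketch of this criterion:

```python
# An integer is a perfect cube exactly when every exponent in its prime
# factorization is a multiple of 3 (the criterion described in the text).
def prime_factorization(n):
    """Return {prime: exponent} for an integer n > 1, by trial division."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def is_cube(n):
    return all(e % 3 == 0 for e in prime_factorization(n).values())

print(prime_factorization(216))    # {2: 3, 3: 3}, i.e. 216 = 6^3
print(is_cube(216), is_cube(217))  # True False
```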

Is cold fusion becoming a new technology?
The progress in cold fusion research has been really fast during the last years, and the most recent news might well mean the final breakthrough concerning practical applications, which would include not only wasteless energy production but maybe also production of elements such as metals. The popular article titled Cold Fusion Real, Revolutionary, and Ready Says Leading Scandinavian Newspaper tells about the work of Prof. Leif Holmlid and his student Sindre Zeiner-Gundersen. For more details about the work of Holmlid et al see this, this, this, and this. The latter revealed the details of an operating cold fusion reactor in Norway reported to generate 20 times more energy than required to activate it. The estimate of Holmlid is that Norway would need 100 kg of deuterium per year to satisfy its energy needs (this would suggest that the amount of fusion products is too small to be practical except in situations where the amounts needed are really small). The amusing coincidence is that I constructed towards the end of the last year a detailed TGD based model of cold fusion (see this), and the findings of Leif Holmlid served as an important guideline, although the proposed mechanism is different. Histories are cruel, and the cruel history of cold fusion begins in 1989, when Pons and Fleischmann reported anomalous heat production involving a palladium target and electrolysis in heavy water (deuterium replacing hydrogen). The reaction is impossible in the world governed by textbook physics, since the Coulomb barrier makes it impossible for positively charged nuclei to get close enough. If ordinary fusion is in question, the reaction products should involve gamma rays and neutrons, and these have not been observed. The community preferred textbooks over observations, labelled Pons and Fleischmann and their followers as crackpots, and it became impossible to publish anything in so-called respected journals. 
The pioneers have however continued to work with cold fusion, and a few years ago the American Chemical Society had to admit that there might be something in it, and cold fusion researchers regained the status of respectable researchers. There have been several proposals for working reactors, such as Rossi's E-Cat, and NASA is performing research in cold fusion. In countries like Finland cold fusion is still a cursed subject and will probably remain so until cold fusion becomes the main energy source also in the heating of physics departments.
The model of Holmlid for cold fusion
Leif Holmlid is a professor emeritus in chemistry at the University of Gothenburg. He has quite recently published work on Rydberg matter in the prestigious journals of APS and is now invited to tell about his work on cold fusion at a meeting of the American Physical Society.
Issues not so well understood
The process has some poorly understood aspects.
It seems that Holmlid's experiments realize cold fusion and that cold fusion might soon be a well-established technology. A real theoretical understanding is however missing. New physics is definitely required, and TGD could provide it.
For background see the chapter Cold fusion again or article with the same title. 
Tensor Networks and S-matrices
The concrete construction of scattering amplitudes has been the toughest challenge of TGD, and the slow progress has occurred through the identification of general principles, with many side tracks. One of the key problems has been unitarity. The intuitive expectation is that unitarity should reduce to a local notion, somewhat like classical field equations reduce the time evolution to a local variational principle. The presence of propagators has however been the obstacle for locally realized unitarity, in which each vertex would correspond to a unitary map in some sense. TGD suggests two approaches to the construction of the S-matrix.
Objections
It is certainly clear from the beginning that a possibly existing description of the S-matrix in terms of tensor networks cannot correspond to the perturbative QFT description in terms of Feynman diagrams.
The overly optimistic vision
With these prerequisites one can follow the optimistic strategy and ask how tensor networks could allow one to generalize the notion of unitary S-matrix in the TGD framework.
For the details see the new chapter Holography and Quantum Error Correcting Codes: TGD View or the article with the same title. 
Holography and Quantum Error Correcting Codes: TGD View
Strong form of holography is one of the basic tenets of TGD, and I have been working with topological quantum computation in the TGD framework, with the braiding of magnetic flux tubes defining the spacetime correlates for topological quantum computer programs. Flux tubes are accompanied by fermionic strings, which can become braided too and would actually represent the braiding at the fundamental level. Also timelike braiding of fermionic lines at lightlike 3-surfaces and the braiding of lightlike 3-surfaces themselves is involved, and one can talk about spacelike and timelike braidings. These two are not independent, being related by the dance metaphor (think of dancers on the parquet connected by threads to a wall, generating both timelike and spacelike braidings). I have proposed that DNA and the lipids at the cell membrane are connected by braided flux tubes such that the flow of lipids in the lipid layer forming a liquid crystal would induce a braiding storing neural events to memory. I have a rather limited understanding about error correcting codes. Therefore I was happy to learn that there is a conference in Stanford in which leading gurus of quantum gravity and quantum information sciences are talking about these topics. The first lecture that I listened to was about a possible connection between holography and quantum error correcting codes. The lecturer was Preskill and the title of the talk was "Holographic quantum error-correcting codes: Toy models for the bulk/boundary correspondence" (see this and this). A detailed representation can be found in the article of Preskill et al. The idea is that the time = constant section of AdS, which is a hyperbolic space allowing tessellations, can define tensor networks. So called perfect tensors are the building bricks of the tensor networks providing a representation for holography. There are three observations that set bells ringing and actually motivated this article.
One can criticize AdS/CFT based holography because it has Minkowski space only as a rather non-unique conformal boundary resulting from conformal compactification. The situation gets worse as one starts to modify AdS by populating it with blackholes. And even this is not enough: one can imagine anything inside blackhole interiors: wormholes connecting them to other blackholes, anything. An entire mythology of mystic creatures filling the white (or actually black) areas of the map. Postmodernistic sloppiness is the problem of present-day theoretical physics (everything goes) and this leads to inflationary story telling. Minimalism would be badly needed. AdS/CFT is very probably mathematically correct. The question is whether the underlying conformal symmetry, certainly already huge, is large enough and whether its proper extension could allow one to get rid of the admittedly artificial features of AdS/CFT. In the TGD framework conformal symmetries are generalized thanks to the metric 2-dimensionality of the lightcone boundary and of lightlike 3-surfaces in general. The resulting generalization of the Kac-Moody group as the supersymplectic group replaces a finite-dimensional Lie group with the infinite-dimensional group of symplectic transformations and leads to what I call strong form of holography, in which AdS is replaced with 4-D spacetime surface and Minkowski space with 2-D partonic 2-surfaces and their lightlike orbits defining the boundary between Euclidian and Minkowskian spacetime regions: this is very much like ordinary holography. Also the imbedding space M^{4}× CP_{2}, fixed uniquely by twistorial considerations, plays an important role in the holography. AdS/CFT realization of holography is therefore not absolutely essential. Even better, its generalization to TGD involves no fictitious boundaries and is free of the problems posed by closed timelike geodesics. 
Perfect tensors and tensor networks realized in terms of magnetic body carrying negentropically entangled dark matter
Preskill et al suggest a representation of holography in terms of tensor networks associated with the tessellations of hyperbolic space and utilizing perfect tensors defining what I call negentropic entanglement. Also the Minkowski space lightcone has hyperbolic space as its proper time = constant section (lightcone proper time constant section in TGD), so that the model for the tensor network realization of holography cannot be distinguished from the TGD variant, which does not need AdS at all. The interpretational problem is that one obtains also states in which interior local states are nontrivial and are mapped by holography to boundary states: holography in the standard sense should exclude these states. In TGD this problem disappears since the macroscopic surface is replaced with what I call wormhole throat (something different from the GRT wormhole throat, for which the magnetic flux tube is the TGD counterpart), which can also be microscopic.
Physics of living matter as physics of condensed dark matter at magnetic bodies?
A very attractive idea is that in living matter magnetic flux tube networks defining quantum computational networks provide a realization of tensor networks realizing also the holographic error correction mechanism: negentropic entanglement (perfect tensors) would be the key element! As I have proposed, these flux tube networks would define a kind of central nervous system making it possible for living matter to consciously experience its biological body using the magnetic body. These networks would also give rise to the counterpart of condensed matter physics of dark matter at the level of the magnetic body: the replacement of lattices based on subgroups of the translation group with an infinite number of tessellations means that this analog of condensed matter physics describes quantum complexity. 
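The defining property of a perfect tensor (every balanced bipartition of its legs gives a matrix proportional to an isometry) is easy to verify numerically. The following sketch is my own toy check, not taken from the article of Preskill et al: it uses the smallest well-known example, the 4-leg qutrit tensor coming from the absolutely maximally entangled state |Φ⟩ ~ ∑_{i,j} |i, j, i+j, i+2j⟩ with addition mod 3.

```python
import numpy as np
from itertools import combinations

def ame43_tensor():
    """4-leg perfect tensor for qutrits from the AME(4,3) state
    |Phi> ~ sum_{i,j} |i, j, i+j, i+2j> (arithmetic mod 3)."""
    T = np.zeros((3, 3, 3, 3))
    for i in range(3):
        for j in range(3):
            T[i, j, (i + j) % 3, (i + 2 * j) % 3] = 1.0
    return T / 3.0  # normalize the state

def is_perfect(T):
    """Perfect tensor: for every balanced bipartition of the legs,
    the reshaped matrix M satisfies M^dagger M proportional to identity."""
    n = T.ndim
    for legs in combinations(range(n), n // 2):
        rest = tuple(a for a in range(n) if a not in legs)
        M = np.transpose(T, legs + rest).reshape(3 ** (n // 2), -1)
        G = M.conj().T @ M
        if not np.allclose(G, G[0, 0] * np.eye(G.shape[0])):
            return False
    return True
```

For the AME(4,3) tensor every bipartition gives (a multiple of) a permutation matrix, so `is_perfect` returns True, while e.g. a GHZ-like tensor with only diagonal entries fails the test.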
I am just a novice in the field of quantum error correction (and probably remain such), but from experience I know that the best manner to learn something new is to tell the story in your own words. Of course, I am not at all sure whether this story helps anyone to grasp the new ideas. In any case, if one has a new vision about the physical world, the situation becomes considerably easier, since creative elements enter the retelling of the story: how could these new ideas be realized in the Universe of TGD, bringing in new features relating to the new views about spacetime, quantum theory, and living matter and consciousness in relation to quantum physics? For the details see the new chapter Holography and Quantum Error Correcting Codes: TGD View or the article with the same title. 
Reactor antineutrino anomaly as indication for new nuclear physics predicted by TGD
A highly interesting new neutrino anomaly has emerged recently. The anomaly appears in two experiments and is referred to as the reactor antineutrino anomaly. There is a popular article in Symmetry Magazine about the discovery of the anomaly in the Daya Bay experiment. Bee mentioned in the Backreaction blog the RENO experiment exhibiting the same anomaly. What happens is that more antineutrinos with energies around 5 MeV are produced than there should be: the anomaly seems to extend to antineutrino energy of about 6.3 MeV. What makes me happy is that this anomaly might provide new evidence for the TGD based model of atomic nuclei.

New findings about high T_{c} superconductivity
Waterloo physicists discover new properties of superconductivity is the title of an article popularizing the article of David Hawthorn, Canada Research Chair Michel Gingras, doctoral student Andrew Achkar, and post-doctoral student Zhihao Hao published in Science. There is a dose of hype involved. As a matter of fact, it has been known for years that electrons flow along stripes, kind of highways, in high T_{c} superconductors: I know this quite well since I have proposed a TGD inspired model explaining this (see this and this)! The effect is known as nematicity and means that electron orbitals break lattice symmetries and align themselves like a series of rods. Nematicity in long length scales occurs at temperatures below the critical point for superconductivity. In the above mentioned work the cuprate CuO_{2} is studied. For non-optimal doping the critical temperature for the transition to macroscopic superconductivity is below the maximal critical temperature. Long length scale nematicity is observed in these phases. In a second article it is however reported that nematicity is in fact preserved above the critical temperature as a local order, at least up to the upper critical temperature, which is not easy to understand in the BCS theory of superconductivity. One can say that the stripes are short and short-lived so that genuine superconductivity cannot take place. These two observations yield further support for the TGD inspired model of high T_{c} superconductivity and bio-superconductivity. It is known that antiferromagnetism is essential for the phase transition to superconductivity, but the Maxwellian view about electromagnetism and standard quantum theory do not make it easy to understand how. Magnetic flux tube is the first basic new notion provided by TGD. Flux tubes carry dark electrons with scaled up Planck constant h_{eff} =n×h: this is the second new notion. 
This implies scaling up of quantal length scales and in this manner makes also superconductivity possible. Magnetic flux tubes in antiferromagnetic materials form short loops. At the upper critical point they however reconnect with some probability to form loops which look locally like parallel flux tubes carrying magnetic fields in opposite directions. The probability of the reverse phase transition is so large that there is a competition. The members of Cooper pairs are at parallel flux tubes and have opposite spins so that the net spin of the pair vanishes: S=0. At the first critical temperature the average length and lifetime of the flux tube highways are too short for macroscopic superconductivity. At the lower critical temperature all flux tubes reconnect permanently and the average length of the pathways becomes long enough. This phase transition is mathematically analogous to percolation, in which water seeping through a sand layer wets it completely. The competition between the phases between these two temperatures corresponds to quantum criticality, in which phase transitions h_{eff}/h=n_{1} ←→ n_{2} take place in both directions (n_{1} =1 is the most plausible first guess). Earlier I did not fully realize that Zero Energy Ontology provides an elegant description for the situation (see this and this). The reason was that I thought that quantum criticality occurs at a single critical temperature rather than in a temperature interval. Nematicity is detected locally below the upper critical temperature and in long length scales below the lower critical temperature. During the last years it has become clear that condensed matter physicists are discovering at an increasing pace the physics predicted by TGD. The same happens in biology. It is a pity that particle physicists have missed the train so badly. They are still trying to cook up something from superstring models, which have been dead for years. 
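The percolation analogy can be made concrete with a small toy simulation (purely illustrative, nothing TGD specific): if each junction between adjacent short flux tube segments reconnects with probability p, the mean length of a connected conduction pathway behaves like 1/(1-p) and diverges as p approaches 1, which is the toy analog of the pathways becoming long enough for macroscopic supercurrents.

```python
import random

def mean_pathway_length(p, n_segments=100_000, seed=1):
    """Mean length (in segments) of maximal connected runs when each
    junction between adjacent segments reconnects with probability p."""
    rng = random.Random(seed)
    runs, length = [], 1
    for _ in range(n_segments - 1):
        if rng.random() < p:
            length += 1          # junction reconnected: pathway grows
        else:
            runs.append(length)  # junction open: pathway ends here
            length = 1
    runs.append(length)
    return sum(runs) / len(runs)
```

For p = 0.5 the mean pathway is about 2 segments, for p = 0.9 about 10 segments, in agreement with the geometric expectation 1/(1-p).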
The first reason is essentially sociological: the fight for funding has led to what might be politely called "aggressive competition". Being the best is not enough, and there is a temptation to use tricks which prevent others from showing publicly that they have something interesting to say. ArXiv censorship is an excellent tool in this respect. The second problem is hopelessly narrow specialization and technicalization: a colleague can be defined by telling the algorithms that he is applying. Colleagues do not see physics for particle physics, or even worse, for the "physics" of superstrings and branes in 10, 11, or 12 dimensions. See the chapter Super-Conductivity in Many-Sheeted Space-Time. 
Quantization of thermal conductance and quantum thermodynamics
The Finnish research group led by Mikko Möttönen, working at Aalto University, has made several highly interesting contributions to condensed matter physics during the last years (see the popular articles about condensed matter magnetic monopoles and about tying quantum knots: both contributions are interesting also from the TGD point of view). This morning I read about a new contribution published in Nature. What has been shown in the recent work is that quantal thermal conductivity is possible for wires of 1 meter, when the heat is transferred by photons. This length is by a factor 10^{4} longer than in the earlier experiments. The improvement is amazing, and the popular article tells that it could mean a revolution in quantum computation, since heat spoiling the quantum coherence can be carried away very effectively and in a controlled manner from the computer. Quantal thermal conductivity means that the transfer of energy along the wire takes place without dissipation. To understand what is involved, consider first some basic definitions. Thermal conductivity k is defined by the formula j= k∇T, where j is the energy current per unit area and T the temperature. In practice it is convenient to use the thermal power obtained by integrating the heat current over the transversal area of the wire to get the heat current dQ/dt as the analog of electric current I. The thermal conductance g for a wire allowing approximation as a 1-D structure is given by the conductivity divided by the length of the wire: the power transmitted is P= gΔT, g=k/L. One can deduce a formula for the conductance at the limit when the wire is ballistic, meaning that no dissipation occurs. For instance, a superconducting wire is a good candidate for this kind of channel and is used in the measurement. The conductance at the limit of quantum limited heat conduction is an integer multiple of the conductance quantum g_{0}= k_{B}^{2}π^{2}T/3h: g=ng_{0}. 
Here the sum is over parallel channels. What is remarkable is the quantization and the independence on the length of the wire. Once the heat carriers are in the wire, the heat is transferred since dissipation is not present. A completely analogous formula holds true for electric conductance along a ballistic wire: now g would be an integer multiple of g_{0}= 2e^{2}/h. Note that in a 2-D system the quantum Hall conductance (not conductivity) is an integer (or more generally some rational) multiple of σ_{0}= e^{2}/h. The formula in the case of conductance can be "derived" heuristically from the Uncertainty Principle ΔE Δt=h by putting ΔE= eΔV as a difference of Coulomb energies and Δt= e/I= e/(g_{0}ΔV). The essential prerequisite for quantal conduction is that the length of the wire is much shorter than the wavelength assignable to the carrier of heat or of thermal energy: λ >> L. It is interesting to find how well this condition is satisfied in the recent case. The wavelength of the photons involved in the transfer should be much longer than 1 meter. An order of magnitude for the energy of the photons involved, and thus for the frequency and wavelength, can be deduced from the thermal energy of photons in the system. The electron temperatures considered are roughly in the range of 10-100 mK. Kelvin corresponds to 10^{-4} eV (this is more or less all that I learned in the thermodynamics course in my student days) and eV corresponds to 1.24 microns. This temperature range roughly corresponds to a thermal energy range of 10^{-6}-10^{-5} eV. The wavelength corresponding to the maximal intensity of blackbody radiation is in the range of 2.3-23 centimeters. One can of course ask whether the condition λ >> L=1 m is consistent with this. A specialist would be needed to answer this question. Note that the gap energy .45 meV of the superconductor defines the energy scale for Josephson radiation generated by the superconductor: this energy would correspond to a wavelength of about 2 mm, well below 1 meter. 
This energy does not correspond to the energy scale of thermal photons. I am of course unable to say anything interesting about the experiment itself, but I cannot avoid mentioning the hierarchy of Planck constants. If one has E= h_{eff}f, h_{eff}=n×h, instead of E= hf, the condition λ >> L can be easily satisfied. For a superconducting wire this would be true for superconducting magnetic flux tubes in the TGD Universe, and maybe it could be true also for photons, if they are dark and travel along the flux tubes. One can even consider the possibility that quantal heat conductivity is possible over much longer wire lengths than 1 m. Showing this to be the case would provide strong support for the hierarchy of Planck constants. There are several interesting questions to be pondered in the TGD framework. Could one identify classical spacetime correlates for the quantization of conductance? Could one understand how classical thermodynamics differs from quantum thermodynamics? What could quantum thermodynamics actually mean? There are several rather obvious ideas.

Could cold fusion solve some problems of the standard view about nucleosynthesis?
The theory of nucleosynthesis involves several uncertainties, and it is interesting to see whether interstellar cold fusion could provide mechanisms allowing an improved understanding of the observed abundances. There are several problems: the D abundance is too low unless one assumes the presence of dark matter/energy during Big Bang nucleosynthesis (BBN); there are two lithium anomalies; there is evidence for the synthesis of boron during BBN; for large redshifts the observed metallic abundances are lower than predicted. The observed abundances of light nuclei are higher than predicted and require so-called cosmic ray spallation producing them via nuclear fission induced by cosmic rays. The understanding of the abundances of nuclei heavier than Fe requires supernova nucleosynthesis: the problem is that supernova 1987A did not provide support for the r-process. The idea of dark cold fusion could be taken more seriously if it helped to improve the recent view about nucleosynthesis. In an additional section to the article Cold fusion again I try to develop a systematic view about how cold fusion could help with these problems. I take as a starting point the earlier model for cold dark fusion discussed in the above link and also in blog postings: see this, this, and this. This model could be seen as a generalization of supernova nucleosynthesis in which a dark variant of neutron and proton capture gives rise to more massive isotopes. Also a variant allowing the capture of dark alpha particles can be considered. Besides this, a purely standard physics modification of Big Bang nucleosynthesis is proposed, based on the resonant alpha capture of ^{7}Li allowing to produce more boron and perhaps explain the second Li anomaly. See the article Cold fusion again or the chapter Cold fusion again. 
Bacteria behave like spin systems: Why?
In Physorg there was an interesting article titled Bacteria streaming through a lattice behave like electrons in a magnetic material. The popular article tells about the article by Dunkel et al with the title Ferromagnetic and antiferromagnetic order in bacterial vortex lattices. The following summarizes what has been studied and observed.
If one takes TGD inspired quantum biology as a starting point, one can pose more concrete questions and propose possible answers to them.
See the article Bacteria behave like spin systems: Why? or the chapter Criticality and dark matter. 
Quantum phase transitions and 4D spin glass energy landscape
TGD has led to two descriptions for quantum criticality. The first one relies on the notion of 4-D spin glass degeneracy and emerged already around 1990, when I discovered the unique properties of Kähler action. The second description relies on quantum phases and quantum phase transitions, and I have tried to explain my understanding about it above. The attempt to understand how these two approaches relate to each other might provide additional insights.

What's New In TGD Inspired View About Phase Transitions?
The comment of Ulla mentioned the Kosterlitz-Thouless phase transition and its infinite order. I am not a condensed matter physicist, so my knowledge and understanding are rather rudimentary and I had to go to Wikipedia. I realized that I have not paid attention to the classification of the types of phase transitions while speaking of quantum criticality. Also the relationship of the ZEO inspired description of phase transitions to that of standard positive energy ontology has remained poorly understood. In the following I try to represent various TGD inspired visions about phase transitions and criticality in an organized manner and relate them to the standard description.
About thermal and quantum phase transitions
It is good to begin with something concrete. The Wikipedia article lists examples about different types of phase transitions. These phase transitions are thermodynamical.
Some examples of quantum phase transition like phenomena in TGD framework
TGD suggests some examples of quantum phase transition like phenomena.
Questions related to TGD inspired description of phase transitions
The natural questions are for instance the following ones.
Symmetries and phase transitions
The notion of symmetry is considerably more complex in the TGD framework than in the standard picture based on positive energy ontology. There are dynamical symmetries of dark matter states located at the boundaries of CD. For spacetime sheets describing phase transitions there are also dynamical symmetries, but they are different. In standard physics one has just states and their symmetries. Conformal gauge symmetries form a hierarchy: in conformal field theories this symmetry is maximal and the hierarchy is absent.
For background see the article What's new in TGD inspired view about phase transitions? or the chapter Criticality and dark matter. 
What ZEO can give to the description of criticality?
One should clarify what quantum criticality exactly means in the TGD framework. In positive energy ontology the notion of state becomes fuzzy at criticality. It is difficult to assign the long range fluctuations and the associated quanta with any of the phases coexisting at criticality, since they are most naturally associated with the phase change. Hence Zero Energy Ontology (ZEO) might show its power in the description of (quantum) critical phase transitions.

More about BMS supertranslations
Bee had a blog posting about the new proposal of Hawking, Perry and Strominger (HPS) to solve the blackhole information loss problem. In the article Maxwellian electrodynamics is taken as a simpler toy example.

Solution of the Ni62 mystery of Rossi's E-Cat
In my blog a reader calling himself Axil made a highly interesting comment. He told that in the cold fusion ashes from Rossi's E-Cat there is a 100 micrometer sized block containing almost pure Ni62 isotope. This is one of the Ni isotopes, but not the lightest one, Ni58, whose isotope fraction is 67.8 per cent. Axil gave a link providing additional information, and I dare to take the freedom to attach it here. The Ni62 finding looks really mysterious. One interesting finding is that the size 100 micrometers of the Ni62 block corresponds to the secondary p-adic length scale for W bosons. Something deep? Let us however forget this clue. One can imagine all kinds of exotic solutions, but I guess that it is the reaction kinetics "dark fusion + subsequent ordinary fusion repeated again and again" which leads to a fixed point, which is enrichment by the Ni62 isotope. This is like iteration. This guess seems to work!
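The fixed point intuition can be checked with a back-of-the-envelope computation: Ni62 has the highest binding energy per nucleon of all nuclides, so any kinetics that keeps nudging nickel isotopes toward maximal binding ends up there. The following toy sketch (my own illustration, using the standard semi-empirical mass formula with textbook coefficients, nothing TGD specific) picks out A=62 as the most tightly bound nickel isotope:

```python
def binding_per_nucleon(A, Z=28):
    """Semi-empirical mass formula estimate of B/A in MeV
    (textbook coefficients; Z defaults to nickel)."""
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18
    N = A - Z
    B = (aV * A                        # volume term
         - aS * A**(2/3)               # surface term
         - aC * Z * (Z - 1) / A**(1/3) # Coulomb term
         - aA * (A - 2 * Z)**2 / A)    # asymmetry term
    if Z % 2 == 0 and N % 2 == 0:      # even-even: pairing bonus
        B += aP / A**0.5
    elif Z % 2 == 1 and N % 2 == 1:    # odd-odd: pairing penalty
        B -= aP / A**0.5
    return B / A

# The "fixed point" of binding-energy-increasing kinetics among Ni isotopes
best_A = max(range(56, 71), key=binding_per_nucleon)  # -> 62
```

An iteration that repeatedly shifts the isotope distribution toward larger binding per nucleon would thus accumulate at A=62, consistent with the enrichment picture.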
See the article Cold Fusion Again or the chapter with the same title. 