What's new in

HYPER-FINITE FACTORS, P-ADIC LENGTH SCALE HYPOTHESIS, AND DARK MATTER HIERARCHY

Note: Newest contributions are at the top!



Year 2016

How could the transition to the superconducting state be induced by classical radiation?

Blog and Facebook discussions have turned out to be extremely useful, and quite often new details of the existing picture emerge from them. We have had interesting exchanges with Christoffer Heck in the comment section of the posting Are microtubules macroscopic quantum systems?, and this pleasant surprise occurred also now thanks to a question by Christoffer.

Recall that Bandyopadhyay's team claims to have detected the analog of superconductivity, when microtubules are subjected to AC voltage (see this). The transition to superconductivity would occur at certain critical frequencies. For references and the TGD inspired model see the article.

The TGD proposal for bio-superconductivity - in particular the one appearing in microtubules - is the same as that for high Tc superconductivity. Quantum criticality, large heff/h=n phases of Cooper pairs of electrons, and parallel magnetic flux tube pairs carrying the members of the Cooper pairs form the essential parts of the mechanism. S=0 (S=1) Cooper pairs appear when the magnetic fields at the parallel flux tubes have opposite (same) directions.

Cooper pairs would be present already below the gap temperature, but possible supercurrents could flow only in short loops formed by magnetic flux tubes in a ferromagnetic system. AC voltage at a critical frequency would somehow induce a transition to superconductivity in long length scales by inducing a phase transition of microtubules without helical symmetry to those with helical symmetry, and by fusing the conduction pathways with length of 13 tubulins to much longer ones by reconnection of magnetic flux tubes parallel to the conduction pathways.

The phonon mechanism responsible for the formation of Cooper pairs in ordinary superconductivity cannot however be involved with high Tc superconductivity nor with bio-superconductivity. There is an upper bound of about 30 K for the critical temperature of BCS superconductors. A few days ago I learned about high Tc superconductivity around 500 K for n-alkanes (see the blog posting) so that the mechanism for high Tc is certainly different.

The question of Christoffer was the following. Could microwave radiation, for which photon energies are around 10^-5 eV for the ordinary value of Planck constant and correspond to the gap energy of BCS superconductivity, induce a phase transition to BCS superconductivity and maybe to microtubular superconductivity (if it exists at all)?

This inspires the question of how precisely the AC voltage at critical frequencies could induce the transition to high Tc and bio-superconductivity. Consider first what could happen in the transition to high Tc superconductivity.

  1. In high Tc superconductors such as copper oxides anti-ferromagnetism is known to be essential, as are 2-D sub-lattice structures. Anti-ferromagnetism suggests that closed flux tubes form squares with opposite directions of the magnetic field at the opposite sides of the square. The opposite sides of the square would carry the members of the Cooper pair.
  2. At quantum criticality these squares would reconnect to form very long flattened squares. The members of the Cooper pairs would reside at the parallel flux tubes forming the sides of the flattened square. The gap energy would consist of the interaction energies of the members with the magnetic fields and of the mutual interaction energy of their magnetic moments.

    This mechanism does not work in standard QM since the energies involved are far too low as compared to the thermal energy. Large heff/h=n would however scale up the magnetic energies by n. Note that the notion of gap energy should perhaps be replaced with the collective binding energy per Cooper pair, obtained by taking the difference of the total energies for the gap phase formed at the higher temperature and for the superconducting phase formed at Tc, and dividing by the number of Cooper pairs.

    Another important difference from BCS is that Cooper pairs would be present already below the gap temperature. At quantum criticality the conduction pathways would become much longer by reconnection. This would represent an example of "topological" condensed matter physics. Now however space-time topology would be in question.

  3. The analogs of phonons could be present as transversal oscillations of magnetic flux tubes: at quantum criticality long wavelength "magneto-phonons" would be present. The transverse oscillations of the flux tube squares would give rise to reconnection and to the formation of long flattened squares.

If the irradiation mechanism or its generalization to high Tc superconductivity works, the energy of the photon should be around the gap energy - more precisely, around the energy difference per Cooper pair between the phase with long flux tube pairs and the phase with short square-like flux tubes.
  1. In BCS superconductivity the irradiation should induce the formation of Cooper pairs. In high Tc superconductivity it should induce a phase transition in which the small square-shaped flux tubes reconnect to long flux tubes forming the conducting pathways. The system should radiate away the energy difference between these phases: the counterpart of the binding energy could be defined as the radiated energy per Cooper pair.
  2. One could think of the analog of stimulated emission. Assume that Cooper pairs have two states: the genuine Cooper pair and the non-superconducting Cooper pair. This is the case in high Tc superconductivity but not in BCS superconductivity, where the emergence of superconductivity creates the Cooper pairs. One can of course ask whether one could speak of the analog of stimulated emission also in this case.
  3. Above Tc but below the gap temperature one has the analog of an inverted population: all pairs are in the higher energy state. Irradiation with a photon beam with energy corresponding to the energy difference gives rise to stimulated emission, and the system goes to the superconducting state with a lower energy.
This mechanism could explain the finding of Bandyopadhyay's team that AC perturbation at certain critical frequencies gave rise to a ballistic state (no dependence of the resistance on the length of the wire, so that the resistance must be located at its ends). The team used photons with frequency scales of MHz, GHz, and THz. The corresponding photon energy scales are about 10^-8 eV, 10^-5 eV, and 10^-2 eV for the ordinary value of Planck constant and are below thermal energies.
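As a sanity check, the photon energies E = h×f for these frequency scales can be computed and compared with the thermal energy scale (a minimal sketch; CODATA-rounded constants, with 310 K taken as an assumed physiological temperature):

```python
# Photon energy E = h*f for the frequency scales used in the experiments,
# compared with the thermal energy k_B*T (310 K assumed as physiological temperature).
H_EV = 4.135667e-15   # Planck constant, eV*s (CODATA, rounded)
K_B_EV = 8.617333e-5  # Boltzmann constant, eV/K

def photon_energy_ev(f_hz: float) -> float:
    """Photon energy in eV for the ordinary value of Planck constant."""
    return H_EV * f_hz

kT = K_B_EV * 310.0  # ~0.027 eV
for label, f in [("MHz", 1e6), ("GHz", 1e9), ("THz", 1e12)]:
    E = photon_energy_ev(f)
    print(f"{label}: E = {E:.1e} eV, E/kT = {E/kT:.1e}")
```

All three energies come out well below kT, in agreement with the order-of-magnitude scales quoted above.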

In TGD, classical radiation should also have large heff/h=n photonic counterparts with much larger energies E=heff×f in order to explain the quantal effects of ELF radiation in the EEG frequency range on the brain (see this). The general proposal is that heff equals what I have called the gravitational Planck constant hbar_gr=GMm/v0 (see this or this). This implies that dark cyclotron photons have a universal energy range with no dependence on the mass of the charged particle. Bio-photons have energies in the visible and UV range, much above thermal energy, and would result from the transition transforming dark photons with large heff = hgr into ordinary photons.
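The size of the scaling needed here is easy to estimate: for a dark photon at an EEG frequency to carry a bio-photon energy in the visible range, heff/h must be enormous (an illustrative estimate; the 2 eV target is an assumption standing in for the visible range, not a value from the text):

```python
H_EV = 4.135667e-15  # Planck constant, eV*s

def required_scaling(f_hz: float, target_ev: float) -> float:
    """n = heff/h needed so that a dark photon of frequency f carries energy target_ev."""
    return target_ev / (H_EV * f_hz)

# A 10 Hz EEG-range dark photon reaching a ~2 eV (visible, bio-photon) energy:
n = required_scaling(10.0, 2.0)
print(f"n = heff/h ~ {n:.0e}")
```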

One could argue that an AC field does not correspond to radiation. In the TGD framework this kind of electric field can be interpreted as the analog of a standing wave generated when a charged particle has contacts to parallel "massless extremals" representing classical radiation with the same frequency propagating in opposite directions. The net force experienced by the particle corresponds to a standing wave.

Irradiation using classical fields would be a general mechanism for inducing bio-superconductivity. Superconductivity would be generated when it is needed. The findings of Blackman and other pioneers of bio-electromagnetism about quantal effects of ELF em fields on the vertebrate brain stimulated the idea about dark matter as phases with a non-standard value of Planck constant. Also these findings could be interpreted in terms of the generation of a superconducting phase by this phase transition.

For background see the chapter Super-Conductivity in Many-Sheeted Space-Time.



Room temperature superconductivity for alkanes

Superconductivity with a critical temperature of 231 °C has been reported for n-alkanes containing n=16 or more carbon atoms in the presence of graphite (see this).

Alkanes (see this) can be linear (CnH2n+2), with the carbon backbone forming a snake-like structure; branched (CnH2n+2, n > 2), in which the carbon backbone splits in one or more directions; or cyclic (CnH2n), with the carbon backbone forming a loop. Methane CH4 is the simplest alkane.

What makes the finding so remarkable is that alkanes serve as basic building bricks of organic molecules. For instance, cyclic alkanes modified by replacing some carbon and hydrogen atoms by other atoms or groups form the aromatic 5-cycles and 6-cycles serving as basic building bricks of DNA. I have proposed that aromatic cycles are superconducting and define fundamental units of molecular consciousness, which in the case of DNA combine to form a larger linear structure.

Organic high Tc superconductivity is one of the basic predictions of quantum TGD. The mechanism of superconductivity would be based on Cooper pairs of dark electrons with a non-standard value of Planck constant heff=n×h, implying quantum coherence in length scales scaled up by n (also bosonic ions and Cooper pairs of fermionic ions can be considered).

The members of a dark Cooper pair would reside at parallel magnetic flux tubes carrying magnetic fields with the same or opposite direction: for opposite directions one would have S=0 and for the same direction S=1. The cyclotron energy of the electrons, proportional to heff, would be scaled up, and this would scale up the binding energy of the Cooper pair and make superconductivity possible at temperatures even higher than room temperature (see this).
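To see why the scaling matters, one can compare the ordinary (n=1) electron cyclotron energy with the thermal energy scale (a rough sketch; the 1 T flux-tube field strength is an illustrative assumption, not a value from the text):

```python
import math

E_CHARGE = 1.602177e-19  # elementary charge, C
M_E = 9.109384e-31       # electron mass, kg
H = 6.626070e-34         # Planck constant, J*s
K_B = 1.380649e-23       # Boltzmann constant, J/K

def cyclotron_energy_ev(B_tesla: float, n: float = 1.0) -> float:
    """Electron cyclotron energy E = n*h*f_c with f_c = e*B/(2*pi*m_e), in eV.
    n = heff/h is the scaling factor of the dark-matter hypothesis."""
    f_c = E_CHARGE * B_tesla / (2 * math.pi * M_E)
    return n * H * f_c / E_CHARGE

kT_room = K_B * 300.0 / E_CHARGE   # ~0.026 eV at room temperature
E1 = cyclotron_energy_ev(1.0)      # ordinary n=1 value, ~1.2e-4 eV
print(E1, kT_room, kT_room / E1)   # n of a few hundred lifts E above kT
```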

This mechanism would explain the basic qualitative features of high Tc superconductivity in terms of quantum criticality. Between the gap temperature and Tc one would have superconductivity in short length scales, and below Tc superconductivity in long length scales. These temperatures would correspond to quantum criticalities at which large heff phases would emerge.

What could be the role of graphite? The 2-D hexagonal structure of graphite is expected to be important, as it is also in ordinary superconductivity: perhaps graphite provides long flux tubes and n-alkanes provide the Cooper pairs at them. Either graphite, n-alkane as an organic compound, or both together could induce quantum criticality. In living matter quantum criticality would be induced by a different mechanism. For instance, in microtubules it would be induced by AC voltage at critical frequencies.

See chapter Super-Conductivity in Many-Sheeted Space-Time and the article New findings about high-temperature super-conductors.



More precise interpretation of gravitational Planck constant

The notion of gravitational Planck constant hgr=GMm/v0 was introduced originally by Nottale. In TGD it was interpreted in terms of astrophysical quantum coherence. The interpretation was that hgr characterizes a gravitational flux tube connecting masses M and m and v0 is a velocity parameter - some characteristic velocity assignable to the system.

It has become clear that a more precise formulation of the rather loose ideas about how gravitational interaction is mediated by flux tubes is needed.

  1. The assumption treats the two masses asymmetrically.
  2. A huge number of flux tubes is needed since every particle pair M-m would involve a flux tube. It would also be difficult to understand the fact that in the Newtonian framework the total gravitational interaction can be thought of as a sum over interactions with the composite particles of M. In principle M can be decomposed into parts in many manners - elementary particles and their composites and larger structures formed from them: there must be some subtle difference between these different compositions - all need not be possible - not seen at the Newtonian and GRT space-time level but maybe having a representation in the many-sheeted space-time and involving hgr.
  3. The flux tube picture in its original form seems to lead to problems with the basic properties of the gravitational interaction: namely, the superposition of gravitational fields and the absence, or at least smallness, of screening by masses between M and m. One should assume that the ends of the flux tubes associated with the pair M-m move as m moves with respect to M. This looks too complex.

    Linear superposition and the absence of screening can be understood in the picture in which particles form topological sum contacts with the flux tubes mediating the gravitational interaction. This picture is used to deduce the QFT-GRT limit of TGD. Note that also other space-time sheets can mediate the interaction, and pairs of MEs and flux tubes emanating from M but not ending at m are one possible option. In the following I however talk about flux tubes.

These problems find a solution if hgr characterizes the magnetic body (MB) of a particle with mass m topologically condensed to a flux tube carrying total flux M. m can also correspond to a mass larger than elementary particle mass. This makes the situation completely symmetric with respect to M and m. The essential point is that the interaction takes place via touching of MB of m with flux tubes from M.
  1. In accordance with the fractality of the many-sheeted space-time, the elementary particle fluxes from a larger mass M can combine to a sum of fluxes corresponding to masses Mi<M with ∑ Mi=M at larger flux tubes with hbar_gr = GMim/v0,i > hbar. This can take place in many manners, and in the many-sheeted space-time gives rise to different physical situations.

    Due to the large value of hgr it is possible to have macroscopic quantum phases at these sheets with a universal gravitational Compton length Lgr = hbar_gr/m = GMi/v0, which does not depend on the mass m of the particle. Here m can also be a mass larger than the elementary particle mass. The convergence of the perturbation theory indeed makes the macroscopic quantum phases possible. This picture holds true also for the other interactions. Clearly, the many-sheeted space-time brings in something new, and there are excellent reasons to believe that this new element relates to the emergence of complexity - say via many-sheeted tensor networks (see this).

  2. Quantum criticality would occur near the boundaries of the regions from which flux runs through wormhole contacts from smaller to larger flux sheets and would be thus associated with boundaries defined by the throats of wormhole contacts at which the induced metric changes from Minkowskian to Euclidian.
  3. This picture implies the fountain effect - one of the applications of the large hgr phase - a kind of antigravity effect for dark matter, maybe even for non-microscopic masses m: the larger size of the MB implies a larger average distance from the source of the gravitational flux, so that the experienced gravitational field is weaker. This might have technological applications some day.
This picture is a considerable improvement but there are still problems to ponder. In particular, one should understand why the integer n= heff/h= hgr/h interpreted as a number of sheets of the singular covering space of MB of m emerges topologically. The large value of hgr implies a huge number of sheets.

Could the flux sheet covering associated with Mi code the value of Mi, using the Planck mass as unit, as the number of sheets of this covering? One would have an N=M/MPl sheeted structure with each sheet carrying a Planckian flux. The fluxes experienced by the MB of m would in turn consist of sheets obtained by fusing nm = MPl v0/m Planckian fluxes, so that the total number of sheets would be reduced to n = N/nm = GMm/v0 sheets.
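The sheet counting can be checked numerically (an illustrative sketch in SI units with the v0/c factor written out explicitly; the Earth-electron pair and v0 = c/2^11 are assumptions chosen only to make the numbers concrete):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.0546e-34  # reduced Planck constant, J*s
C = 2.998e8        # speed of light, m/s
M_PL = math.sqrt(HBAR * C / G)  # Planck mass, ~2.18e-8 kg

def n_direct(M: float, m: float, v0: float) -> float:
    """n = hbar_gr/hbar = G*M*m/(v0*hbar): the text's GMm/v0 in hbar=1 units."""
    return G * M * m / (v0 * HBAR)

def n_from_sheet_counting(M: float, m: float, v0: float) -> float:
    """The same number from the covering picture: N = M/M_Pl Planckian flux
    sheets, fused in bunches of n_m = (M_Pl/m)*(v0/c) as seen by the MB of m."""
    N = M / M_PL
    n_m = (M_PL / m) * (v0 / C)
    return N / n_m

M_EARTH = 5.972e24   # kg (illustrative choice of M)
m_e = 9.109e-31      # kg (illustrative choice of m: electron)
v0 = C / 2**11       # illustrative value of the velocity parameter

print(n_direct(M_EARTH, m_e, v0), n_from_sheet_counting(M_EARTH, m_e, v0))
```

The two expressions agree identically, since N/n_m = M*m*c/(M_Pl^2*v0) and M_Pl^2 = hbar*c/G.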

Why should this kind of fusion of Planckian fluxes to larger fluxes happen? Could quantum information theory provide clues here? And why is v0 involved?

See the chapter Criticality and dark matter.



Could Pollack effect make cell membrane a self-loading battery?

Elemer Rosinger had a Facebook link to an article telling about the Clarendon dry pile, a very long-lived battery providing energy for an electric clock (see this, this, and this). This clock, known also as the Oxford bell, has been ringing for 175 years now, and the article suggests that the longevity of the battery is not really understood. The bell is not actually ringing so loud that the human ear could hear it, but one can see the motion of the small metal sphere between the oppositely charged electrodes of the battery in the video.

The principle of the clock is simple. The gravitational field of the Earth is also present. When the sphere touches the negative electrode, it receives a bunch of electrons and gives the bunch away as it touches the positive electrode, so that a current consisting of these bunches runs between the electrodes. The average current during the oscillation period of 2 seconds is a nanoampere, so that charge on the order of a nanocoulomb is transferred during each period (a Coulomb corresponds to 6.242 × 10^18 elementary charges (electrons)).
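The charge bookkeeping is a one-liner (a minimal sketch; the 1 nA figure is the rounded value quoted above):

```python
E_CHARGE = 1.602177e-19  # elementary charge, C

I_AVG = 1e-9  # average current, A (the ~nanoampere quoted above)
PERIOD = 2.0  # oscillation period, s

Q = I_AVG * PERIOD      # charge transferred per period
print(Q, Q / E_CHARGE)  # ~2e-9 C, i.e. ~1.2e10 electrons per swing
```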

The dry pile was invented by the priest and physicist Giuseppe Zamboni in 1812. The pile consists of 2,000 pairs of discs of tin foil glued to paper impregnated with zinc sulphate and coated on the other side with manganese dioxide: 2,000 thin batteries in series. The operation of the battery gradually leads to the oxidation of the zinc and the loss of manganese dioxide, but the process takes place very slowly. One might actually wonder whether it takes place too slowly, so that some other source of energy than the electrostatic energy of the battery would keep the clock running. The Karpen pile is an analogous battery devised by Vasile Karpen. It has now worked for 50 years.

Cold fusion is associated with electrolysis. Could the functioning of this mystery clock involve cold fusion, nowadays taken seriously even by the American Physical Society thanks to the work of the group of prof. Holmlid? Electrolytes have of course been "understood" for aeons. Ionization leads to charge separation and a current flows in the resulting voltage. With a feeling of deep shame I must confess that I cannot understand how the ionization is possible in standard physics. This of course might be just my immense stupidity - every second year physics student would immediately tell that this is "trivial" - so trivial that he would not even bother to explain why. The electric field between the electrodes is immensely weak in the scale of molecules. How can it induce the ionization? Could ordinary electrolytes involve new physics involving cold fusion liberating energy? These are the questions which pop up in my stupid mind. Stubborn as I am in my delusions, I have proposed what this new physics might be, with inspiration coming from the strange experimental findings of Gerald Pollack, from cold fusion, and from my own view about dark matter as phases of ordinary matter with a non-standard value heff=n×h of Planck constant. Continuing with my weird delusions I dare ask: Could cold fusion provide the energy for the "miracle" battery?

To understand what might be involved one must first learn some basic concepts. I am trying to do the same.

  1. A battery consists of two distinct electrochemical cells. Each cell consists of an electrode and an electrolyte. The electrodes are called the anode and the cathode. By definition, the electron current along the external wire flows to the cathode and leaves the anode.
  2. There are also ionic currents flowing inside the battery. In the absence of the ionic currents the electrodes of the battery lose their charge. In the loading the electrodes get their charges. In the ideal situation the ionic current is the same as the electron current and the battery does not lose its charging. Chemical reactions are however taking place near and at the electrodes, and their reversals take place during charging. The chemical changes are not completely reversible, so that the lifetime of the battery is finite.

    The ionic current can be rather complex: the carriers of the positive charge from the anode can even change during the charge transfer. What matters is that negative charge from the cathode is transferred to the anode in some manner, and this charge logistics can involve several steps. Near the cathode the currents of positive ions (cations) and electrons from the anode combine to form neutral molecules. The negative current carriers from the cathode to the anode are called anions.

  3. The charge of the electrochemical cell resides in the electrolyte near the surface of the electrode, rather than inside the electrode as one might first think, and the chemical processes involve the neutralization of an ion and the transfer of the neutral outcome to or from the electrode.
  4. The cathode - or better, the electrochemical cell containing the cathode - can have either sign of charge. For positive charge one has a battery liberating energy as the electron current connecting the negative and positive poles goes through the load, such as a LED. For negative charge the current flows only if there is an external energy feed: this is the loading of the battery. An external voltage source and thus energy is needed to drive the negative and positive charges to the electrodes. The chemical reactions involved can be rather complex and proceed in the reverse direction during the loading process. A mobile phone battery is a familiar example.

    During charging the roles of the anode and cathode are interchanged: understanding this helps considerably.

Could cold fusion help to understand why the Clarendon dry pile is so long-lived?
  1. The battery is a series of very many simpler batteries. The mechanism should reduce to the level of a single building brick. This is assumed in the following.
  2. The charge of the battery tends to be reduced unless the ionic and electronic currents are identical. Also chemical changes occur. The mechanism involved should oppose the reduction of the charging by creating positive charge at the cathode and negative charge at the anode, or by inducing an additional voltage between the electrodes of the battery inducing its loading. The energy feed involved might also change the direction of the basic chemical reactions, as in ordinary loading, by raising the temperature at the cathode or anode.
  3. Could the formation of Pollack's exclusion zones (EZs) in the electrolytic cell containing the anode help to achieve this? EZs carry a high negative charge. According to the TGD based model, protons are transformed to dark protons at magnetic flux tubes. If the positive dark charge at the flux tubes is transferred to the electrolytic cell containing the cathode and transformed to ordinary charge, it would increase the positive charge of the cathode. The effect would be analogous to the loading of the battery. The energy liberated in the process would compensate for the loss of charge energy due to the electronic and ionic currents.
  4. In the ordinary loading of the battery the external voltage induces the reversal of the chemical processes occurring in the battery. This is due to the external energy feed. Could the energy feed from dark cold fusion induce similar effects now? For instance, could the energy liberated at the cathode, as positively charged dark nuclei transform to ordinary ones, raise the temperature and in this manner feed the energy needed to change the direction of the chemical reactions?
This model might have an interesting application to the physics of cell membrane.
  1. The cell membrane consisting of two lipid layers defines the analog of a battery. The cell interior plus the inner lipid layer (anode) and the cell exterior plus the outer lipid layer (cathode) are the analogs of the electrolytic cells.

    What has been troubling me for two decades is how this battery manages to load itself. Metabolic energy is certainly needed and the ADP-ATP mechanism is an essential element. I do not however understand how the membrane manages to keep its voltage.

    A second mystery is why it is hyperpolarization, rather than polarization, which tends to stabilize the membrane potential in the sense that the probability for the spontaneous generation of a nerve pulse is reduced. Neither do I understand why depolarization (the reduction of the membrane voltage) leads to the generation of a nerve pulse involving a rapid change of the sign of the membrane voltage and the flow of various ionic currents between the interior and exterior of the cell.

  2. In the TGD inspired model for the nerve pulse the cell interior and cell exterior, or at least their regions near the lipid layers, are regarded as superconductors forming a generalized Josephson junction. For the ordinary Josephson junction the Coulomb energy due to the membrane voltage defines the Josephson energy. Now the Josephson energy is replaced by the ordinary Josephson energy plus the difference of the cyclotron energies of the ion at the two sides of the membrane. Also ordinary Josephson radiation can be generated. The Josephson currents are assumed to run along magnetic flux tubes connecting the cell interior and exterior. This assumption receives support from the strange finding that the small quantal currents associated with the membrane remain essentially the same when the membrane is replaced with a polymer membrane.
  3. The model for Clarendon dry pile suggests an explanation for the self-loading ability. The electrolytic cell containing the anode corresponds to the negatively charged cell interior, where Pollack's EZs would be generated spontaneously and the feed of protonic charge to the outside of the membrane would be along flux tubes as dark protons to minimize dissipation. Also ions would flow along them. The dark protons driven to the outside of the membrane transform to ordinary ones or remain dark and flow spontaneously back and provide the energy needed to add phosphate to ADP to get ATP.
  4. The system could be quantum critical in the sense that a small reduction of the membrane potential induces a nerve pulse. Why would the ability to generate Pollack's EZs in the interior be lost for a few milliseconds during the nerve pulse? The hint comes from the fact that Pollack's EZs can be generated by feeding infrared radiation to water bounded by a gel. Also the ordinary Josephson radiation generated by the cell membrane Josephson junction has energy in the infrared range!

    Could the ordinary Josephson radiation generate EZs by inducing the ionization of almost ionized hydrogen bonded pairs of water molecules? The hydrogen bonded pairs must be very near to the ionization energy so that the ordinary Josephson energy of about 0.06 eV assignable to the membrane voltage is enough to induce the ionization followed by the formation of H3/2O. The resulting EZ would consist of layers with the effective stoichiometry H3/2O.

    As the membrane voltage is reduced, the Josephson energy would no longer be enough to induce the ionization of hydrogen bonded pairs of water molecules, EZs are not generated, and the battery voltage is rapidly reduced: a nerve pulse is created. In the case of hyperpolarization the energy exceeds the energy needed for ionization and the situation becomes more stable.

  5. This model also allows one to understand the effect of anesthetics. Anesthetics could basically induce hyperpolarization so that Josephson photons would continually generate Pollack's EZs and create dark particles at the magnetic flux tubes. This need not mean that consciousness is lost at the cell level. Only sensory and motor actions are prevented because nerve pulses are not possible. This prevents the formation of sensory and motor mental images at our level of the hierarchy.

    The Meyer-Overton correlation states that the effectiveness of an anesthetic correlates with its solubility in the lipid membrane. This is the case if the presence of the anesthetic in the membrane induces hyperpolarization, so that the energies of the photons of Josephson radiation would be higher than needed for the generation of EZs accompanied by magnetic flux tubes along which ionic Josephson currents would flow between cell interior and exterior. Evidence exists for these quantal currents. In the case of the battery these dark ions would flow from the cell containing the anode to that containing the cathode. For depolarization the energy of Josephson photons would be too low to allow kicking protons off the hydrogen bonded pairs of water molecules, so that EZs would not be created, self-loading would stop, and a nerve pulse would be generated.
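As a back-of-the-envelope check (my own, not from the text): if one takes the Josephson photon energy to be e×V for a membrane voltage V of about 60 mV, as the 0.06 eV figure above suggests, the corresponding wavelength indeed lands in the infrared:

```python
# Hypothetical sanity check on the claim that cell-membrane Josephson
# radiation is infrared. Assumes photon energy E = e*V for a single
# elementary charge and V ~ 60 mV, matching the 0.06 eV figure in the text.

E_TIMES_LAMBDA_EV_NM = 1239.84  # hc in eV*nm (photon energy-wavelength relation)

def josephson_photon_wavelength_nm(voltage_mV: float) -> float:
    """Wavelength of a photon carrying energy e*V, in nanometers."""
    energy_eV = voltage_mV / 1000.0  # e*V expressed in eV
    return E_TIMES_LAMBDA_EV_NM / energy_eV

wavelength = josephson_photon_wavelength_nm(60.0)
print(f"{wavelength / 1000:.1f} micrometers")  # ~20.7 um: mid-infrared
```

A 0.06 eV photon thus corresponds to roughly 20 micrometers, well inside the infrared band, consistent with the claim that Josephson radiation could mimic the infrared feed used to generate EZs.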

See the chapter Cold fusion again. See also the article Could Pollack effect make cell membrane a self-loading battery?.



ER=EPR and TGD

The ER=EPR correspondence proposed by Leonard Susskind and Juan Maldacena in 2014 (see also this) has become the most fashionable fashion in theoretical physics. Even the idea that space-time could emerge from ER=EPR has been proposed.

ER

The ER (Einstein-Rosen) bridge in turn is a purely classical notion associated with general relativity theory (GRT). The ER bridge is illustrated in terms of a fold of space-time. Locally there are two sheets near to each other and connected by a wormhole: these sheets are actually parts of the same sheet. Along the bridge the distance between two systems can be very short; along the folded sheet it can be very long. This suggests some kind of classical non-locality in the sense that the physics around the two throats of the wormhole can be strongly correlated: the non-locality would be implied by topology. This is not in accordance with the view of classical physics in Minkowski space-time.

EPR

The EPR (Einstein-Podolsky-Rosen) paradox states that it is possible to measure both the position and the momentum of two particles more accurately than the Heisenberg Uncertainty Principle allows, unless the measurement involves an instantaneous transfer of information between the particles, denied by special relativity. The conclusion of EPR was that quantum theory is incomplete and should be extended by introducing hidden variables. The argument was based on the classical physics view about microcausality.

Later the notion of quantum entanglement became established and it became clear that no classical superluminal transfer of information is needed. If one accepts the basic rules of quantum measurement theory - in particular tensor products of distant systems - the EPR paradox disappears. Entanglement is of course a genuinely non-local phenomenon not encountered in classical physics and one could wonder whether it might have a classical space-time correlate after all. State function reduction becomes the problem and has remained the ugly duckling of quantum theory. Unfortunately, this ugly duckling has become a taboo and is surrounded by a thick cloud of messy interpretations. Hence the situation is still far from settled.

At the time EPR and ER were proposed, there was no idea about a possible connection between the two. Both notions involve unexpected non-locality, and one might therefore ask whether there is a connection.

ER=EPR

In some sense ER=EPR could be seen as a kind of victory for Einstein. There could after all be a classical space-time correlate for entanglement and for how state function reduction for a system induces state function reduction in a distant entangled system. It however seems that quantum theory does not allow a signal travelling along the wormhole connecting the entangled systems.

What ER=EPR says is that maximal entanglement for blackholes is somehow dual to an Einstein-Rosen bridge (wormhole). Susskind and Maldacena even suggest that this picture generalizes to entanglement between any kind of systems and that even elementary particles are connected by Planckian wormholes.

The next step has been to argue that entanglement is more fundamental than space-time, and that space-time would emerge. The attempts to realize the idea involve holography, and already this means the introduction of 2-D surfaces in 3-D space so that the argument becomes circular. In my opinion the emergence of space-time is doomed to remain one of the many fashions of theoretical physics, which last a few years and are then lost in the sands of time. These fashions reflect the deep crisis of theoretical physics, which has lasted for four decades, and are as such a good sign telling that people at least try.

The motivation for the following TGD inspired arguments was one of the arguments against ER=EPR: ER=EPR does not conform with the linearity of quantum mechanics. The state pairs in the superposition defining an entangled state are unentangled (separable) and there should be no wormhole connecting the systems in this case. In an entangled state there should be a wormhole. This makes sense only if the space-time geometry couples to the quantum dynamics, so that one must give up the idea that one has Schrödinger amplitudes in a fixed background and linear superposition for them. This looks weird even in GRT space-time.

Some background about TGD

Before discussing what ER=EPR corresponds to in TGD, a few words about quantum TGD are in order.

  1. TGD is formulated in terms of the geometry of the "world of classical worlds" (WCW) consisting of 3-surfaces, which are holographically related to 4-D space-time surfaces. This holography is implied by General Coordinate Invariance (GCI). One can say that space-time surfaces as preferred extremals of Kähler action are analogous to Bohr orbits and that the classical theory is an exact part of quantum theory.

    What I call the strong form of GCI (SGCI) implies the strong form of holography (SH) stating that string world sheets and partonic 2-surfaces dictate the dynamics. A slightly weaker form of SH is that the light-like orbits of partonic 2-surfaces, which are metrically 2-dimensional and lead to a generalization of conformal invariance, dictate the dynamics. The additional degrees of freedom would be discrete and label conformal equivalence classes of the light-like orbits.

  2. Quantum states are described as spinor fields in WCW - WCW spinors correspond to fermionic Fock states. Zero energy ontology (ZEO) is an important element of the picture and means that physical states are replaced by analogs of physical events: pairs of states whose members reside at the boundaries of a causal diamond (CD) with opposite conserved quantum numbers - this guarantees conservation laws. CD is obtained from a causal diamond of M4, defined as the intersection of future and past directed light-cones, by replacing its points with CP2, and has light-like boundaries. Quantum measurement theory based on ZEO resolves the basic paradox of quantum measurement theory and extends it to a theory of consciousness.
  3. Quantum classical correspondence (QCC) is an essential element of quantum TGD and relates to quantum measurement theory: the results of measurements are always interpreted classically. In particular, space-time surfaces as preferred extremals of Kähler action (the lift of Kähler action to twistor space brings in a cosmological constant term to the 4-D Kähler action in dimensional reduction) define classical correlates for quantum states. Conserved fermionic quantum numbers identified as eigenvalues for the Cartan algebra of symmetries are equal to the corresponding classical charges assignable to Kähler action. Already this implies that the space-time interior is different for an unentangled fermion resp. entangled fermion pairs.

The counterpart of ER=EPR in TGD framework

The TGD variant of ER=EPR has been part of TGD for two decades but has remained unnoticed since superstring hegemony has dominated the theory landscape. There are still many profound ideas to be re-discovered but their realization in the framework of GRT is practically impossible since they relate closely to the vision about space-times as 4-surfaces in M4× CP2. What does ER=EPR then correspond to in TGD?

  1. In the TGD framework one gets rid of blackholes. One can say that they are replaced by the regions of space-time with Euclidian signature of the induced metric. This is of course something completely new from the GRT viewpoint. One can say that these regions correspond to 4-D counterparts for the lines of scattering diagrams. Minkowskian and Euclidian space-time regions are separated by light-like 3-surfaces at which the induced 4-metric is singular in the sense that its determinant vanishes. The 4-D tangent space of the space-time surface becomes locally 3-D. These surfaces can be identified as light-like orbits of partonic 2-surfaces starting from and ending at the light-like boundaries of CD.
  2. The orbits of partonic 2-surfaces replace blackhole horizons and can be regarded as carriers of fundamental fermionic quantum numbers and therefore elementary particle numbers. For instance, elementary particles can be seen as pairs of wormhole contacts connected at both sheets by a magnetic flux tube carrying monopole flux so that a closed flux tube results. SH implies that all data about the quantum state can be assigned to these 2-D surfaces at the future and past ends of CD. There could be a wave function in discrete degrees of freedom assignable to the light-like orbits (their conformal equivalence classes).
  3. The wormholes of GRT are replaced with magnetic flux tubes, which can be homologically trivial or non-trivial. In the latter case the wormhole throat behaves effectively as a magnetic charge, and these are expected to be relevant for elementary particles. The magnetic flux tubes which are homologically trivial are nearly vacuum extremals and gravitational interactions are expected to be mediated along them.
  4. The counterpart of ER=EPR is that magnetic flux tubes serve as space-time correlates of entanglement in long scales. In CP2 scales wormhole contacts serve the same role: for instance, gauge bosons correspond to entangled fermion-antifermion pairs at opposite throats of a wormhole with length of about CP2 size.

    This should follow from QCC and the challenge is to understand why un-entangled wormhole throats are not connected by magnetic flux tube but entangled ones are.

    The key point is SH. The linearity of quantum theory needs to hold true only at the orbits of partonic 2-surfaces and at string world sheets for the second quantized induced spinor fields. In the interior of space-time it need not hold true. As a matter of fact, it cannot be true since QCC demands that different fermionic Fock states correspond to different space-time interiors.

    The dependence of fermionic Cartan charges on fermionic quantum numbers and entanglement implies the dependence of corresponding classical conserved charges on fermion state. The natural conjecture is that entanglement demands fermionic strings connecting the partonic 2-surfaces assignable to magnetic flux tubes. Interior degrees of freedom would code for the conserved charges of fermionic states.

  5. In the TGD framework there is no need to assume a signal between the two systems during state function reduction even classically. The magnetic flux tubes fuse the wormhole throats into a single system behaving like a single particle. Indeed, TGD as a generalization of string models replaces point-like particles with 3-D surfaces, and by SH these are replaced with the (conformal equivalence classes of the orbits of) partonic 2-surfaces.
  6. This picture does not imply the emergence of space-time. The entanglement between fermionic states associated with different partonic 2-surfaces breaks the effective 2-dimensionality of the theory predicted otherwise (note that discrete degrees of freedom associated with light-like 3-surfaces are however possible). Entanglement forces genuine 3-dimensionality of the dynamics rather than emergence of 3-space.
The conclusion is that due to SH at the space-time level the superposition for fermionic Fock states (also that in orbital WCW degrees of freedom) is consistent with QCC. Notice that the fundamental space-time spinor fields identified as induced spinor fields are localized at string world sheets having boundaries at the orbits of partonic 2-surfaces (besides SH and the number theoretical vision also the well-definedness of em charge for spinor modes demands this) and therefore cannot as such correspond to the spinor fields of the QFT limit. These correspond to the modes of the classical imbedding space spinor fields characterizing the ground states for the representations of the super-symplectic algebra acting as isometries of WCW and its extension to a Yangian algebra with generators multi-local with respect to partonic 2-surfaces and generating naturally strongly (perhaps maximally) entangled states. In fact, in the TGD framework the entanglement would always be algebraic by number theoretic universality and would be maximally negentropic in the p-adic sense although it need not be maximal in the real sense.

See the chapter Negentropy Maximization Principle. See also the article ER=EPR and TGD.



Cloning of maximally negentropic states is possible: DNA replication as cloning of such states?

In a Facebook discussion with Bruno Marchal and Stephen King the notion of quantum cloning as copying of a quantum state popped up, and I ended up asking about approximate cloning and got a nice link, about which more below. From Wikipedia one learns some interesting facts about cloning. The no-cloning theorem states that the cloning of all states by a unitary time evolution of the tensor product system is not possible. It is however possible to clone an orthogonal basis of states. Does this have some deep meaning?

As a response to my question I got a link to an article by Lamourex et al showing that the cloning of entanglement - to be distinguished from the cloning of a quantum state - is not possible in the general case. Separability - the absence of entanglement - is not preserved. Approximate cloning necessarily generates some entanglement in this case, and the authors give a lower bound for the remaining entanglement in the case of an unentangled state pair.

The cloning of a maximally entangled state is however possible. What makes this so interesting is that maximally negentropic entanglement for rational entanglement probabilities in the TGD framework corresponds to maximal entanglement - the entanglement probabilities form a matrix proportional to the unit matrix - and just this entanglement is favored by the Negentropy Maximization Principle (NMP). Could maximal entanglement be involved with, say, DNA replication? Could maximally negentropic entanglement for algebraic extensions of rationals allow cloning, so that DNA entanglement negentropy could be larger than entanglement entropy?

What about entanglement probabilities in an algebraic extension of rationals? In this case the real number based entanglement entropy is not maximal since the entanglement probabilities are different. What can one say about p-adic entanglement negentropies: are they still maximal under some reasonable conditions? The logarithms involved depend on the p-adic norms of the probabilities, and a p-adic norm is in the generic case just the inverse of a power of p. Number theoretical universality suggests that the entanglement probabilities are of the form

Pi= ai/N

with ∑ ai = N, the ai being algebraic numbers not involving natural numbers and thus having unit p-adic norm.

With this assumption the p-adic norms of the Pi reduce to those of 1/N, as for maximal rational entanglement. If this is the case, the p-adic negentropy equals log(p^k) if p^k divides N. The total negentropy equals log(N) and is maximal, having the same value as for rational probabilities equal to 1/N.
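The arithmetic above is easy to make concrete. The following is a minimal sketch (my own illustration, restricted to probabilities Pi = 1/N): each prime p with p^k dividing N exactly contributes log(p^k) to the negentropy, and the contributions sum to log(N).

```python
from math import log

def p_adic_valuation(n: int, p: int) -> int:
    """Largest k such that p^k divides n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def prime_factors(n: int):
    """Distinct prime factors of n by trial division."""
    fs, d = [], 2
    while d * d <= n:
        if n % d == 0:
            fs.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        fs.append(n)
    return fs

def total_p_adic_negentropy(N: int) -> float:
    """Sum over primes p dividing N of the p-adic negentropy for
    probabilities P_i = 1/N. The p-adic norm |1/N|_p equals p^k
    (k = valuation of N at p), so each prime contributes log(p^k)."""
    total = 0.0
    for p in prime_factors(N):
        k = p_adic_valuation(N, p)
        total += k * log(p)  # = log(p^k)
    return total

N = 12  # 12 = 2^2 * 3: contributions log(4) and log(3)
print(total_p_adic_negentropy(N), log(N))  # the two values agree: log(12)
```

The check confirms the claim in the text: for Pi = 1/N the total p-adic negentropy summed over the primes dividing N reproduces log(N), the maximal real entropy for N-dimensional entanglement.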

The real entanglement entropy is now however smaller than log(N), which would mean that p-adic negentropy is larger than the real entropy as conjectured earlier (see this). For rational entanglement probabilities the generation of entanglement negentropy - conscious information during evolution - would be accompanied by a generation of equal entanglement entropy measuring the ignorance about what the negentropically entangled states representing selves are.

This conforms with the observation of Jeremy England that living matter is an entropy producer (for a TGD inspired commentary see this). For algebraic extensions of rationals this entropy could however be smaller than the total negentropy. The second law follows as a shadow of NMP if the real entanglement entropy corresponds to the thermodynamical entropy. Algebraic evolution would allow the generation of conscious information faster than the environment is polluted, one might concretize! The higher the dimension of the algebraic extension of rationals, the larger the difference could be, and the future of the Universe might be brighter than one might expect by just looking around! Very consoling! One should however show that the above described situation can be realized, as NMP strongly suggests, before opening a bottle of champagne.

The impossibility of cloning of entanglement in the general case makes impossible the transfer of information in the form of arbitrary entanglement. Maximal entanglement - and maybe even negentropic entanglement maximal in the p-adic sectors - could however make communication possible without damaging the information at the source. Since conscious information is associated with the p-adic sectors responsible for cognition, one could even allow the modification of the entanglement probabilities and thus of the real entanglement entropy in the communication process, since the maximal p-adic negentropy depends only weakly on the entanglement probabilities.

Negentropic entanglement (NE) is assigned with conscious experiences with positive emotional coloring: the experience of understanding, the experience of love, etc. There is an old Finnish saying, which can be translated as "Shared joy is double joy!". Could the cloning of NE make possible the generation of entanglement by a loving attitude, so that living entities would not be mere thieves trying to steal NE by killing and eating each other?

For background see the chapter Negentropy Maximization Principle. See also the article Is the sum of p-adic negentropies equal to real entropy?.



Wigner's friend and Schrödinger's cat

I encountered in a Facebook discussion the Wigner's friend paradox (see this and this). Wigner leaves his friend in the laboratory together with Schrödinger's cat and the friend measures the state of the cat: the outcome is "dead" or "alive". Wigner returns and learns from his friend what the state of the cat is. The question is: was the state of the cat fixed already earlier or only when Wigner learned it from his friend? In the latter case the state of friend and cat would have been a superposition of pairs in which the cat was alive and the friend knew this, and in which the cat was dead and the friend knew this. Entanglement between cat and bottle would have been transferred to that between cat+bottle and Wigner's friend. Recall that this kind of information transfer occurs in quantum computation, and that quantum teleportation allows one to transfer an arbitrary quantum state but destroys the original.

The original purpose of Wigner was to demonstrate that consciousness is involved with the state function collapse. The TGD view is that the state function collapse can be seen as a moment of consciousness. Or more precisely, self as a conscious entity corresponds to a sequence of repeated state function reductions to the same boundary of a causal diamond (CD). One might say that self is a generalized Zeno effect in Zero Energy Ontology (ZEO). The first reduction to the opposite boundary of CD means the death of self and re-incarnation at the opposite boundary as a time reversed self. The experienced flow of time corresponds to the shift of the non-fixed boundary of CD, reduction by reduction, farther from the fixed boundary - also the state at it changes. Thus subjective time as a sequence of reductions is mapped to clock time, identifiable as the temporal distance between the tips of CD. The arrow of time is generated but changes in death and re-incarnation.

In the TGD inspired theory of consciousness the intuitive answer to the question of Wigner looks obvious. If the friend measured the state of the cat, it was indeed dead or alive already before Wigner arrived. What remains is the question what it means for Wigner, the "ultimate observer", to learn about the state of the cat from his friend. The question is about what conscious communications are.

Consider first the situation in the framework of standard quantum information theory.

  1. Quantum teleportation could make it possible to transfer an arbitrary quantum state from the brain of Wigner's friend to Wigner's brain. Quantum teleportation involves the generation of a Bell state of qubits assignable with Wigner's friend (A) and Wigner (B).
  2. This quantum state can be constructed by a joint measurement of the component of spin in the same direction at both A and B. One of the four eigenstates of (by convention) the operator Qz = Jx(1)⊗ Jy(2) - Jy(1)⊗ Jx(2) is the outcome. For spinors the actions of Jx and Jy change the sign of the Jz eigenvalue so that it becomes possible to construct the Bell states as eigenstates of Qz.
  3. After that Wigner's friend measures both the qubit representing the cat's state, which is to be communicated, and the qubit at A. The latter measurement does not allow one to predict the state at B. Wigner's friend communicates the two bits resulting from this measurement to Wigner classically. On the basis of these two classical bits Wigner performs a unitary operation on the qubit at his end (B) and transforms it into the qubit that was to be communicated.
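The standard teleportation protocol sketched in these steps can be simulated directly. The following is a minimal numpy sketch (a generic textbook protocol, not specific to TGD): the Bell measurement is realized as CNOT followed by Hadamard, the two-bit outcome is postselected, and the classical correction recovers the state at B.

```python
import numpy as np

# Single-qubit gates
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def cnot(control, target, n=3):
    """CNOT on an n-qubit register (qubit 0 = most significant bit)."""
    dim = 2 ** n
    U = np.zeros((dim, dim), dtype=complex)
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[j, i] = 1
    return U

def teleport(psi, m0, m1):
    """Teleport single-qubit state psi, postselecting the Bell-measurement
    outcome (m0, m1); returns the corrected state of the receiving qubit."""
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # shared pair
    state = np.kron(psi, bell)           # qubit 0: sender; qubits 1, 2: pair
    state = cnot(0, 1) @ state           # rotate qubits 0, 1 into Bell basis
    state = np.kron(np.kron(H, I2), I2) @ state
    # postselect qubits 0, 1 = (m0, m1); the rest lives on qubit 2
    block = state[(m0 * 2 + m1) * 2:(m0 * 2 + m1) * 2 + 2]
    phi = block / np.linalg.norm(block)
    # classical correction: X if m1 = 1, then Z if m0 = 1
    return np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ phi

psi = np.array([0.6, 0.8j])
for m0 in (0, 1):
    for m1 in (0, 1):
        out = teleport(psi, m0, m1)
        assert abs(abs(np.vdot(psi, out)) - 1) < 1e-9  # psi recovered
```

For all four measurement outcomes the corrected qubit matches the original state up to a phase, while the sender's copy is destroyed by the measurement, in line with the no-cloning theorem mentioned above.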
This allows one to communicate the qubit representing the measurement outcome (alive/dead). But what about meaning? What guarantees that the meaning of the bit representing the state of the cat is the same for Wigner and his friend? One can also ask how the joint measurement can be realized: it seems to require the presence of a system containing A⊗ B. To answer these questions one must introduce some notions of TGD inspired theory of consciousness: self hierarchy and the subself=mental image identification.

TGD inspired theory of consciousness predicts that during communication Wigner and his friend form a larger entangled system: this makes possible the sharing of meaning. Directed attention means that subject and object are entangled. The magnetic flux tubes connecting the two systems would serve as a correlate for the attention. This mechanism would be at work already at the level of molecular biology. Its analog would be the wormholes in the ER=EPR correspondence proposed by Maldacena and Susskind. Note that directed attention brings to mind the generation of the Bell entangled pair A-B. It would also make possible quantum teleportation.

Wigner's friend could also symbolize the "pointer of the measurement apparatus" constructed to detect whether cats are dead or alive. Consider this option first. If the pointer is a subsystem defining a subself of Wigner, it would represent a mental image of Wigner and there would be no paradox. If a qubit in the brain of Wigner's friend replaces the pointer of the measurement apparatus, then during communication Wigner and his friend form a larger entangled system experiencing this qubit. Perhaps this temporary fusion of selves allows one to answer the question about how common meaning is generated. Note that this would not require the quantum teleportation protocol but would allow it.

For background see the chapter Negentropy Maximization Principle.



Eigenstates of Yangian co-algebra generators as a manner to generate maximal entanglement?

Negentropically entangled objects are key entities in the TGD inspired theory of consciousness and in the construction of tensor networks, and the challenge is to understand how these could be constructed and what their properties could be. These states are diametrically opposite to unentangled eigenstates of single particle operators, usually elements of the Cartan algebra of a symmetry group. The entangled states should result as eigenstates of poly-local operators. Yangian algebras involve a hierarchy of poly-local operators, and twistorial considerations inspire the conjecture that the Yangian counterparts of the super-symplectic and other algebras, made poly-local with respect to partonic 2-surfaces or the end-points of boundaries of string world sheets at them, are symmetries of quantum TGD. Could Yangians allow one to understand maximal entanglement in terms of symmetries?

  1. In this respect the construction of maximally entangled states using the bi-local operator Qz = Jx⊗ Jy - Jy⊗ Jx is highly interesting since entangled states would result by state function reduction. A single particle operator like Jz would generate un-entangled states. The states obtained as eigenstates of this operator have permutation symmetries. The operator can be expressed as Qz = fzij Ji⊗ Jj, where the fABC are the structure constants of SU(2), and could be interpreted as the co-product associated with the Lie algebra generator Jz. Thus it would seem that unentangled states correspond to eigenstates of Jz and the maximally entangled states to eigenstates of the co-generator Qz. A kind of duality would be in question.
  2. Could one generalize this construction to n-fold tensor products? What about other representations of SU(2)? Could one generalize from SU(2) to an arbitrary Lie algebra by replacing the Cartan generators with suitably defined co-generators and the spin 1/2 representation with the fundamental representation? The optimistic guess would be that the resulting states are maximally entangled and excellent candidates for states for which negentropic entanglement is maximized by NMP.
  3. A co-product is needed, and there exists a rich spectrum of algebras with co-product (quantum groups, bialgebras, Hopf algebras, Yangian algebras). In particular, Yangians of Lie algebras are generated by ordinary Lie algebra generators and their co-generators subject to constraints. The outcome is an infinite-dimensional algebra analogous to one half of a Kac-Moody algebra, with the analog of conformal weight N counting the number of tensor factors. Witten gives a nice concrete explanation of the Yangian, for which the co-generators of the TA are given as QA = ∑i<j fABC TBi ⊗ TCj, where the summation is over discrete ordered points, which could now label partonic 2-surfaces or points of them or points of a string like object. For a practically totally incomprehensible description of Yangian one can look at the Wikipedia article.
  4. This would suggest that the eigenstates of the Cartan algebra co-generators of the Yangian could define an eigenbasis of the Yangian algebra dual to the basis defined by the totally unentangled eigenstates of the generators, and that the quantum measurement of poly-local observables defined by the co-generators creates entangled and perhaps even maximally entangled states. A duality between totally unentangled and completely entangled situations is suggestive and analogous to that encountered in the twistor Grassmann approach, where conformal symmetry and its dual are involved. A beautiful connection between a generalization of Lie algebras, quantum measurement theory and quantum information theory would emerge.
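For the spin-1/2 case the claim is easy to check numerically. The following sketch (my own illustration) builds Qz = Jx⊗Jy - Jy⊗Jx for two spin-1/2 systems and verifies that its nonzero eigenvalues come with maximally entangled eigenvectors; the doubly degenerate zero eigenvalue leaves freedom in the choice of basis there.

```python
import numpy as np

# Spin-1/2 generators J = sigma/2
Jx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Jy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2

# Bi-local co-generator Qz = Jx(1)⊗Jy(2) - Jy(1)⊗Jx(2)
Qz = np.kron(Jx, Jy) - np.kron(Jy, Jx)

def entanglement_entropy(vec):
    """Von Neumann entropy (in bits) of one qubit of a 2-qubit pure state,
    computed from the Schmidt coefficients of the reshaped amplitude matrix."""
    s = np.linalg.svd(vec.reshape(2, 2), compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

vals, vecs = np.linalg.eigh(Qz)  # Qz is Hermitian, eigenvalues ascending
for val, vec in zip(vals, vecs.T):
    if abs(val) > 1e-9:
        # The eigenvalues +-1/2 come with maximally entangled eigenvectors
        # (1 bit of entanglement entropy); numpy may return unentangled
        # basis vectors in the degenerate zero eigenspace.
        assert abs(entanglement_entropy(vec) - 1.0) < 1e-9
print("nonzero eigenvalues:", [round(float(v), 3) for v in vals if abs(v) > 1e-9])
```

The nonzero eigenvalues are ±1/2 and their eigenvectors are Bell-like states with one full bit of entanglement, in line with the duality suggested above between the single-particle generator Jz (unentangled eigenbasis) and its co-generator Qz (entangled eigenbasis).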

For details see the chapter Holography and Quantum Error Correcting Codes: TGD View or the article with the same title.



Do magnetic monopoles exist?

LNC scientists report that they have discovered magnetic monopoles (see this and this). The claim that free monopoles have been discovered is in my opinion too strong, at least in the TGD framework.

TGD allows monopole fluxes but no free monopoles. Wormhole throats however behave effectively like monopoles when looked at from either space-time sheet, A or B. The first TGD explanation that comes to mind is in terms of 2-sheeted structures with wormhole contacts at the ends and monopole flux tubes connecting the wormhole throats at A and B, so that a closed monopole flux is the outcome. All elementary particles are predicted to be this kind of structures in the scale of the Compton length. The first wormhole throat carries the elementary particle quantum numbers and the second throat carries a neutrino pair neutralizing the weak isospin so that the weak interaction is finite ranged. The Compton length scales like heff and can be nano-scopic or even larger for large values of heff. Also for an abnormally large p-adic length scale, implying a different mass scale for the particle, the size scale increases.

How to explain the observations? Throats with opposite apparent quantized magnetic charges at a given space-time sheet should move effectively like independent particles (although connected by a flux tube) in opposite directions to give rise to an effective monopole current accompanied by an opposite current at the other space-time sheet. This is like having balls at the ends of very soft strings at the two sheets. One must assume that only the current at a single sheet is detected. It is mentioned that the ohmic component corresponds to effectively free monopoles (already having long flux tubes connecting throats with small magnetic string tension). In strong magnetic fields shorter pairs of monopoles are reported to become "ionised" and give rise to a current increasing exponentially as a function of the square root of the external magnetic field strength. This could correspond to a phase transition increasing heff with no change in particle mass. This would increase the length of the monopole flux tube and the throats would be effectively free magnetic charges in a much longer Compton scale. In the case of elementary fermions the space-time sheet at which the throat carrying the quantum numbers of the fermion resides is preferred.

The analog of color de-confinement comes to mind, and one cannot exclude the color force, since a non-vanishing Kähler field is necessarily accompanied by non-vanishing classical color gauge fields. Effectively free motion below the length scale of the wormhole contact would correspond to asymptotic freedom. Amusingly, one would have a zoomed-up representation of the dynamics of colored objects! One can also consider an interpretation in terms of Kähler monopoles: the induced Kähler form corresponds to the classical electroweak U(1) field coupling to weak hypercharge, but asymptotic freedom need not fit with this interpretation. The induced gauge fields are however strongly constrained: the components of the color gauge fields are proportional to the Hamiltonians of color rotations and to the induced Kähler form. Hence it is difficult to draw any conclusions.

See the chapter Criticality and dark matter.



Pear-shaped Barium nucleus as evidence for large parity breaking effects in nuclear scales

Pieces of evidence for nuclear physics anomalies continue to accumulate. Recently a popular article told about the discovery of large parity breaking in the nuclear physics scale. What has been observed is a pear-shaped 144Ba nucleus, not invariant under spatial reflection. The arXiv article speaks only about an octupole moment of the Barium nucleus difficult to explain using existing models. Therefore one must take with some caution the popular article, which manages to associate the impossibility of time travel with the unexpectedly large octupole moment. As a matter of fact, pear-shapedness has been reported earlier for Radon-220 and Radium-224 nuclei by the ISOLDE collaboration working at CERN (see this and this).

The popular article could have been formulated without any reference to time travel: the finding would be spectacular even without mentioning it. There are three basic discrete symmetries: C, P, T and their combinations. CPT is believed to be unbroken, but C, P, CP and T are known to be broken in particle physics. In hadron and nuclear physics scales the breaking of parity symmetry P should be very small, since only the weak bosons break it and they mediate an interaction of very short range: this small breaking has been observed.

The possible big news is the following: the pear-shaped state of a heavy nucleus suggests that the breaking of P in nuclear physics is (much?) stronger than expected. Without parity breaking one would expect an ellipsoid with vanishing octupole moment but with non-vanishing quadrupole moment. The pear shape suggests parity breaking in an unexpectedly long length scale. This is not possible in the standard model, where parity breaking is large only in the weak scale, which is roughly 1/1000 of the nuclear scale, and the fourth power of this factor suppresses weak parity breaking effects in the nuclear scale.
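The suppression quoted above is easy to check with a back-of-the-envelope calculation (a sketch using standard constants; the 1 fm nuclear scale is my assumption, not from the text):

```python
# Order-of-magnitude check: weak parity-breaking effects in nuclei are
# suppressed roughly by the fourth power of (weak scale / nuclear scale).
hbar_c = 197.327        # MeV*fm, standard conversion constant
m_W = 80.4e3            # W boson mass in MeV

weak_scale = hbar_c / m_W    # weak Compton length, ~2.5e-3 fm
nuclear_scale = 1.0          # typical nuclear scale, 1 fm (assumed)

ratio = weak_scale / nuclear_scale
suppression = ratio ** 4     # ~1e-11: essentially no P breaking expected

print(f"scale ratio ~ {ratio:.1e}, suppression ~ {suppression:.1e}")
```

With the rougher 1/1000 scale ratio of the text the suppression is 10^-12; either way the standard-model expectation is a vanishingly small effect.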

Does this finding force us to forget the plans for next summer's time travel? If parity breaking is large, one expects from the conservation of CPT also a large compensating breaking of CT. This might relate to the matter-antimatter asymmetry of the observed Universe, but I cannot relate it to time travel, since the very idea of time travel in its standard form does not make much sense to me.

In TGD framework one can imagine two explanations involving large parity breaking in unexpectedly long scales. In fact, in living matter chiral selection represents a mysteriously large parity breaking effect, and the proposed mechanisms could be behind it.

  1. In terms of p-adically scaled down variants of weak bosons having much smaller masses and thus longer Compton lengths - of the order of nuclear size scale - than the ordinary weak bosons. After this phase transition weak interactions in the nuclear scale would not be weak anymore.
  2. In terms of a dark state of the nucleus involving magnetic flux tubes with large hbar carrying ordinary weak bosons but with scaled up Compton length (proportional to heff/h=n) of the order of nuclear size. Also this phase transition would make weak interactions in the nuclear scale much stronger.
There is a connection with the TGD based explanation of the X boson anomaly. The model for the recently reported X boson involves both options, but option 1) is perhaps more elegant and suggests that weak bosons have scaled down variants even in hadronic scales: the prediction is unexpectedly large parity breaking. This is amusing: large parity breaking in nuclear scales was three decades ago one of the big problems of TGD, and now it might have been verified!

See the chapter Nuclear string hypothesis.



Lightnings, dark matter, and lepto-pion hypothesis again

Lightnings have been found to involve phenomena difficult to understand in the framework of standard physics. Very high energy photons, even gamma rays, as well as electrons and positrons with energies in the gamma ray range, have been observed.

I learned recently about an even more mysterious looking discovery (see this). Physicist Joseph Dwyer from the University of New Hampshire and lightning scientists from the University of California at Santa Cruz and Florida Tech describe this discovery in a paper to be published in the Journal of Plasma Physics. In August 2009, Dwyer and colleagues were aboard a National Center for Atmospheric Research Gulfstream V when it inadvertently flew into an extremely violent thunderstorm - and, it turned out, through a large cloud of positrons, the antimatter counterparts of electrons, that should not have been there. One would have expected the positrons to have been produced in pair creation by highly energetic gamma rays with energies above 1 MeV, but no gamma rays were detected.

This looks rather mysterious from standard physics point of view. There are also earlier strange discoveries related to lightnings.

  1. Lightning strikes release powerful X-ray bursts (see "Lightning strikes release powerful X-ray bursts" ).
  2. Also high energy gamma rays and electrons accompany lightnings (see "Earth creates powerful gamma-ray flashes"). The problem is that electrons should lose their energy while traversing the atmosphere, so that energies even in the X ray range would be impossible.
  3. The third strange discovery was made with the Fermi telescope (see "Antimatter from lightning flashes the Fermi space telescope"): gamma rays with energy .511 MeV (electron rest energy) accompany lightnings, as if something with a mass of 2 electron masses decayed to gamma pairs.
Could TGD explain these findings?
  1. A possible explanation for the finding of the Fermi telescope is that electropions suggested by TGD are created in the strong magnetic fields in collisions of very high energy electrons assignable to the dark magnetic flux tubes of Earth (see this). Also evidence for mu-pions and tau-pions exists. They would have a mass rather precisely 2 times the mass of the electron and would be bound states of a color excited electron and positron. Evidence for this kind of states was found already in the seventies in heavy ion collisions around the Coulomb wall producing electron-positron pairs at a total energy of 2 times the electron mass, but since they do not fit at all into the standard physics picture (too large decay widths for weak bosons would be predicted) they have been swept under the rug, so to say. The paradox is solved if these particles are dark in the TGD sense.
  2. If the annihilations of electropions give rise to dark electron-positron pairs and dark gamma rays, which then transform to ordinary particles, one could understand the absence of gamma rays in the situation described by Dwyer et al in terms of too slow a transformation to ordinary particles. For instance, the strong electric fields created by a positively charged region of a cloud could accelerate electrons both from below and from above towards this region, and leptopions would be generated in the strong magnetic fields generating a strong electromagnetic instanton density E·B giving rise to a leptopion coherent state.
  3. But how is it possible to observe gamma rays and ultrahigh energy electrons at the surface of Earth? The problem is that the atmosphere is not empty and dissipation would restrict the energies to be much lower than gamma ray energies, which are in the MeV range. Note that the temperatures in lightning are about 3×10^4 K and correspond to an electron energy of 2.6 eV, which is by a factor of order 10^5 smaller than the electron mass and the gamma ray energy scale! And how are the electrons with energies in the MeV range and above created in a thundercloud? Years ago I proposed a model for the high energy gamma rays and electrons associated with lightnings in terms of dark matter identified as heff=n×h phases. This model could provide an answer to these questions.
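The mismatch between the thermal and gamma ray energy scales quoted above can be verified directly (standard constants only):

```python
# Thermal electron energy in a lightning channel vs. electron rest energy.
k_B = 8.617e-5          # Boltzmann constant in eV/K
T_lightning = 3e4       # K, temperature quoted in the text
m_e = 0.511e6           # electron rest energy in eV

E_thermal = k_B * T_lightning          # ~2.6 eV
gap = m_e / E_thermal                  # ~2e5: the factor-of-10^5 mismatch

print(f"thermal energy ~ {E_thermal:.2f} eV, m_e/E_thermal ~ {gap:.1e}")
```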
First some background is needed.
  1. I ended up with the heff=n×h hypothesis from the observations of Blackman and other pioneers of bio-electromagnetism about quantal effects of ELF em fields on the vertebrate brain, which he explained in terms of cyclotron frequencies of the Ca++ ion in the endogenous magnetic field Bend=0.2 Gauss (2/5 of the nominal value BE=.5 Gauss of the Earth's magnetic field). The cyclotron energy E= h×f is however extremely low, much below the thermal energy at physiological temperature, so that no quantal effects should be possible. This inspired the hypothesis heff=n×h scaling up the energy.
  2. Nottale originally introduced the notion of gravitational Planck constant hgr= GMm/v0 to explain the orbital radii of planets in the solar system as Bohr orbits. The velocity parameter v0 is different for inner and outer planets. Quite recently I proposed that v0 is in constant ratio to the rotation velocity of the large mass M. The interpretation in TGD framework is that magnetic flux tubes mediate the gravitational interaction between M and m, and the value of Planck constant at them is hgr. The proposal heff=hgr at flux tubes is a very natural sharpening of the original hypothesis. The predictions of the model do not depend on whether m is taken to be the mass of the planet or of any elementary particle associated with it: the gravitational Compton length λgr= GM/(v0c) does not depend on the mass of the particle and is proportional to the Schwarzschild radius 2GM of the Sun.
  3. This hypothesis can be generalized to apply also to Earth (see this). For the strength Bgal∼ 1 nT of the galactic magnetic field assumed to mediate Earth's gravitational interaction, the cyclotron frequency 10 Hz in the alpha band is mapped to a cyclotron time scale of 72 minutes. The scaled EEG range corresponds to cyclotron periods varying up to 12 hours for Bgal. For M= ME and Bgal the cyclotron energy corresponds to about 1 eV at the lower end of visible photon energies.
  4. What about the interpretation of the ordinary EEG in terms of cyclotron frequencies, assuming that the corresponding energies are in the visible and UV range corresponding to the variation of Bend? ME is certainly too large to give the spectrum of cyclotron energies in this range suggested by Blackman to explain the findings about quantal effects of ELF radiation on the brain, not possible in standard quantum theory because the energy is much below the thermal threshold. MD= .5×10^-4 ME would be needed. I have proposed that MD corresponds to a mass assignable to a spherical layer at the distance of Moon's orbital radius, and there are independent pieces of evidence for the existence of this layer. Bend would represent the lower bound of the value range of the magnetic field: a variation over at least 7 octaves would give the highest UV energies, around 124 eV. The transformation of dark photons to ordinary photons would yield biophotons with energies in the visible and UV range. Also Bgal would have some variation range.
  5. This has a connection to quantum biology and neuroscience. The proposal is that dark cyclotron photons with energies in the visible and UV range, associated with flux tubes of magnetic field of appropriate strength, serve as a communication tool making it possible for the biological body (BB) to communicate sensory data to the magnetic body (MB) and for MB to control BB.
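The numbers behind Blackman's puzzle (item 1 above) are easy to reproduce. A minimal sketch with standard constants; the 1.2 eV target energy is the value quoted in the text for dark photons at the lower end of the visible range:

```python
import math

# Cyclotron frequency of Ca++ in B_end = 0.2 gauss, the ordinary photon
# energy h*f, and the heff/h = n needed to lift it to the visible range.
e = 1.602e-19          # elementary charge, C
h = 6.626e-34          # Planck constant, J*s
u = 1.661e-27          # atomic mass unit, kg
eV = 1.602e-19         # J per eV

B_end = 0.2e-4         # 0.2 gauss in tesla
q_Ca = 2 * e           # Ca++ charge
m_Ca = 40 * u          # Ca mass, ~40 u

f_c = q_Ca * B_end / (2 * math.pi * m_Ca)   # ~15 Hz, in the alpha band
E_ordinary = h * f_c / eV                    # ordinary photon energy in eV
E_thermal = 8.617e-5 * 310                   # kT at body temperature, eV

n_needed = 1.2 / E_ordinary                  # heff/h to reach ~1.2 eV

print(f"f_c ~ {f_c:.1f} Hz")
print(f"E = h f ~ {E_ordinary:.1e} eV << kT ~ {E_thermal:.3f} eV")
print(f"heff/h needed for visible range: n ~ {n_needed:.1e}")
```

The ordinary cyclotron energy is more than twelve orders of magnitude below the thermal energy, which is the whole point of the heff=n×h hypothesis.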
Consider now the model for how electrons and gamma rays accompanying lightnings can travel to the surface of Earth without dissipating their energies and how the collisions of electrons with gamma ray energies generating electropions are possible.
  1. What happens if one replaces MD with ME, meaning that Earth's gravitons could reside also at the flux tubes of Bend rather than only at those of Bgal? The energies get scaled up by a factor ME/MD= 2×10^4, and this scales up the 1-100 eV range to .02-2 MeV, so that also gamma ray energies would be obtained.
  2. The earlier proposal was that the electrons and gamma rays associated with lightnings arrive at the surface of Earth along dark magnetic flux tubes so that, by macroscopic quantum coherence in the scale of λgr, they do not dissipate their energy.
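The mass-independence of the gravitational Compton length λgr = ℏgr/(mc) = GM/(v0c), used in item 2 above, can be checked numerically for the Sun with v0 ≈ 5.8×10^-4 c as quoted later in the text (a sketch, not a definitive calculation):

```python
# lambda_gr = hbar_gr/(m c) = G M/(v0 c): the particle mass m cancels.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg
v0 = 5.8e-4 * c     # velocity parameter for the Sun (value from the text)

lambda_gr = G * M_sun / (v0 * c)    # ~2.5e6 m, for any particle mass
r_s = 2 * G * M_sun / c ** 2        # Schwarzschild radius of the Sun, ~3 km

print(f"lambda_gr ~ {lambda_gr / 1e3:.0f} km")
print(f"lambda_gr / r_s = 1/(2 v0/c) ~ {lambda_gr / r_s:.0f}")
```

λgr comes out as a few thousand km, macroscopic indeed, and is proportional to the Schwarzschild radius with the coefficient 1/(2v0/c) ≈ 862.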
See the chapter Recent status of leptopion hypothesis of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy".



Magnetic body, biophotons, and prediction of scaled variant of EEG

The model for quantum biology relies on the notions of MB and of dark matter as a hierarchy of phases with heff =nh, and on biophotons identified as decay products of dark photons. The assumption hgr ∝ m makes the model highly predictive, since the cyclotron energies hgr×f would be independent of the mass of the ion.

  1. If dark photons with cyclotron frequencies decay to biophotons, one can conclude that the biophoton spectrum reflects the spectrum of endogenous magnetic field strengths. In the model of EEG it has indeed been assumed that this kind of spectrum exists: the inspiration came from music metaphors suggesting that musical scales are realized in terms of values of the magnetic field strength. The new quantum physics associated with gravitation would thus also become a key part of quantum biophysics in TGD Universe.
  2. For the proposed value of hgr, the 1 Hz cyclotron frequency associated with DNA sequences would correspond to an ordinary photon frequency f≈ 2.9×10^14 Hz and energy 1.2 eV, just at the lower limit of visible frequencies. For the 10 Hz alpha band the energy would be 12 eV in UV. This plus the fact that molecular energies are in the eV range suggests a very simple realization of biochemical control by MB. Each ion has its own cyclotron frequency but the same energy for the corresponding biophoton.
  3. A biophoton with a given energy would activate transitions in specific biomolecules or atoms: ionization energies for atoms other than hydrogen have a lower bound of about 5 eV (see this). The energies of molecular bonds are in the range 2-10 eV (see this). If one replaces v0 with 2v0 in the estimate, DNA corresponds to a .62 eV photon with energy of the order of the metabolic energy currency, and the alpha band corresponds to 6 eV energy in the molecular region and also in the region of ionization energies.

    Each ion at its specific magnetic flux tubes with a characteristic palette of magnetic field strengths would resonantly excite some set of biomolecules. This conforms with the earlier vision about dark photon frequencies as passwords.

    It could also be that biologically important ions take care of their ionization themselves. This would be achieved if the magnetic field strength associated with their flux tubes is such that the dark cyclotron energy equals the ionization energy. EEG bands labelled by magnetic field strengths could reflect the ionization energies of these ions.
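The frequency-to-energy correspondence in items 2-3 above fixes the required value of n = heff/h = E/(hf). A quick check with the text's numbers (1 Hz ↔ 1.2 eV for DNA, 10 Hz ↔ 12 eV for the alpha band):

```python
# n = heff/h needed so that a dark photon of cyclotron frequency f
# carries biophoton energy E. Standard value of h; sketch only.
h_eV = 4.136e-15   # Planck constant in eV*s

def heff_ratio(E_eV, f_Hz):
    """heff/h for which a photon of frequency f has energy E."""
    return E_eV / (h_eV * f_Hz)

n_dna   = heff_ratio(1.2, 1.0)    # DNA: 1 Hz -> 1.2 eV
n_alpha = heff_ratio(12.0, 10.0)  # alpha band: 10 Hz -> 12 eV

print(f"n(DNA)   ~ {n_dna:.1e}")
print(f"n(alpha) ~ {n_alpha:.1e}")
```

Both give the same n ≈ 3×10^14, as they must: since E scales linearly with f, a single value of heff maps the whole EEG band to the visible-UV range.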

It must be made clear that TGD has had an interpretational problem related to the identification of biophotons as decay products of dark photons. The resolution of this problem leads to the conclusion that both Earth's and the galactic MBs control living matter, with the EEGs related by scaling. This would be a rather dramatic realization of non-locality.

The problem is the following. If one wants the biophoton spectrum to be in the visible-UV range assuming that biophotons correspond to cyclotron photons, one must reduce the value of r=hgrBend/mv0 for the Earth-particle system by a factor of order k=2×10^-4. r does not depend on the mass of the charged particle. One can replace Bend with some other magnetic field with a considerably smaller value. One can also increase the value of v0.

  1. For hgr determined by Earth's mass and v0=vrot, where vrot≈ 1.55×10^-6 c is the rotation velocity of Earth around its axis, and for Bend→ Bgal= 1 nT, where Bgal is a typical strength of the galactic magnetic field, the dark cyclotron energy is 45 eV (UV extends to 124 eV). This is roughly by a factor 50 higher than the lower bound of the range of biophoton energies. One possibility is that Bgal defines the upper limit of the dark photon energies and has a variation range of at least 7 octaves with lower limit roughly 1/50 nT.

    One can also consider the possibility that Bgal defines a lower bound for the magnetic field strengths involved and that one has v0>vrot. For the Sun the rotation velocity at the equator is vrot= 2×10^-5 c and one has v0≈ 5.8×10^-4 c, giving v0/vrot≈ 29.0. If the same holds true for Earth, the value of the energy comes down from 45 eV to 1.6 eV, which corresponds to a visible wavelength.

    The assignment of Bgal to gravitational flux tubes is very natural. Now however the frequencies of the dark variants of biophotons would not be in the EEG range: 10 Hz frequency would correspond to 4×10^-4 Hz with a period of 42 min. The time scale of 42 min is however very natural concerning consciousness and could be involved with longer bio-rhythms. A scaled EEG spectrum with the alpha band around 46 min, naturally assignable to diurnal sub-rhythms, could be a testable prediction. The natural time would be sidereal (galactic) time with a slightly different length of day, and this allows a clear test. Recall the mysterious looking finding of Spottiswoode that precognition seems to be enhanced at a certain time of sidereal day. Cyclotron frequency 1 Hz would correspond to 7 hours. One can ask whether 12 hours is the natural counterpart for the cyclotron frequency 1 Hz assignable to DNA. This would correspond to the lower bound Bgal→ 7Bgal/12 ≈ .58 nT or to v0→ 1.7v0.

  2. The idea has been that it is the dark EEG photons which correspond to biophotons. Could one assign biophotons also to dark EEG so that the magnetic fields of Earth and the galaxy would correspond to two different control levels? If Bend=.2 Gauss is assumed to determine the scale of the magnetic field associated with the flux tubes carrying the gravitational flux, one must reduce hgr. The reduction could be due to M→ MD=kM and due to a change of v0. k could characterize the dark matter portion of Earth, but this assumption is not necessary.

    This would require k= Mdark,E/ME≈ 5×10^-5 if one does not change the value of v0. This value of k equals the ratio Bgal/Bend and would be 1/4 of k=2×10^-4. One might argue that it is indeed dark matter to which the gravitational flux tubes with large value of Planck constant connect biomatter.

  3. Suppose that one does not give up the idea that also Earth's mass gives rise to hgr and a scaled analog of EEG. Then MD must correspond to some mass distinguishable from, and thus outside, Earth. The simplest hypothesis is that a spherical layer around Earth is in question. The TGD based model for spherical objects indeed predicts layered structures. There are two separate anomalies in the solar system supporting the existence of a spherical layer consisting of dark mass, with radius equal to the distance of Moon from Earth, 60.3 Earth radii. The first anomaly is the so-called Flyby anomaly, and the second one involves a periodic variation of both the measured value of Newton's constant at the surface of Earth and of the length of the day. The period is about 6 years and TGD predicts it correctly.

    One can imagine that dark particles reside at the flux tubes connecting diametrically opposite points of the spherical layer. Particles would experience the sum of the gravitational forces, which vanishes at the center of Earth. Although the layer would be almost invisible (or completely invisible by an argument utilizing the analogy with a conducting shell) gravitationally in its interior, hgr= GMDm/v0 would make itself visible in the dynamics of dark particles! This layer could represent magnetic Mother Gaia, and EEG would take care of communications to this layer.

    The rotation velocity vrot,M≈ 2.1× vrot,E of Moon around its axis is the first guess for the parameter v0, identifiable perhaps as the rotation velocity of the spherical layer. A better guess is that the ratio r=v0/vrot,M is the same as for the Sun and as assumed above for Earth. This would give for the ratio of cyclotron frequency scales r= (Bend/Bgal)× 2.1. 66.7 min, which corresponds to Bgal= .63 nT, would correspond to .1 s. For this choice the 1 Hz DNA cyclotron frequency would correspond to 11.7 h, rather near to 12 h. This encourages the hypothesis that 72 min is the counterpart of the .1 s cyclotron time. The cyclotron time of DNA (very weakly dependent on the length of the DNA double strand) in Bgal (or its minimum value) would be 12 h.
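The scalings running through this list all follow from f = qB/(2πm): the cyclotron period scales as 1/B, independently of the ion. A minimal sketch with Bgal = 1 nT assumed exactly; the 42-72 min values quoted above correspond to Bgal between roughly 0.5 and 0.8 nT:

```python
# Cyclotron period scales as 1/B, so one factor maps the whole EEG
# from B_end = 0.2 gauss flux tubes to galactic-strength flux tubes.
B_end = 0.2e-4     # tesla (0.2 gauss, endogenous field)
B_gal = 1.0e-9     # tesla (galactic field, 1 nT assumed here)

scale = B_end / B_gal          # period stretch factor, 2e4
T_alpha = 0.1 * scale          # 0.1 s (10 Hz alpha band) -> seconds

print(f"scale ~ {scale:.0e}, scaled alpha period ~ {T_alpha / 60:.0f} min")
```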

The magnetic body of Earth controlling bio-dynamics would be a dramatic manifestation of non-locality, to say nothing about the control performed by the galactic magnetic body. MD would be associated with the magnetic Mother Gaia making life possible at Earth together with magnetic Mother Galactica. Both MBs would be in continual contact with biomolecules like ATP and the molecules to which ATP attaches or for which it provides the phosphate. Metabolic energy would be used for this process. These MBs would be Goddesses directing their attention to tiny biomolecules. If this picture is correct, the ideas about consciousness independent of the material substrate and assignable to a running computer program can be safely forgotten.

See the new chapter Quantum criticality and dark matter or the article Non-locality in quantum theory, in biology and neuroscience, and in remote mental interactions: TGD perspective.



Comparing TGD view about quantum biology with McFadden's views

McFadden has a very original view about quantum biology: I wrote about his work for the first time years ago, much before the emergence of ZEO, of the recent view about self as a generalized Zeno effect, and of the understanding of the role of the magnetic body containing dark matter (see this). The pleasant surprise was that I now understand McFadden's views much better from the TGD viewpoint.

  1. McFadden sees decoherence as crucial in biological evolution: here the TGD view is diametrically opposite, although decoherence is a basic phenomenon also in TGD.
  2. McFadden assumes quantum superpositions of different DNAs. To me this looks like an unrealistic assumption in the framework of PEO. In ZEO it is quite a possible option.
  3. McFadden emphasizes the importance of the Zeno effect (in PEO). In TGD the ZEO variant of the Zeno effect is central for the TGD inspired theory of consciousness and quantum biology. McFadden suggests that quantum effects and the Zeno effect are central in bio-catalysis: the repeated measurement keeping the reactants in the same position can lead to an increase of the reaction rate by factors of order billion. McFadden describes enzymes as quantum mousetraps catching the reactants and forcing them to stay in the same position. The above description of how the catalyst catches the reactants using U-shaped flux tubes conforms with the mousetrap picture.

    McFadden discusses the action of enzymes in a nice manner and his view conforms with the TGD view. In ZEO the system formed by catalyst plus reactants could be described as a negentropically entangled sub-self, and self indeed corresponds to a generalized Zeno effect. The reactions can proceed in shorter scales although the situation is fixed in longer scales (hierarchy of CDs): this would increase the length of the period of time during which reactions can proceed and lead to a catalytic effect. The Zeno effect in ZEO plus the hierarchies of selves and CDs would be essential for the local aspects of enzyme action.

  4. Protons associated with hydrogen bonds and electronic Cooper pairs play a universal role in McFadden's view, and the localization of a proton to a hydrogen bond in the quantum measurement of its position is the key step of enzyme catalysis. Also in TGD dark protons at magnetic flux tubes giving rise to dark nuclear strings play a key role. For instance, McFadden models enzyme catalysis as an injection of a proton to a very special hydrogen bond of the substrate. In TGD one has dark protons at magnetic flux tubes, and their injection to a properly chosen hydrogen bond and transformation to an ordinary proton is crucial for the catalysis. Typical places for reactions to occur are C=O type bonds, where the transition to C-OH can occur and would involve a transformation of a dark proton to an ordinary proton. The transformation of dark proton to ordinary one or vice versa in hydrogen bonds would serve as a biological quantum switch allowing the magnetic body to control biochemistry very effectively.

    What about the electronic Cooper pairs assumed also by McFadden? They would flow along the flux tube pairs. Can Cooper pairs of electrons and dark protons reside at the same flux tubes? In principle this is possible, although I have considered the possibility that particles with different masses (cyclotron frequencies) reside at different flux tubes. For hgr =heff this would make possible both frequency and energy resonance for cyclotron transitions.

McFadden has proposed quantum superposition of ordinary codons: this does not seem to make sense in PEO (since the chemistries of the codons are different) but could make sense in ZEO. In TGD one could indeed imagine quantum entanglement (necessarily negentropic in p-adic degrees of freedom) between dark codons. This NE could be either between additional degrees of freedom or between the spin degrees of freedom determining the dark codons. In the latter case a complete correlation between dark and ordinary DNA codons would imply also the superposition of their tensor products with ordinary codons.

The NE between dark codons could also have a useful function: it could physically determine the gene as a union of disjoint mutually entangled portions of DNA. Genes are known to be highly dynamical units, and pre-transcription splicing selects the portions of the transcript translated to protein. The codons in the complement of the real transcript are called introns and are spliced out from mRNA after the pre-transcription (see this).

What could be the physical criterion telling whether a given codon belongs to an exonic or intronic portion of DNA? A possible criterion distinguishing between exons and introns is that exons have NE between themselves whereas introns have no entanglement with exons (also introns could have NE between themselves). Introns would not be useless trash, since the division into exonic and intronic regions would be dynamical. The interpretation in the TGD inspired theory of consciousness is that exons correspond to a single self.

An updated nuclear string variant is summarized and also its connection with the model of harmony is discussed in chapter Nuclear string model and in the article About physical representations of genetic code in terms of dark nuclear strings.



Is bio-catalysis a shadow of dark bio-catalysis based on generalization of genetic code?

Protein catalysis and reaction pathways look extremely complex (see this) as compared to replication, transcription, translation, and DNA repair. Could simplicity emerge if biomolecules are identified as chemical shadows of objects formed from dark nuclear strings consisting of dark nucleon triplets, and their dynamics as a shadow of dark stringy dynamics very much analogous to text processing?

What if bio-catalysis is induced by dark catalysis based on reconnection as a recognition mechanism? What if contractions and expansions of U-shaped flux tubes by heff-changing phase transitions take care that the reactants find each other and change conformations, as in the case of the opening of the DNA double strand? What if there are codes allowing only dark nucleons with the same dark nuclear spin and flux tube spin to be connected by a pair of flux tubes?

This speculation might make sense! The recognition of reactants is one part of catalytic action. It has been found in in vitro RNA selection experiments that RNA sequences are produced having a high frequency for the codons which code for the amino acid that these RNA molecules recognize (see this). This is just what the proposal predicts!

The genetic codes DNA to RNA as a 64→ 64 map, RNA to tRNA as 64→ 40, and tRNA to amino acids as a 40→ 20 map are certainly not enough. One can however consider also additional codes allowed by projections of (4⊕ 2_1⊕ 2_2) ⊗ (5⊕ 3 (⊕ 1)) to lower-dimensional sub-spaces defined by projections preserving spins. One could also visualize biomolecules as collections of pieces of text attaching to each other along conjugate texts. The properties of catalysts and reactants would also depend on what texts are "visible" to the catalysts. Could the most important biomolecules participating in biochemical reactions (proteins, nucleic acids, carbohydrates, lipids, primary and secondary metabolites, and natural products, see this) have dark counterparts in these sub-spaces?
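The dimension bookkeeping behind these code spaces can be spelled out explicitly (pure counting, as I read the text; the labels are illustrative and the optional singlet is dropped):

```python
# (4 + 2_1 + 2_2) x (5 + 3 (+1)): 8 x 8 = 64 states, matching the 64 DNA
# codons; the 64 -> 40 -> 20 chain mirrors DNA -> tRNA -> amino acids.
left = [4, 2, 2]          # 4 (+) 2_1 (+) 2_2
right = [5, 3]            # 5 (+) 3, optional singlet omitted

dim = sum(left) * sum(right)
print(f"codon-like states: {dim}")

for step, (src, dst) in [("DNA->RNA", (64, 64)),
                         ("RNA->tRNA", (64, 40)),
                         ("tRNA->amino acids", (40, 20))]:
    print(f"{step}: {src} -> {dst}")
```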

The selection of bio-active molecules is one of the big mysteries of biology. The model for the chemical pathway leading to the selection of purines as nucleotides (see this) assumes that the predecessor of the purine molecule can bind to a dark proton without transforming it to an ordinary proton. A possible explanation is that the binding energy of the resulting bound state is higher for the dark proton than for the ordinary one. Minimization of the bound state energy could be a completely general criterion dictating which bio-active molecules can pair with dark protons. The selection of bio-active molecules would not be random after all, although it looks so. The proposal for DNA-nuclear/cell membrane as a topological quantum computer, with quantum computations coded by the braiding of magnetic flux tubes connecting nucleotides to the lipids, led to the idea that the flux tubes begin at O= bonds (see this).

An updated nuclear string variant is summarized and also its connection with the model of harmony is discussed in chapter Nuclear string model and in the article About physical representations of genetic code in terms of dark nuclear strings.



Are sound-like bubbles whizzing around in DNA essential to life?

I got a link to a very interesting article about sound waves in DNA (see this). The article tells about THz de-localized modes claimed to propagate back and forth along the DNA double strand somewhat like bullets. These modes involve a collective motion of many atoms. They are interpreted as a change in the stiffness of the DNA double strand leading to the splitting of hydrogen bonds, in turn leading to a splitting into single strands. The resulting gap, known as a transcriptional bubble, propagates along the double strand. I do not know how sound the interpretation as a sound wave is.

It has been proposed that sound waves along DNA give rise to the bubble. The local physical properties of the DNA double strand, such as helical structure and elasticity, affect the propagation of the waves. Specific local sequences are proposed to favor a resonance with low frequency vibrational modes, promoting the temporary splitting of the DNA double strand. Inside the bubble the bases are exposed to the surrounding solvent, which has two effects.

Bubbles expose the nucleic acid to reactions of the bases with mutagens in the environment, whereas so-called molecular intercalators may insert themselves between the strands of DNA. On the other hand, bubbles allow proteins known as helicases to attach to DNA and stabilize the bubble, followed by the splitting of the strands to start the transcription and replication process. The splitting would occur at certain portions of the DNA double strand. For this reason it is believed that DNA directs its own transcription.

The problem is that the strong interactions with the surrounding water are expected to damp the sound wave very rapidly. The authors study the situation experimentally and report that propagating bubbles indeed exist for frequencies in the few-THz region. Therefore the damping does not seem to be effective. How is this possible? As an innocent layman I also wonder how this kind of mechanism can be selective: it would seem that the bullet-like sound wave initiates transcription at many positions along DNA. The transcription should be localized to a region assignable to a single gene. What could guarantee this?

Can TGD say anything interesting about the mechanism behind transcription and replication?

  1. In TGD the magnetic body controls and coordinates the dynamics. The strongest hypothesis is that the basic biochemical processes are induced by those for the dark variants of the basic bio-molecules (dark variants of DNA, enzymes,...). The belief that DNA directs its own transcription translates to the statement that dark DNA, consisting most plausibly of sequences of dark proton triplets ppp at dark magnetic flux tubes, controls the transcription: the transcription/replication at the level of dark DNA induces that at the level of ordinary DNA.
  2. If the dark DNA codons represented as dark proton triplets (ppp) are connected by 3 flux tube pairs, the reverse of the reconnection should occur and transform flux tube pairs to two U-shaped flux tubes assignable to the two dark DNA strands. Dark proton sequences have positive charge +3e per dark codon giving rise to a repulsive Coulomb force between them. There would also be an attractive force due to the magnetic tension of the flux tubes. These two forces would compensate each other in equilibrium (there are also classical forces due to the negatively charged phosphates associated with the nucleotides but these would not be so important).

    If the flux tube pairs are split, the stabilizing magnetic force however vanishes and the dark flux tubes repel each other and force the negatively charged DNA strands to follow, so that also the ordinary DNA strand splits and a bubble is formed. The primary wave could therefore be the splitting of the flux tube pairs: whether one can call it a sound wave is not clear to me. Perhaps the induced propagating splitting of the ordinary DNA double strand could be regarded as an analog of a sound wave.

    The splitting of the flux tube pairs for a segment of DNA would induce a further splitting of flux tubes since the repulsive Coulomb force tends to drive the flux tubes further away. The process could be restricted to a given portion of DNA if the "upper" end of the split DNA region has some dark DNA codons which are not connected by flux tube pairs. This model also gives a reason why dark proton sequences are needed.

  3. This model does not yet explain how the propagating splitting wave is initiated. Could a quantum phase transition increasing the value of heff associated with the flux tube pairs occur for some minimal portion of dark DNA "below" the region associated with the gene and lead to the propagating wave induced by the above classical mechanism? That the wave propagates in one direction only could be due to the chirality of the DNA double helix.
An interesting question is how the RNA world vision relates to this general picture.
  1. There are strong conditions on the predecessor of DNA and RNA satisfies many of them: reverse transcription to DNA, making possible a transition to the DNA dominated era, is possible. Double stranded RNA exists in cells and makes an RNA genome possible: this would however suggest that the cell membrane came first. RNA is a catalyst. RNA has the ability to conjugate an amino-acid to the 3' end of RNA and RNA catalyzes the peptide bond formation essential for translation. RNA can self-replicate but only relatively short sequences are produced.
  2. The TGD picture makes it possible to understand why only short sequences of RNA are obtained in replication. If the replication occurs at the level of dark ppn sequences, as it would occur for DNA in the TGD framework, long RNA sequences might be difficult to produce because of the stopping of the propagation of the primary wave splitting the flux tube pairs. This could be due to the neutron pairs, with which no Coulomb repulsion essential for the splitting is associated.
  3. In the TGD framework RNA need not be the predecessor of DNA since the evolution would occur at the level of dark nucleon strings, and DNA as the dark proton string is the simplest dark nucleon string and might have emerged first. Dark nuclear strings would have served as templates and biomolecules would have emerged naturally via the transcription of their dark counterparts to the corresponding bio-polymers.
An updated nuclear string variant is summarized and also its connection with the model of harmony is discussed in chapter Nuclear string model and in the article About physical representations of genetic code in terms of dark nuclear strings.



Could dark DNA, RNA, tRNA and amino-acids correspond to different charge states of codons?

If dark codons correspond to dark nucleon triplets, as assumed in the following considerations, there are 4 basic types of dark nucleon triplets: ppp, ppn, pnn, nnn. Also dark nucleons could represent codons as uuu, uud, udd, ddd: the following discussion generalizes as such also to this case. If strong isospin/em charge decouples from spin, the spin content is the same independently of the nucleon content. One can consider the possibility of charge neutralization by the charges assignable to color flux tubes but this is not necessary. In any case, one would have 4 types of nucleon triplets depending on the values of the total charges.

Could the different dark nucleon total charges correspond to DNA, RNA, tRNA and amino-acids? Already the group representation content - perhaps correlating with quark charges - could allow to distinguish between DNA, RNA, tRNA, and amino-acids. For amino-acids one would have only 4×5 with ordinary statistics and color singlets. For DNA and RNA one would have the full multiplet including also color non-singlets, and for tRNA one could consider (4⊕ 2⊕ 2)×5 containing 40 states. 31 is the minimum number of tRNAs for the realization of the genetic code. The number of tRNA molecules is known to be between 30 and 40 in bacterial cells. The number is larger in animal cells but this could be due to different chemical representations of dark tRNA codons.
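The state counts quoted above are simple products; a minimal bookkeeping sketch, taking the decomposition 8 = 4⊕2⊕2 for the nucleon spin content and the 5-dimensional (or 5⊕3 = 8-dimensional) flux tube content from the text:

```python
# Bookkeeping for the dark codon multiplets quoted in the text:
# nucleon spin content 8 decomposes as 4 + 2 + 2; flux tube content is 5 (or 5 + 3 = 8).
dna_rna = 8 * (5 + 3)       # full multiplet: 64 states, matching the 64 codons
trna = (4 + 2 + 2) * 5      # 40 states
amino = 4 * 5               # 20 states, matching the 20 amino-acids

assert dna_rna == 64 and trna == 40 and amino == 20
assert 31 <= trna           # compatible with the minimum of 31 tRNAs
```

The tRNA count 40 indeed falls in the observed bacterial range 30-40 and above the minimum 31 required by the code.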

If the net charge of the dark codon distinguishes between DNA, RNA, tRNA, and amino-acid sequences, the natural hypothesis to be tested is that dark ppp, ppn, pnn, and nnn sequences are accompanied by DNA, RNA, tRNA, and amino-acid sequences respectively. The dark beta decays of dark protons, proposed to play an essential role in the model of cold fusion, could transform dark protons to dark neutrons. Peptide backbones are neutral, so the dark nnn sequence could also be absent, but the dark nnn option is more natural if the general vision is accepted.

Is this picture consistent with what is known about the charges of DNA, RNA, tRNA, and amino-acids?

  1. A DNA strand has one negative charge per nucleotide. Also the RNA molecule has a high negative charge. This conforms with the idea that dark nucleons accompany both DNA and RNA. DNA codons could be accompanied by dark ppp implying charge neutralization in some scale and RNA codons by dark ppn. The density of negative charge for RNA would be 2/3 of that for DNA.
  2. Arg, His, and Lys have positively charged side chains and Asp, Glu negatively charged side chains (see this). The charge state of an amino-acid is sensitive to the pH value of the solution and its conformation is sensitive to the counter ions present. The total charge of an amino-acid in a peptide however vanishes unless it is associated with the side chain: as in the case of DNA and RNA it is the backbone whose charge is expected to matter.
  3. An amino-acid has a central C atom to which the side chain, NH2, H and COOH are attached. For free amino-acids in water solution COOH tends to become negatively charged above pH=2.2 by donating a proton, which could become dark, whereas NH2→ NH3+ tends to occur below pH=9.4 by receiving a possibly dark proton. In a peptide the OH attached to C and one H attached to N are replaced with the peptide bond. In the pH range 2.2-9.4 the amino-acid is a zwitterion for which COOH is negatively charged and NH2 is replaced with NH3+ so that the net charge vanishes. The simplest interpretation is that the ordinary proton from the negatively ionized COOH attaches to NH2 - maybe via an intermediate dark proton state.
  4. The backbones of peptide chains are neutral. This conforms with the idea that a dark amino-acid sequence consists of dark neutron triplets. Also free amino-acids would be accompanied by dark neutron triplets. If the statistics is ordinary, only 4 dark nnn states are possible, as also 5 dark color flux tube spin states.
  5. tRNA could involve a dark pnn triplet associated with the codon. An attractive idea is a secondary genetic code assigning RNA codons to the tRNA-amino-acid complex and projecting 8⊗(5⊕3) containing 64 dark RNA spin states to 8⊗5 containing 40 dark tRNA spin states with the same total nucleon and flux tube spins. Dark tRNA codons would in turn be attached to dark amino-acids by a tertiary genetic code projecting the spin states 8⊗5 to 4⊗5 by spin projection. In translation dark tRNA would attach to dark mRNA, inducing the attachment of the dark amino-acid to the growing amino-acid sequence, and tRNA carrying only the dark tRNA codon would be left over. The free amino-acids in the water solution would be mostly zwitterions in the pH range 2.2-9.4 and the negative charge of COO- would help in the attachment of the free amino-acid to the dark proton of the tRNA codon. Therefore also the chemistry of free amino-acids would be important.

    An interesting question is why pnn triplets for tRNA would allow only 5 states in flux tube degrees of freedom but the entire 8 in nucleon degrees of freedom. For RNA consisting of ppn triplets also 3 would be possible. What distinguishes between ppn and pnn?

    An updated nuclear string variant is summarized and also its connection with the model of harmony is discussed in chapter Nuclear String Model and in the article About physical representations of genetic code in terms of dark nuclear strings.



About physical representations of genetic code in terms of dark nuclear strings

The standard view about evolution as a random process suggests that the genetic code is pure accident. My own view is that something so fundamental as life cannot be based on pure randomness. TGD has led to several proposals for the genetic code, its emergence, and its various realizations, based on purely mathematical considerations or inspired by physical ideas. One can argue that the genetic code is realized in several manners just like bits can be represented in very many manners. Two especially interesting proposals have emerged. The first one is based on a geometric model of music harmony involving icosahedral and tetrahedral geometries. The second one, having two variants, is based on dark nuclear strings. Both models predict correctly the numbers of DNA codons coding for a given amino-acid.

An updated nuclear string variant is summarized and also its connection with the model of harmony is discussed in chapter Nuclear String Model and in the article About physical representations of genetic code in terms of dark nuclear strings.



Is the sum of p-adic negentropies equal to real entropy?

I arrived almost by accident at a fascinating and almost trivial theorem. An adelic theorem for information would state that conscious information, represented as the sum of p-adic negentropies (entropies, which are negative), is equal to real entropy. The more conscious information, the larger the chaos in the environment, as everyone can verify by just looking around;-)

This looks bad! Luckily, it turned out that this statement is true for rational probabilities only. For algebraic extensions it cannot be true, as is easy to see. That negentropic entanglement is possible only for algebraic extensions of rationals conforms with the vision that algebraic extensions of rationals characterize the evolutionary hierarchy. The rationals represent the lowest level, at which conscious information either vanishes or, being equal to the p-adic contribution to negentropy, is accompanied by an equally large real entropy.

It is not completely obvious that the notion of p-adic negentropy indeed makes sense for algebraic extensions of rationals. A possible problem is caused by the fact that the decomposition of an algebraic integer to primes is not unique. A simple argument however strongly suggests that the various p-adic norms of the factors do not depend on the factorization. Also a formula for the difference of the total p-adic negentropy and real entropy is deduced.

p-Adic contribution to negentropy equals real entropy for rational probabilities but not for algebraic probabilities

The following argument shows that the p-adic negentropy equals the real entropy for rational probabilities.

  1. The fusion of real physics and the various p-adic physics (identified as correlates for cognition, imagination, and intentionality) to a single coherent whole leads to what I call adelic physics. The adeles associated with a given extension of rationals are the Cartesian product of the real number field with all p-adic number fields extended by the extension of rationals. Besides algebraic extensions, also the extension by any root of e is possible since it induces a finite-dimensional p-adic extension. One obtains a hierarchy of adeles and of corresponding adelic physics interpreted as an evolutionary hierarchy.

    An important point is that p-adic Hilbert spaces exist only if one restricts the p-adic numbers to an algebraic extension of rationals having interpretation as numbers in any number field. This is due to the fact that the sum of p-adic valued probabilities can vanish for general p-adic numbers so that the norm of a state can vanish. One can say that the Hilbert space of states is universal and lies in the algebraic intersection of reality and the various p-adicities.

  2. Negentropy Maximization Principle (NMP) is the variational principle of consciousness in the TGD framework, reducing to quantum measurement theory in Zero Energy Ontology assuming adelic physics. One can define the p-adic counterparts of Shannon entropy for all finite-dimensional extensions of p-adic numbers, and the amazing fact is that these entropies can be negative and thus serve as measures for information rather than for the lack of it. Furthermore, all non-vanishing p-adic negentropies are positive and the number of primes contributing to negentropy is finite since any algebraic number can be expressed using a generalization of the prime number decomposition of a rational number. These p-adic primes characterize a given system, say an elementary particle.

    NMP states that the negentropy gain is maximal in the quantum jump defining the state function reduction. How does one define the negentropy? As the sum of p-adic negentropies, or as the sum of the negative real negentropy plus the sum of p-adic negentropies? The latter option I proposed some time ago without checking what one obtains.

  3. The adelic theorem says that the norm of a rational number is equal to the product of the inverses of its p-adic norms. The statement that the sum of real and p-adic negentropies is zero follows more or less from the fact that the logarithm of the real norm and the logarithms of the p-adic norms of the prime factors of a rational sum up to zero.

    The core formula is the adelic formula stating that the real norm of a rational number is the product of the inverses of its p-adic norms. This implies that the logarithm of the rational number equals minus the sum of the logarithms of its p-adic norms. Since in the p-adic entropy assigned to the prime p the logarithms of probabilities are replaced by the logarithms of their p-adic norms, this implies that for rational probabilities the real entropy equals the total p-adic negentropy. If the real entropy is counted as negative conscious information, the net negentropy vanishes identically for rational probabilities.

    It would seem that the negentropy appearing in the definition of NMP must be the sum of p-adic negentropies, and real entropy should have an interpretation as a measure for the ignorance about the state of either entangled system. The sum of p-adic negentropies would serve as a measure for the information carried by a rule, with the superposed state pairs representing the instances of the rule. The information would be conscious information carried by the negentropically entangled system.

  4. What about probabilities in algebraic extensions? The probabilities are now algebraic numbers. Below an argument is developed that the p-adic norms of the probabilities are uniquely defined and are always powers of primes, so that the adelic formula cannot be true since on the real side one has logarithms of algebraic numbers and on the p-adic side only logarithms of primes.

    What could be the interpretation?

    1. If conscious information corresponds to N-S, it accompanies the emergence of algebraic extensions of rationals at the level of Hilbert space.
    2. If N corresponds to conscious information, then at the lowest level conscious information is necessarily accompanied by entropy, but for algebraic extensions N-S could be positive since N is maximized. This option looks more plausible.
    One however expects that the value of the real entropy correlates strongly with the value of the total p-adic negentropy. This would conform with the observation that large entropy seems to be a prerequisite for life by providing a large number of states with degenerate energies, providing large representative capacity. For instance, Jeremy England has made this proposal: I have commented on this proposal from the TGD point of view.
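The rational case of the claimed identity (total p-adic negentropy equals real entropy) can be checked numerically. The following sketch is mine, not from the text; it assumes the definitions above, with the logarithm of the probability replaced by the logarithm of its p-adic norm in the entropy assigned to the prime p:

```python
from fractions import Fraction
from math import log, isclose

def p_adic_norm(q: Fraction, p: int) -> Fraction:
    """Ordinary p-adic norm |q|_p = p^(-v_p(q)) of a nonzero rational q."""
    v = 0
    n, d = q.numerator, q.denominator
    while n % p == 0:
        n //= p
        v += 1
    while d % p == 0:
        d //= p
        v -= 1
    return Fraction(p) ** (-v)

# Rational probabilities summing to 1.
P = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)]
assert sum(P) == 1

# Real Shannon entropy S = -sum_k P_k log P_k.
real_entropy = -sum(float(pk) * log(float(pk)) for pk in P)

# p-adic negentropy for prime p: -S_p = sum_k P_k log |P_k|_p (positive here);
# only the primes dividing some denominator contribute.
total_negentropy = sum(
    float(pk) * log(float(p_adic_norm(pk, p))) for p in (2, 3) for pk in P
)

assert isclose(real_entropy, total_negentropy)  # the rational-case identity
```

The equality follows from the adelic product formula, since for a rational probability the product of its p-adic norms equals its inverse.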

Formula for the difference of total p-adic negentropy and real entanglement entropy

In the following some non-trivial details related to the definition of p-adic norms for the rationals in the algebraic extension of rationals are discussed.

The induced p-adic norm Np(x) for an n-dimensional extension of Q is defined via the determinant det(x) of the linear map defined by multiplication with x. det(x) is a rational number. The corresponding p-adic norm is defined as the n:th root Np(det(x))1/n of the ordinary p-adic norm. The root guarantees that the norm coincides with the ordinary p-adic norm for ordinary p-adic integers. One must now perform a factorization to algebraic primes. Below an argument is given that although the factorization to primes is not always unique, the product of p-adic norms for a given algebraic rational, defined as a ratio of algebraic integers, is unique.

Can one write an explicit formula for the difference of the total p-adic entanglement negentropy (positive) and the real entanglement entropy using prime factorization in a finite-dimensional algebraic extension (note that for algebraic numbers defining an infinite-dimensional extension of rationals the factorization does not even exist since one can write a=a1/2a1/2=...)? This requires that the total p-adic entropy is uniquely defined. There is a possible problem due to the non-uniqueness of the prime factorization.

  1. For Dedekind rings, in particular rings of integers, there exists by definition a unique factorization of proper ideals to prime ideals (see this). In contrast, the prime factorization in the extensions of Q is not always unique. Already for Q((-5)1/2) one has 6=2× 3= (1+(-5)1/2)(1-(-5)1/2) and the primes involved are not related by multiplication with units.

    Various factorizations are characterized by the so called class group, and class field theory is the branch of number theory studying factorizations in algebraic extensions of integer rings. Factorization is by definition unique for Euclidean domains. Euclidean domains allow by definition a so called Euclidean function f(x) having values in R+ with the property that for any a and b one has either a=qb or a= qb+r with f(r)<f(b). It seems that one cannot restrict to Euclidean domains in the recent situation.

  2. Even when the factorization in the extension is not unique, one can hope that the product of the various p-adic norms of the factors is the same for all factorizations. Since the p-adic norm for the extensions of primes is induced by the ordinary p-adic norm, this requires that the p-adic primes for which the induced p-adic norm differs from unity are the same for all factorizations and that the products of the p-adic norms differing from unity are the same. This independence of the representative of the factorization would be analogous to gauge invariance in the physicist's conceptualization.

    The probabilities Pk belong to a unique product of ideals labelled by primes of the extension. The ideals are characterized by norms, and if this norm is the product of p-adic norms for any prime factorization, as looks natural, then the independence of the factorization follows. A number theorist can certainly immediately tell whether this is true. What is encouraging is that for Q((-5)1/2) z=x+(-5)1/2y has determinant det(z)=x2+5y2 and for z=1+/- (-5)1/2 one has det(z)=6, so that the products of p-adic norms for the factorizations 6=2× 3 and (1+(-5)1/2)(1-(-5)1/2) are equal.

  3. If this guess is true, one can write the difference of the total p-adic negentropy N and the real entanglement entropy S as

    N-S= ∑ Pk log(Pk ∏p Np(Pk)) .

    Here ∏p Np(Pk) would not depend on the particular factorization. The condition ∑ Pk=1 poses an additional condition. It would be nice to understand whether N-S≥ 0 holds true generally and if not, what the conditions guaranteeing this are. The p-adic norms of the numerators of the rationals involved give positive contributions to N-S, as the example Pk=1/N in the rational case shows.
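The Q((-5)1/2) example of item 2 can be verified directly. A small sketch (the helper names are mine) comparing the products of induced p-adic norms for the two factorizations of 6; since the induced norm is the square root of the p-adic norm of the determinant, it suffices to compare the squared norms:

```python
from fractions import Fraction

def field_norm(x: int, y: int) -> int:
    """det of multiplication by z = x + y*(-5)^(1/2) acting on Q((-5)^(1/2))."""
    return x * x + 5 * y * y

def induced_norm_sq(n: int, p: int) -> Fraction:
    """Square of the induced p-adic norm, i.e. |det(z)|_p for the rational det(z)."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return Fraction(1, p ** v)

# 6 = 2*3 = (1 + (-5)^(1/2))(1 - (-5)^(1/2)): two inequivalent factorizations.
for p in (2, 3, 5):
    lhs = induced_norm_sq(field_norm(2, 0), p) * induced_norm_sq(field_norm(3, 0), p)
    rhs = induced_norm_sq(field_norm(1, 1), p) * induced_norm_sq(field_norm(1, -1), p)
    assert lhs == rhs  # products of induced p-adic norms agree for both factorizations
```

For p=2 both products give 1/4 and for p=3 both give 1/9, exactly as the argument in the text requires.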

For background see the chapter Negentropy Maximization Principle.



X boson as evidence for nuclear string model

Anomalies seem to be popping up everywhere, also in nuclear physics, and I have been busily explaining them in the framework provided by TGD. The latest nuclear physics anomaly that I have encountered was discovered in a Hungarian physics laboratory in the decays of the excited state 8Be* of the unstable isotope 8Be (4 protons and 4 neutrons) to the ground state 8Be (see this). For the theoretical interpretation of the finding in terms of a fifth force mediated by a spin 1 boson see this.

The anomaly manifests itself as a bump in the distribution of e+e- pairs in the transitions 8Be*→ 8Be at a certain angle between the electrons. The theoretical interpretation is in terms of the production of a spin 1 boson - christened as X - identified as a carrier of a fifth force with range about 12 fm, the nuclear length scale. The attribute 6.8σ tells that the probability that the finding is a statistical fluctuation is of order 10-12: already 5 sigma is regarded as a criterion for discovery.
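As a rough cross-check of the quoted significance, sigmas can be translated into Gaussian tail probabilities; a one-sided tail is assumed here, and conventions vary:

```python
from math import erfc, sqrt

def tail_probability(n_sigma: float) -> float:
    """One-sided Gaussian tail probability for an n-sigma fluctuation."""
    return 0.5 * erfc(n_sigma / sqrt(2.0))

p_68 = tail_probability(6.8)  # of order 1e-12 .. 1e-11
p_50 = tail_probability(5.0)  # about 3e-7, the conventional discovery threshold
assert p_68 < 1e-10 < p_50
```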

The assumption about the vector boson character looks at first well-motivated: the experimental constraints on the rate of decay to gamma pairs eliminate the interpretation as a pseudo-scalar boson, whereas spin 1 bosons do not have these decays. In the standard reductionistic spirit it is assumed that X couples to p and n and that the coupling is the sum of the direct couplings to the u and d quarks making up the proton and neutron. The comparison with the experimental constraints forces the coupling to the proton to be very small: this is called protophobia. Perhaps it signifies something that many of the exotic particles introduced to explain some bump during the last years are assumed to suffer from various kinds of phobias. The assumption that X couples directly to quarks and therefore to nucleons is of course well-motivated in the standard nuclear physics framework relying on reductionism.

Two observations, and the problems created by them

The TGD inspired interpretation based on the nuclear string model rests on two observations and the potential problems created by them.

  1. The first observation is that the 12 fm range corresponds rather precisely to the p-adic length scale for prime p≈ 2k, k=113, assigned to the space-time sheets of atomic nuclei in the TGD framework. The estimate comes from L(k)= 2(k-151)/2L(151), L(151) ≈ 10 nm. To be precise, this scale is actually the p-adic Compton length of the electron if it were characterized by k instead of k0=127 labelling the largest not super-astrophysical Mersenne prime. k=113 is very special: it labels the Gaussian Mersenne prime (1+i)k-1 and also the muonic space-time sheet.
  2. A related observation made a few days later is that the p-adic scaling of the ordinary neutral pion mass 135 MeV from k=107 to k=113 by 2-(113-107)/2=1/8 gives 16.88 MeV! That the p-adic length scale hypothesis would predict the mass of X with 0.7 per cent accuracy is hardly an accident. This strongly suggests that the X boson is the k=113 pion.
  3. There is however a problem. The decays to photon pairs, which would produce the pion in the l=1 partial wave, have not been observed. This creates a puzzle. If X were a ρ meson like state with spin 1, it would have to have the same mass as the pionic X, which is not plausible.
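The two scalings quoted above are easy to sanity-check; a minimal arithmetic sketch, with L(151) ≈ 10 nm and the pion mass taken from the text:

```python
# p-adic mass scaling from k = 107 to k = 113, as quoted in the text.
m_pi = 135.0                                # MeV, ordinary neutral pion (k = 107)
m_X = m_pi * 2.0 ** (-(113 - 107) / 2)      # 135/8 = 16.875 MeV
accuracy = abs(17.0 - m_X) / 17.0           # ~0.7 per cent deviation from 17 MeV

# p-adic length scale L(k) = 2^((k-151)/2) * L(151) with L(151) ~ 10 nm.
L151 = 10e-9                                # m
L113 = L151 * 2.0 ** ((113 - 151) / 2)      # ~2e-14 m, the order of the 12 fm range
```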
The pleasant surprise was that the scaled Γ(π→ γγ) turned out to be consistent with the experimental bounds reported in the article! I must admit that it took almost two weeks to realize that the conclusion of the authors, based on the limits on the gamma pair decay, was wrong in the TGD framework.

There is however a problem: the estimate for Γ(π, e+e-), obtained by p-adically scaling the model based on decay via a virtual gamma pair decaying to an e+e- pair, is by a factor 1/88 too low. One can consider the possibility that the dependence of fπ on the p-adic length scale is not the naively expected one, but this is not an attractive option. The increase of Planck constant seems to worsen the situation.

The dark variants of weak bosons play an important role in both cold fusion and the TGD inspired model for chiral selection. They are effectively massless below the scaled up Compton scale of weak bosons so that weak interactions become strong. Since the pion couples to the axial current, the decay to e+e- could proceed via annihilation to a Z0 boson decaying to an e+e- pair. The estimate for Γ(π(113), e+e-) is in the middle of the allowed range. The success suggests that the couplings of mesons to p-adically scaled down weak bosons could describe the semileptonic decays of hadrons and explain the somewhat mysterious origin of CVC and PCAC.

Effective action approach

One must construct the effective action for the process using relativistic approach and Poincare invariance.

  1. The effective action in the fifth force proposal involves the term giving rise to the decay Z→ 8Be+X. 8Be*==Z is treated as an effective U(1) gauge field Zαβ = ∂αZβ - ∂βZα expressible in terms of the vector potential Zα. The corresponding term in the effective action density is proportional to εαβγδ Zαβ ∂γX ∂δY. Here X is the pseudoscalar meson and 8Be==Y a scalar. The coupling constant parameter has dimensions of length squared.
  2. In the recent case a reduction to the level of a single color bond takes place, so that Zαβ is replaced with ραβ representing the spin 1 colored bond, and the pseudo-scalar X with the colored analog of π(113).
  3. The second term in the effective action describes the decays of the pseudoscalar X to gamma pairs and electron pairs. The scaling of the standard model prediction produces a gamma+gamma decay rate below the experimental upper limit. For the e+e- decay the scaling of the standard model prediction produces a rate which is by a factor of about 1/100 too small. The description in terms of a coupling to a p-adically scaled down variant of the Z boson via the axial current leads to a prediction consistent with the experimental limits. Also the dark variant of the Z boson can be considered as a model, but then the rate is by an order of magnitude smaller than the lower limit proposed in the article. Also the decay of the ordinary pion could proceed by the same mechanism.
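For readability, the interaction term of item 1 can be transcribed into LaTeX; this is only a rendering of what the text states, with g denoting the dimensionful coupling (dimension length squared):

```latex
\mathcal{L}_{\mathrm{int}} =
  g\,\epsilon^{\alpha\beta\gamma\delta}\, Z_{\alpha\beta}\,
  \partial_{\gamma}X\, \partial_{\delta}Y \; ,
\qquad
Z_{\alpha\beta} = \partial_{\alpha}Z_{\beta} - \partial_{\beta}Z_{\alpha} \; .
```

Since Zαβ carries dimension 2 and each derivative term dimension 2, the total dimension 6 of the operator indeed forces g to have dimension length squared, as stated.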

Model for color bonds of nuclear strings

One should also construct a model for color bonds connecting nucleons to nuclear strings.

  1. In the nuclear string model nuclei are identified as nuclear strings with nucleons connected by color flux tubes, which can be neutral or charged and can have net color, so that color confinement would be in question in the nuclear length scale. The possibility of charged color flux tubes predicts the existence of exotic nuclei with some neutrons replaced by a proton plus a negatively charged color flux tube, looking like a neutron from the point of view of chemistry, or some protons replaced with a neutron plus a positively charged flux tube. A nuclear excitation with energy determined by the difference of the initial and final color bond energies is in question.
  2. The color magnetic flux tubes are analogous to the mesons of hadron physics except that they can be colored and are naturally pseudo-scalars in the ground state. These pion like colored flux tubes can be excited to a colored state analogous to the ρ meson with spin 1 and net color. Color bonds would be rather long flux loops with size scale determined by the mass scale of the color bond: 17 MeV gives an estimate, which is the electron Compton length divided by about 34 and would correspond to the p-adic length scale k=121>113, so that the length would be about 2(121-113)/2=16 times longer than the nuclear length scale.
  3. If the color bonds (cb) are indeed colored, the mass ratio m(ρ,cb)/m(π,cb) need not be equal to m(ρ,107)/m(π,107)=5.74. If the ρ and π type states are closed string like objects in the same sense as elementary particles are, so that there is a closed magnetic monopole flux tube along the first sheet going through a wormhole contact to another space-time sheet and returning back, the scaling m(ρ/π,107)/m(ρ/π,113)= 8 should hold true.

Model for 8Be* → 8Be +X

With these ingredients one can construct a model for the decay 8Be* → 8Be +X.

  1. 8Be* could correspond to a state for which a pionic color(ed) bond is excited to a ρ type color(ed) bond. The decay 8Be* → 8Be +X would mean a snipping off of a color singlet π meson type closed flux tube from the color bond, leaving a pion type color bond. The reaction would be analogous to the emission of a closed string from an open string. m(X)=17 MeV would be the mass of the color-singlet closed string emitted, equal to m(π,113)=17 MeV. The emitted π would be in the l=1 partial wave so that the resonant decay to a gamma pair would not occur, but the decay to e+e- pairs is possible just as for the ordinary pion.
  2. Energy conservation suggests the identification of the excitation energy of 8Be* as the mass difference of the ρ and π type colored bonds (cb): Eex(8Be*)=m(ρ,cb)-m(π,cb)= m(π,113)= 17 MeV in the approximation that X is created at rest. If one has m(ρ,cb)/m(π,cb)= m(ρ)/m(π) - this is not necessary - this gives m(ρ,cb)≈ 20.6 MeV and m(π,cb)≈ 3.6 MeV.
  3. This estimate is based on mass differences and says nothing about the nuclear binding energy. If the color bonds carry positive energy, the binding energy should be localizable to the interaction of the quarks at the ends of the color bonds with the nucleons. The model clearly assumes that the dynamics of the color bonds separates from the dynamics of the nuclei in the case of the anomaly.
  4. The assumption about a direct coupling of X to quarks and therefore to nucleons does not make sense in this framework. Hence protophobia does not hold true in TGD, and this is due to the presence of long color bonds in nuclear strings. Also the spin 1 assignment would be wrong.
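The mass estimates of item 2 follow from the two stated conditions; a short check of the arithmetic (standard ρ(770) and π0 masses assumed):

```python
# Mass estimates for the colored bonds (cb) from the two conditions in the text:
#   m(rho,cb) - m(pi,cb) = 17 MeV  and  m(rho,cb)/m(pi,cb) = m(rho)/m(pi).
m_rho, m_pi = 775.26, 134.98          # MeV; standard rho(770) and pi0 masses
r = m_rho / m_pi                      # ~5.74
m_pi_cb = 17.0 / (r - 1.0)            # ~3.6 MeV
m_rho_cb = r * m_pi_cb                # ~20.6 MeV
assert abs((m_rho_cb - m_pi_cb) - 17.0) < 1e-9
```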
To conclude, this new nuclear physics is the physics of the magnetic body of the nucleus and involves the hierarchy of Planck constants in an essential manner, and the proposed solution to the mysteriously puzzling decay rate π(113)→ γγ could turn out to provide a direct experimental proof for the hierarchy of Planck constants. It also suggests a new approach to the leptonic decays of hadrons based on dark or p-adically scaled down variants of weak interactions. The proposal for the explanation of the anomaly in the charge radius of the proton involves the physics of the magnetic body of the proton (see this). TGD inspired quantum biology is to a high degree quantum physics of the magnetic body. Maybe the physics of the magnetic body differentiates to its own branch of physics someday.

For details see the article X boson as evidence for nuclear string model or the chapter Nuclear string model.



Badly behaving photons and space-time as 4-surface

There was a very interesting popular article with the title Light Behaving Badly: Strange Beams Reveal Hitch in Quantum Mechanics. The article told about a discovery made by a group of physicists at Trinity College Dublin in Ireland in the study of helical light-beams with conical geometry. These light beams are hollow and have the axis of the helix as a symmetry axis. The surprising finding was that according to various experimental criteria one can say that photons have spin S=+/- 1/2 with respect to rotations around the axis of the helix.

The first guess would be that this is due to the fact that the rotational symmetry of the spiral conical beam is broken to axial rotational symmetry around the beam axis. This makes the situation 2-dimensional. In D=2 one can have braid statistics allowing fractional angular momentum for rotations around a hole - now the hollow interior of the beam. One can however counter-argue that photons with half-odd-integer braid spin should obey Fermi statistics. This would mean that only one photon with fixed spin is possible in the beam. Something seems to go wrong with the naive argument: the exchange of photons does not correspond to a 2π rotation as a homotopy - this would be the topological manner to state the problem.

The authors of the article suggest that besides the ordinary conserved angular momentum one can identify also a second conserved angular-momentum-like operator.

  1. The conserved angular momentum is obtained as the replacement J=L+S → Jγ= L+γ S .
  2. The eigenvalue equation for Jγ for a superposition of right and left polarizations with S=+/- 1

    a1× eR exp(il1θ)+a2 × eL exp(il2θ) ,

    where the orbital quantum numbers li are integers and sz=+/- 1 is the spin, makes sense for

    γ =(l1-l2)/2 ,

    and gives the eigenvalue

    Jγ= (l1+l2)/2.

    Since l1 and l2 are integers by the continuity of the wave function at 2π (even this can be questioned in hollow conical geometry), (l1+l2)/2 and (l1-l2)/2 are either both integers or both half-odd integers. For l1-l2=1 one has Jγ= J1/2= L+S/2, which is half-odd integer. The stronger statement would be that the 2-D Sγ =S/2 is half-odd integer.
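The little eigenvalue computation above can be checked directly. A minimal sketch (assuming the sign convention in which the exp(il1θ) component carries S=-1 and the exp(il2θ) component S=+1, which reproduces the γ=(l1-l2)/2 quoted in the text; the function name is my own):

```python
from fractions import Fraction

def gamma_and_eigenvalue(l1, s1, l2, s2):
    """Solve l1 + gamma*s1 == l2 + gamma*s2 for gamma; the common value
    is then the eigenvalue j_gamma of J_gamma = L + gamma*S."""
    gamma = Fraction(l2 - l1, s1 - s2)
    j = l1 + gamma * s1
    assert j == l2 + gamma * s2   # both components give the same eigenvalue
    return gamma, j

# The (l1, l2) = (0, 1) case of the text: gamma = -1/2, j_gamma = 1/2,
# so (l1+l2)/2 and (l1-l2)/2 are both half-odd integers here.
g, j = gamma_and_eigenvalue(0, -1, 1, +1)
print(g, j)   # -1/2 1/2
```

With the opposite assignment of polarizations the sign of γ flips, but the eigenvalue (l1+l2)/2 is the same either way.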

There is an objection against this interpretation. The dependence of the angular momentum operator on the state of photon implied by γ= (l1-l2)/2 is a highly questionable feature. Operators should not depend on states but define them as their eigenstates. Could one understand the experimental findings in some different manner? Could the additional angular momentum operator allow some natural interpretation? If it really generates rotations, where does it act?

In the TGD framework this question relates in an interesting manner to the assumption that space-time is a 4-surface in M4× CP2. Could X4 and M4 correspond to the two loci for the action of rotations? One can indeed have two kinds of photons. Photons can correspond to space-time sheets in M4× CP2 or they can correspond to space-time sheets topologically condensed to a space-time surface X4⊂ M4× CP2. For the first option one would have the ordinary quantization of angular momentum in M4. For the second option one has quantization of X4 angular momentum, which in the units of M4 angular momentum could correspond to half-integer or even more general quantization.

  1. For the first option (photons in M4) the angular momentum J(M4)=L(M4)+S(M4) acts at the point-like limit on a wave function of the photon in M4. J(M4), acting as a generator of rotations in M4, should have the standard properties: in particular, photon spin is S=+/-1.
  2. For topologically condensed photons at the helix the angular momentum operator J(X4)=L(X4)+ S(X4) generates at the point-like limit rotations in X4. If M4 coordinates - in particular the angle coordinate φ around the helical axis - are used for X4, the identifications

    J(X4)=k J(M4) , L(X4)=k L(M4) , S(X4)=kS(M4) .

    are possible.

  3. In the recent case X4 corresponds effectively to a helical conical path of the photon beam, which is effectively a 2-D system with axial U(1) symmetry. The space-time surface associated with the helical beam is analogous to a covering space of the plane defined by the Riemann surface of z^(1/n) with the origin excluded (the hollowness of the spiral beam is essential since at the z-axis various angles φ correspond to the same point and one would obtain a discontinuity). It takes n full turns before one gets back to the original point. This implies that L(X4)=k L(M4) can be fractional with unit hbar/n, meaning k=1/n when the angle coordinate of M4 serves as the angle coordinate of X4.
  4. For n=2 one has k=1/2 and 4π rotations in Minkowski space interpreted as shadows of rotations at X4 must give a phase equal to unity. This would allow half integer quantization for J(X4),L(X4) and S(X4) of photon in M4 units. S(X4) corresponds to a local rotation in tangent space of X4. The braid rotation defined by a path around the helical axis corresponds to a spin rotation and by k=1/2 to S(X4)=S(M4)/2= 1/2. Hence one has effectively S(M4) =+/- 1/2 for the two circular polarizations and thus γ=+/- 1/2 independently of li: in the above model γ=(l1-l2)/2 can have also other values. Now also other values of n besides n=2 are predicted.

    li can be both integer and half-odd integer valued. One can reproduce the experimental findings for integer valued l1 and l2. One has j=l1+1/2=l2-1/2 from the condition that superpositions of both right and left-handed spiral photons are possible. If j is half-odd integer, l1+l2=2j is an odd integer. For instance, S(X4)=1/2 gives l1-l2=-1, consistent with the integer/half-odd integer property of both l1 and l2. For j=1/2 one has l1+l2=1 and l1-l2=-1 giving (l1,l2)=(0,1).

  5. Is there something special in n=2? In TGD elementary particles have as building bricks wormhole contacts connecting two space-time sheets. If the sheets form a covering of M4 singular along a plane M2, one has n=2 naturally.
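The fractionization claimed in step 3 can be illustrated numerically: on an n-fold cover the angle coordinate has period 2πn, so exp(imφ) is single-valued exactly when m is a multiple of 1/n. A minimal sketch (`allowed_lz` is a hypothetical helper name):

```python
import cmath, math

def allowed_lz(n, kmax=6):
    """Angular momenta m (in units of hbar) for which exp(i*m*phi) is
    single-valued on an n-fold cover, where phi has period 2*pi*n."""
    ms = [k / n for k in range(-kmax, kmax + 1)]
    for m in ms:
        # the wave function returns to itself only after n full turns
        assert abs(cmath.exp(1j * m * 2 * math.pi * n) - 1) < 1e-9
    return ms

print(allowed_lz(2)[:4])   # n=2 allows half-odd-integer values: [-3.0, -2.5, -2.0, -1.5]
```

For n=2 the spectrum contains the half-odd-integer values used in the text; for general n the unit is 1/n.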
One can worry about many-sheeted statistics. The intuitive view is that one just adds bosons/fermions at different sheets and each sheet corresponds to a discrete degree of freedom.
  1. Statistics is not changed to Fermi statistics if the exchange interpreted at X4 corresponds to n× 2π rotation. For n=2 a possible modification of the anti-commutation relations would be doubling of oscillator operators assigning ak(i), i=1,2 to the 2 sheets and formulating braid anti-commutativity as

    {ak(1),al(2)}=0 , {ak(1),al†(2)}=0 , {ak†(1),al†(2)}=0 ,

    [ak(i),al(i)]=0 , [ak†(i), al†(i)]=0 , [ak(i), al†(i)]=δk,l .

    This would be consistent with Bose-Einstein statistics. For n-sheeted case the formula replacing pair (1,2) with any pair (i,j≠ i) applies. One would have two sets of mutually commuting (creation) operators and these sets would anti-commute and Bose-Einstein condensates seem to be possible.

  2. One can worry about the connection with the hierarchy of Planck constants heff=n× h, which is assigned with a singular n-sheeted covering space. The 3-D surfaces defining the ends of the covering at the boundaries of the causal diamond (CD) would in this case coincide. This might be the case now since the photon beam is assumed to be a conical helix. The space-time surface would be analogous to n 3-D paths, which coincide at their ends at the past and future boundaries of the CD.

    Does the scaling of Planck constant by n compensate for the fractionization so that the only effect would be a doubled Bose-Einstein condensate? It would seem that these condensates need not have the same number of photons. The scaling of cyclotron energies by n is central in the application of the heff=nh idea. It could be interpreted by saying that a single boson state is replaced with an n-boson state with the same cyclotron frequency but n-fold energy.

  3. In the fermionic case one obtains n additional degrees of freedom and the ordinary single fermion state would be replaced with a set of states containing up to n fermions. This would lead to a kind of breakdown of fermion statistics possibly having an interpretation in terms of braid statistics. An old question is whether one could understand quark color as heff/h=n=3 braid statistics for leptons. At the level of CP2 spinors em charge corresponds to the sum of vectorial isospin and of anomalous color hypercharge, which for leptons is an n=3 multiple of that for quarks. This could perhaps be interpreted in terms of scaling in the hypercharge degree of freedom due to a 3-sheeted covering. This picture does not however seem to work.
To sum up, also M4 angular momentum and spin make sense and are integer valued, but for the system identifiable as topologically condensed photon plus helix rather than topologically condensed photon at helix. Many-sheeted space-time can in principle give rise to several angular momenta of this kind. Symmetry breaking to the SO(2) subgroup is however involved. The general prediction is 1/n fractionization.

For details see the article Badly behaving photons and space-time as 4-surface or the chapter Criticality and dark matter.



Could the replication of mirror DNA teach something about chiral selection?

I received a link to a very interesting popular article from which I learned that short strands of mirror DNA and mirror RNA - known as aptamers - have been produced commercially for decades - a total surprise to me. Aptamers bind to targets like proteins and block their activity, and this ability can be utilized for medical purposes.

Now researchers at Tsinghua University in Beijing have been able to create a mirror variant of an enzyme - DNA polymerase - catalyzing the transcription of mirror DNA to mirror RNA and also the replication of mirror DNA. What is needed are the DNA strand to be replicated or transcribed, the mirror DNA nucleotides, and a short primer strand, since the DNA polymerase starts to work only if the primer is present. This is like recalling a poem only after hearing the first few words.

The commonly used DNA polymerase containing about 600 amino-acids is too long to be built up as a right-handed version, and the researchers used a much shorter one: a polymerase from African swine fever virus having only 174 amino-acids. The replication turned out to be very slow. A primer of 12 nucleotides was extended to a strand of 18 nucleotides in about 4 hours: 3/2 nucleotides per hour. The extension to a strand of 56 nucleotides took 36 hours, making 44/36 = 11/9 nucleotides per hour. DNA and its mirror image co-existed peacefully in a solution. One explanation for the absence of mirror life is that the replication and transcription of the mirror form was so slow that it lost the fight for survival. A second explanation is that the emergence of mirror forms of DNA polymerase and other enzymes was less probable.
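The quoted rates are just averages over the reported strand lengths and times; a minimal arithmetic check:

```python
from fractions import Fraction

def avg_rate(primer_len, final_len, hours):
    """Average extension rate in nucleotides per hour."""
    return Fraction(final_len - primer_len, hours)

print(avg_rate(12, 18, 4))    # 3/2 nucleotides per hour over the first 4 hours
print(avg_rate(12, 56, 36))   # 11/9 nucleotides per hour up to 56 nucleotides
```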

Can one learn anything from this?

  1. Chiral selection is one of the deep mysteries of biology. Amino-acids are left-handed and DNA and RNA double strands form a right-handed screw. One can assign handedness to individual DNA nucleotides and to the DNA double strand, but web sources speak only about the chirality of the double strand. If the chirality of the DNA nucleotides were not fixed, it would very probably have been discovered a long time ago as an additional bit doubling the number of DNA letters.
  2. What could be the origin of the chirality selection? The second helicity could have been the loser in the fight for survival, and the above finding supports this: the fast ones eat the slow ones, like in market economy. There must however be a breaking of mirror symmetry. Weak interactions break mirror symmetry, but the breaking is extremely small because the weak bosons mediating the weak interaction are so massive that the length scale in which the breaking of mirror symmetry matters is of order 1/100 times the proton size. This breaking is quite too small to explain chiral selection occurring in nano-scales: there is a discrepancy of 8 orders of magnitude. The proposal has been that the breaking of mirror symmetry has been spontaneous and induced by a very small seed. As far as I know, no convincing candidate for the seed has been identified.
According to the TGD inspired model, chiral selection would be induced from that in the dark matter sector, identified in terms of phases of ordinary matter with a non-standard value of Planck constant heff/h= n. In living matter dark matter would reside at magnetic flux tubes and control ordinary matter. TGD predicts standard model couplings, in particular weak parity breaking. For heff/h= n the scale below which weak bosons behave as massless particles, implying large parity breaking, is scaled up by n. Large parity breaking for dark matter becomes possible even in biological length scales for large enough heff.

The crucial finding is that the states of dark proton regarded as part of dark nuclear string can be mapped naturally to DNA, RNA, tRNA, and amino-acid molecules and that vertebrate genetic code can be reproduced naturally. This suggests that genetic code is realized at the level of dark nuclear physics and induces its chemical variant. More generally, biochemistry would be kind of shadow of dark matter physics. A model for dark proton sequences and their helical pairing is proposed and estimates for the parity conserving and breaking parts of Z0 interaction potential are deduced.

For details see the article Could the replication of mirror DNA teach something about chiral selection? or the chapter Criticality and dark matter of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy".



One step further in the understanding the origins of life

I learned about a very interesting discovery related to the problem of understanding how the basic building bricks of life might have emerged. RNA (DNA) has the nucleotides A, G, C, U (T) as basic building bricks.

The first deep question is how the nucleotides A,G,C,U, and T emerged.

  1. There are two types of nucleotides. Pyrimidines (C and T/U) have a single carbon 6-cycle. Purines (A and G) in turn have a 6-cycle and a 5-cycle fused together along one side. Purines are clearly more complex than pyrimidines.
  2. U.K. chemist John Sutherland demonstrated a plausible sequence of steps leading to the emergence of pyrimidines. Purines turned out to be more problematic. Leslie Orgel and colleagues suggested a possible pathway but it produces purines in too tiny amounts.
Now a group led by Thomas Carell at Ludwig Maximilian University has found a more plausible mechanism.
  1. Carell and colleagues studied the interaction of biomolecule formamido-pyrimidine (FaPy) with DNA and found that it also reacts to produce purines. Could FaPys have served as predecessors of purines? (For formamide see this and for the class of chemical compounds known as amines see this).
  2. The first step would have been a copious production of amino-pyrimidines containing several chemical groups known as amines. The problem is that there are so many amines and they normally react indiscriminately to produce many different compounds. One wants mostly purines, so that only one critical amine is wanted.
  3. When Carell and his team added some acid to the solution to decrease its pH, a miracle happened. The extra protons from acid attached to the amines of the amino-pyrimidine and made them non-reactive. There was however one exception: just the amine giving rise to purine in its reactions! The reactive amine also readily bonded with formic acid or formamide. Hence it seems that one big problem has been solved.
The second challenge is to understand how the building bricks of RNA and DNA combined to form longer polymers and began to replicate.
  1. One prevailing vision is that a so called RNA world preceded the recent biology dominated by DNA. The goal has been to achieve the generation of RNA sequences in the laboratory. Unlike DNA, RNA sequences are not stable, and long sequences are difficult to generate. DNA in turn replicates only inside the cell, and the presence of what is known as ordered water seems to be essential for this.
  2. This step might involve new physics and chemistry, and I have considered the possibility that the new physics involves magnetic bodies and dark proton sequences as a representation of the genetic code at the level of dark nuclear physics. Needless to say, the fact that dark proton states provide representations for RNA, DNA, tRNA, and amino-acids (see this) looks like a miracle, and I still find it difficult to believe that it is true. Also the representation of the vertebrate code emerges in terms of correspondences of dark proton states.

    This suggests that the replication of DNA takes place at the level of dark proton sequences - dark nuclear strings - serving as a dynamical template for the biological replication. Also transcription and translation would be induced by dark processes. Actually all biochemical processes could have as template the dynamics of molecular magnetic bodies, and biochemistry would be a kind of shadow of a deeper dynamics.

  3. There is actually support for dark proton sequences. Quite recently I learned about the article of Leif Holmlid and Bernhard Kotzias (see this) about the superdense phase of hydrogen. In TGD the superdense phase has an interpretation as dark proton sequences at magnetic flux tubes, with the Compton length of the dark proton scaled by heff/h ≈ 2^11 up to essentially the electron's Compton length (see this). Remarkably, it is reported that the superdense hydrogen is a superconductor and superfluid at room temperature and even above: this is just what TGD predicts.

    The dark protons in TGD inspired quantum biology (see this) should have a much longer Compton length, of the order of the distance between nucleotides in DNA sequences, in order to serve as templates for chemical DNA. This gives a dark Compton length of order ≈ 3.3 Angstroms from the fact that there are 10 codons per 10 nm. This gives heff/h ≈ 2^18.
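The estimate heff/h ≈ 2^18 follows from the ratio of the nucleotide spacing to the ordinary proton Compton length; a back-of-the-envelope sketch (constants rounded):

```python
import math

H = 6.626e-34        # Planck constant, J s
M_P = 1.6726e-27     # proton mass, kg
C = 2.9979e8         # speed of light, m/s

lambda_p = H / (M_P * C)   # ordinary proton Compton length, ~1.32e-15 m
spacing = 10e-9 / 30       # 10 codons = 30 nucleotides per 10 nm, ~3.3 Angstrom
n = spacing / lambda_p     # required scaling heff/h
print(math.log2(n))        # close to 18, i.e. heff/h ~ 2**18
```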

One can now return to the first step in the genesis of DNA and RNA. The addition of protons to the solution used to model the prebiotic environment, making it slightly acidic, was the key step. Why?
  1. Here cold fusion might help. Cold fusion is claimed to take place in electrolysis involving ionization and charge separation. The electric fields used in electrolysis induce ionization and thus charge separation. For me it has however remained a mystery how electric fields, which are extremely tiny using the typical strength of molecular electric fields as the standard, are able to induce a charge separation. Of course, every chemist worth his salt regards this as a totally trivial problem. I am however foolish enough to consider the possibility that some new physics might be involved.
  2. The mechanism causing the charge separation could be analogous to that discovered by Pollack as he irradiated water bounded by a gel phase (see this): in the recent case the electric field would take the role of the irradiation as a feeder of energy. Negatively charged exclusion zones (EZs) were formed, and 1/4 of the protons went somewhere.

    The TGD proposal (see this) is that part of the protons went to magnetic flux tubes and formed dark proton sequences identifiable as dark nuclear strings. The scaled down nuclear binding energy favours the formation of dark nuclear strings, perhaps proceeding as an analog of a nuclear chain reaction. This picture allows one to ask whether dark proton sequences giving rise to a fundamental representation of the genetic code could have been present already in water (see this).

  3. How could DNA/RNA have formed, then? Could the protons making the solution acidic be dark, so that the proton attaching to the amine would be dark? Could it be that for all amines except the right one the proton transforms to an ordinary proton and destroys the chemical reactivity? Could the attached dark proton remain dark just for the correct amine, so that the amine would remain reactive and give rise to purine in further reactions? Could A, G, C, T and U be those purines and pyrimidines - or even more general biomolecules - for which the attachment of a dark proton does not transform it to an ordinary proton and in this manner dramatically affect the chemical properties of the molecule? What is the condition for the preservation of the darkness of the proton?

See the chapter Quantum criticality and dark matter or the article One step further in the understanding the origins of life.



Phase transition temperatures of 405-725 K in superfluid ultra-dense hydrogen clusters on metal surfaces

I received from Jouni a very helpful comment to an earlier blog posting telling about the work of Prof. Leif Holmlid related to cold fusion and comparing Holmlid's model with the TGD inspired model (see also the article). This helped me to find a new article of Holmlid and Kotzias with the title "Phase transition temperatures of 405-725 K in superfluid ultra-dense hydrogen clusters on metal surfaces", published towards the end of April and providing very valuable information about the superdense phase of hydrogen/deuterium that he postulates to be crucial for cold fusion (see this).

The postulated superdense phase would have properties surprisingly similar to the phase postulated to be formed by dark magnetic flux tubes carrying dark proton sequences generating dark beta stable nuclei by dark weak interactions. My original intuition was that this phase is not superdense but has a density nearer to ordinary condensed matter density. The density however depends on the value of Planck constant, and with a Planck constant of order mp/me ≈ 0.9 × 2^11 ≈ 1836 times the ordinary one, one obtains the density reported by Holmlid, so that the models become surprisingly similar. The earlier representations were mostly based on the assumption that the distance between dark protons is in the Angstrom range rather than the picometer range and thus longer by a factor 32. The modification of the model is straightforward: one prediction is that radiation with an energy scale of 1-10 keV should accompany the formation of dark nuclei.

In fact, there are also similarities that I did not know of!

  1. The article tells that the structures formed from hydrogen/deuterium atoms are linear string like structures: this was completely new to me. The support comes from the detection of what is interpreted as decay products resulting from fragmentation in the central regions of these structures. What is detected is the time-of-flight distribution for the fragments. In the TGD inspired model the magnetic flux tubes carrying dark proton/D sequences giving rise to dark nuclei are also linear structures.
  2. The reported superfluid (superconductor) property and the detection of the Meissner effect for the structures were also big news to me and conform with the TGD picture allowing dark supraphases at flux tubes. Superfluid/superconductor property requires that protons form Cooper pairs. The proposal of Holmlid and Kotzias that Cooper pairs are pairs of protons orthogonal to the string like structure corresponds to the model of high Tc superconductivity in the TGD inspired model of quantum biology assuming a pair of flux tubes with the tubes containing the members of the Cooper pairs. High Tc would be due to the non-standard value of heff=n× h. This finding would be a rather direct experimental proof for the basic assumption of TGD inspired quantum biology (see this).
  3. In the TGD model it is assumed that the density of protons at a dark magnetic flux tube is determined by the value of heff. Also ordinary nuclei are identified as nuclear strings, and the density of protons would be the linear density of protons for ordinary nuclear strings scaled down by the inverse of heff - that is, by the factor h/heff=1/n.

    If one assumes that a single proton in an ordinary nuclear string occupies a length given by the proton Compton length - equal to (me/mp) times the electron Compton length - and that the length occupied by a dark proton is 2.3 pm, very nearly equal to the electron Compton length 2.4 pm, in the ultra-dense phase of Holmlid, the value of n must be rather near n ≈ mp/me ≈ 2^11 ≈ 2000, the ratio of the Compton lengths of electron and proton. The physical interpretation would be that the p-adic length scale of the proton is scaled up to essentially that of the electron, which from p-adic mass calculations corresponds to the p-adic prime M127=2^127-1 (see this). The ultra-dense phase of Holmlid would correspond to dark nuclei with heff/h ≈ 2^11.

    My earlier intuition was that the density is of the order of the ordinary condensed matter density. If the nuclear binding energy scales as 1/heff (scaling like the Coulomb interaction energy), as assumed in the TGD model, the nuclear binding energy per nucleon would scale down from about 7 MeV to about 3.5 keV for k=127. This energy scale is the same as that of the Coulomb interaction energy for a distance of 2.3 pm in Holmlid's model (about 5 keV). It must be emphasized that larger values of heff are possible in the TGD framework and are indeed suggested by TGD inspired quantum biology. The original, too restricted hypothesis was that the values of n come as powers of 2^11.
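The scaling of the binding energy per nucleon quoted above is simple arithmetic; a sketch under the assumption heff/h = 2^11:

```python
N = 2**11                     # assumed heff/h, close to m_p/m_e = 1836.15

E_ORDINARY_MEV = 7.0          # typical nuclear binding energy per nucleon
e_dark_kev = E_ORDINARY_MEV * 1e3 / N   # scales like 1/heff in this model

print(round(e_dark_kev, 1))   # 3.4 (keV), the scale quoted in the text
print(round(1836.15 / N, 2))  # 0.9: m_p/m_e is indeed close to 2**11
```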

  4. In the TGD based model the scaled down dark nuclear binding energy would (more than) compensate for the Coulomb repulsion. The laser pulse would induce a phase transition increasing the density of protons and also increasing the Planck constant, making the protons dark and leading from a state of free protons to one consisting of dark purely protonic nuclei, which in turn transform by dark weak interactions to beta stable nuclei and finally to ordinary nuclei, liberating essentially the ordinary nuclear binding energy. The phase transition would give rise to charge separation and would be highly analogous to that occurring in Pollack's experiments.
It seems that the model of Holmlid and the TGD based model are very similar, and Holmlid's experimental findings support the vision about a hierarchy of dark matter as phases of ordinary matter labelled by the value of heff/h=n. There are also other anomalies that might find an explanation in terms of dark nuclei with n ≈ 2^11. The X rays from the Sun have been found to induce a yearly variation of nuclear decay rates correlating with the distance of Earth from the Sun (see for instance this, this and this).
  1. One possible TGD based explanation relies on the nuclear string model. Nucleons are assumed to be connected by color flux tubes, which are usually neutral but can also be charged. For instance, a proton plus a negatively charged flux tube connecting it to the neighboring nucleon behaves effectively as a neutron. This predicts exotic nuclei with the same chemical properties as ordinary nuclei but with possibly different statistics. X rays from the Sun could induce transitions between ordinary and exotic nuclei affecting the measured nuclear reaction rates, which are averages over all states of the nuclei. A scaled down variant of gamma ray spectroscopy of ordinary nuclei would provide an experimental proof of the TGD based model.
  2. The fact that the energy scale is around 3 keV suggests that X rays could generate transitions of dark nuclei. If so, the transformations of dark nuclei to ordinary ones would affect the measured nuclear transition rates. There are also other anomalies (for instance those reported by Rolfs et al; for references see the article), which might find an explanation in terms of the presence of dark variants of ordinary nuclei.
For background and references see the chapter Cold fusion again of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy" or article with the same title.



NMP and adelic physics

In a given p-adic sector the entanglement entropy (EE) is defined by replacing the logarithms of probabilities in the Shannon formula by the logarithms of their p-adic norms. The resulting entropy satisfies the same axioms as ordinary entropy but makes sense only for probabilities which are rational valued or in an algebraic extension of rationals. The algebraic extension corresponds to the evolutionary level of the system, and the algebraic complexity of the extension serves as a measure for this level. p-Adically also extensions determined by roots of e can be considered. What is so remarkable is that the number theoretic entropy can be negative.

A simple example allows one to get an idea about what is involved. If the entanglement probabilities are rational numbers Pi=Mi/N, ∑i Mi=N, then the primes appearing as factors of N give a negative contribution to the number theoretic entanglement entropy and thus correspond to information. The factors of Mi give positive contributions. For maximal entanglement with Pi=1/N the EE is in this case negative. The interpretation is that the entangled state represents quantally a concept or a rule as a superposition of its instances defined by the state pairs in the superposition. The identity matrix means that one can choose the state basis in an arbitrary manner, and the interpretation could be in terms of an "enlightened" state of consciousness characterized by "absence of distinctions". In the general case the basis is unique.
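The number theoretic entropy is easy to compute explicitly for rational probabilities. A minimal sketch (function names are my own) replacing log Pi in the Shannon formula by log |Pi|_p:

```python
from fractions import Fraction
from math import log

def v_p(q, p):
    """p-adic valuation of a nonzero rational q, so that |q|_p = p**(-v_p(q))."""
    n, d, v = q.numerator, q.denominator, 0
    while n % p == 0:
        n //= p; v += 1
    while d % p == 0:
        d //= p; v -= 1
    return v

def padic_entropy(probs, p):
    """Shannon formula with log(P_i) replaced by log|P_i|_p = -v_p(P_i)*log(p)."""
    return sum(float(P) * v_p(P, p) * log(p) for P in probs)

# Maximal entanglement P_i = 1/N for N = 12 = 2**2 * 3: the entropy is
# negative (negentropy) exactly for the primes dividing N.
probs = [Fraction(1, 12)] * 12
print(padic_entropy(probs, 2))   # -2*log(2) < 0
print(padic_entropy(probs, 3))   # -log(3)  < 0
print(padic_entropy(probs, 5))   # 0.0: 5 does not divide 12
```

For Pi = Mi/N the factors of N push the value negative and the factors of Mi push it positive, as stated above.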

Metabolism is a central concept in biology and neuroscience. Usually metabolism is understood as transfer of ordered energy and various chemical metabolites to the system. In TGD metabolism could be basically just a transfer of negentropic entanglement (NE) from nutrients to the organism. Living systems would be fighting for NE to stay alive (NMP is merciless!) and stealing of NE would be the fundamental crime.

TGD has been plagued by a longstanding interpretational problem: can one apply the notion of number theoretic entropy in the real context or not? If this is possible at all, under what conditions is this the case? How does one know that the entanglement probabilities are not transcendental, as they would be in the generic case? There is also a second problem: p-adic Hilbert space is not a well-defined notion, since the sum of p-adic probabilities defined as moduli squared for the coefficients of a superposition of orthonormal states can vanish and one obtains zero norm states.

These problems disappear if the reduction occurs in the intersection of reality and p-adicities, since there the Hilbert spaces have some algebraic number field as coefficient field. By strong holography (SH) the 2-D states provide all the information needed to construct quantum physics - in particular, quantum measurement theory.

  1. The Hilbert spaces defining state spaces have as their coefficient field always some algebraic extension of rationals so that number theoretic entropies make sense for all primes. p-Adic numbers as coefficients cannot be used and reals are not allowed. Since the same Hilbert space is shared by real and p-adic sectors, a given state function reduction in the intersection has real and p-adic space-time shadows.
  2. State function reductions at these 2-surfaces at the ends of the causal diamond (CD) take place in the intersection of realities and p-adicities if the parameters characterizing these surfaces are in the algebraic extension considered. It is however not absolutely necessary to assume that the coordinates of WCW belong to the algebraic extension, although this looks very natural.
  3. NMP applies to the total EE. It can quite well happen that NMP for the sum of the real and p-adic entanglement entropies does not allow the ordinary state function reduction to take place, since the p-adic negentropies for some primes would become zero and net negentropy would be lost. There is a competition between the real and p-adic sectors, and the p-adic sectors can win! Mind has causal power: it can stabilize quantum states against state function reduction and tame the randomness that quantum physics would exhibit in the absence of cognition! Can one interpret this causal power of cognition in terms of intentionality? If so, p-adic physics would also be the physics of intentionality, as originally assumed.
A fascinating question is whether the p-adic view about cognition could allow one to understand the mysterious looking ability of idiot savants (not only of them but also of some of the greatest mathematicians) to decompose large integers into prime factors. One possible mechanism is that the integer N represented concretely is mapped to a maximally entangled state with entanglement probabilities Pi=1/N, which means NE for the prime factors of Pi or N. The factorization would be experienced directly.

One can also ask whether the other mathematical feats performed by idiot savants could be understood in terms of their ability to directly experience - "see" - the prime composition (adelic decomposition) of an integer or even a rational. This could for instance allow one to "see" whether an integer is - say - a 3rd power of some smaller integer: all prime exponents in it would be multiples of 3. If the person is able to generate an NE for which the probabilities Pi=Mi/N are apart from normalization equal to given integers Mi, ∑ Mi=N, then they could be able to "see" the prime compositions of Mi and N. For instance, they could "see" whether both Mi and N are 3rd powers of some integer and find the integers satisfying this condition simply by trial.
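A minimal sketch of the arithmetic behind this proposal. For Pi=1/N the p-adic norm is |1/N|_p = p^(v_p(N)), where v_p(N) is the power of p dividing N, so the standard number-theoretic entanglement entropy S_p = -∑ Pi log|Pi|_p is negative exactly for the primes dividing N: reading off which primes carry negentropy is equivalent to factoring N. The code is only an illustration of this arithmetic, not a model of cognition; the example integers are arbitrary choices.

```python
from math import log

def padic_valuation(n, p):
    """Largest k with p**k dividing n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def number_theoretic_entropy(N, p):
    """S_p = -sum_i P_i log|P_i|_p for the maximally entangled state with
    N equal probabilities P_i = 1/N.  Since |1/N|_p = p**v_p(N), this is
    S_p = -v_p(N) log p: negative (negentropic) exactly when p divides N."""
    return -padic_valuation(N, p) * log(p)

N = 360  # = 2^3 * 3^2 * 5, an arbitrary example
primes = [2, 3, 5, 7, 11, 13]
factors = [p for p in primes if number_theoretic_entropy(N, p) < 0]
print(factors)  # [2, 3, 5] -- the negentropy-carrying primes are the factors

# "Seeing" whether N is a 3rd power: every prime exponent must be a multiple of 3.
is_cube = all(padic_valuation(N, p) % 3 == 0 for p in factors)
print(is_cube)  # False for 360; True for e.g. 1000 = 2^3 * 5^3
```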

For details see the chapter Negentropy Maximization Principle or the article TGD Inspired Comments about Integrated Information Theory of Consciousness.



Is cold fusion becoming a new technology?

The progress in cold fusion research has been really fast during the last years, and the most recent news might well mean the final breakthrough concerning practical applications, which would include not only wasteless energy production but maybe also production of elements such as metals. The popular article titled Cold Fusion Real, Revolutionary, and Ready Says Leading Scandinavian Newspaper tells about the work of Prof. Leif Holmlid and his student Sindre Zeiner-Gundersen. For more details about the work of Holmlid et al see this, this, this, and this.

The latter revealed the details of an operating cold fusion reactor in Norway reported to generate 20 times more energy than required to activate it. The estimate of Holmlid is that Norway would need 100 kg of deuterium per year to satisfy its energy needs (this would suggest that the amounts of fusion products are too small for practical element production except in situations where the amounts needed are really small). The amusing coincidence is that I constructed towards the end of the last year a detailed TGD based model of cold fusion (see this), and the findings of Leif Holmlid served as an important guideline, although the proposed mechanism is different.

Histories are cruel, and the cruel history of cold fusion begins in 1989, when Pons and Fleischmann reported anomalous heat production involving a palladium target and electrolysis in heavy water (deuterium replacing hydrogen). The reaction is impossible in a world governed by textbook physics, since the Coulomb barrier prevents the positively charged nuclei from getting close enough. If ordinary fusion were in question, the reaction products should involve gamma rays and neutrons, and these have not been observed.

The community preferred textbooks over observations and labelled Pons and Fleischmann and their followers as crackpots, and it became impossible to publish anything in so-called respected journals. The pioneers have however continued to work on cold fusion, and a few years ago the American Chemical Society had to admit that there might be something in it, so that cold fusion researchers regained the status of respectable researchers. There have been several proposals for working reactors, such as Rossi's E-Cat, and NASA is performing research in cold fusion. In countries like Finland cold fusion is still a cursed subject and will probably remain so until cold fusion becomes the main energy source for heating also the physics department.

The model of Holmlid for cold fusion

Leif Holmlid is a professor emeritus in chemistry at the University of Gothenburg. He has quite recently published work on Rydberg matter in prestigious APS journals and is now invited to tell about his work on cold fusion at a meeting of the American Physical Society.

  1. Holmlid regards Rydberg matter as a probable precursor of cold fusion. Rydberg atoms have some electrons at very high orbitals with large radius. Therefore the nucleus plus core electrons look to them like a point nucleus, with charge equal to the nuclear charge plus that of the core electrons. Rydberg matter forms layer-like structures with hexagonal lattice structure.
  2. Cold fusion would involve the formation of what Holmlid calls ultra-dense deuterium, having Rydberg matter as precursor. If I have understood correctly, the laser pulse hitting Rydberg matter would induce the formation of the ultra-dense phase of deuterium by compressing it strongly in the direction of the pulse. The ultra-dense phase would then suffer a Coulomb explosion. The compression seems to be assumed to happen in all directions. To me the natural assumption would be that it occurs only in the direction of the laser pulse, which defines the direction of the force acting on the system.
  3. The ultra-dense deuterium would have a density of about 1.3× 10^8 kg/m3, roughly 10^5 times that of ordinary water. The nuclei would be so close to each other that only a small perturbation would make it possible to overcome the Coulomb wall, and cold fusion could proceed. A critical system would be in question, and it would be hard to predict the outcome of an individual experiment. This would explain why cold fusion experiments have been so hard to replicate. The existence of ultra-dense deuterium has not been proven, but cold fusion seems to take place.

    Rydberg matter, which should not be confused with the ultra-dense phase, would be the precursor of the process. I am not sure whether Rydberg matter exists before the process or whether it is created by the laser pulse. Cold fusion would occur in the observed microscopic fracture zones of solid metal substances.
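As a sanity check on what "ultra-dense" means, one can estimate the mean internuclear spacing implied by a mass density ρ as d ≈ (m_D/ρ)^(1/3). The density value below is Holmlid's published figure of roughly 130 kg/cm^3 for the ultra-dense phase - an assumption taken from his papers, not derived here:

```python
# Back-of-envelope estimate: mean deuteron spacing implied by a mass density.
# RHO is Holmlid's published figure for ultra-dense deuterium (an assumption).
M_D = 3.34e-27       # deuteron mass, kg
RHO = 1.3e8          # ~130 kg/cm^3 expressed in kg/m^3

d = (M_D / RHO) ** (1 / 3)   # one deuteron per cube of side d
print(f"{d * 1e12:.1f} pm")  # ~3 pm, close to the ~2.3 pm D-D distance Holmlid reports
```

For comparison, the internuclear distance in an ordinary D2 molecule is about 74 pm, so the claimed phase is compressed by more than an order of magnitude in linear scale.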

Issues not so well-understood

The process has some poorly understood aspects.

  1. Muons and also mesons like pions and kaons are detected in the outgoing beam generated by the laser pulse. Muons with mass about 106 MeV could be decay products of pions with mass of 140 MeV and of kaons, but how could these particles, with masses much larger than the scale of the nuclear binding energy per nucleon - about 7-8 MeV for lighter nuclei - be produced even if low energy nuclear reactions are involved? Pions appear as mediators of the strong interaction in the old-fashioned model of nuclear interactions, but the production of on-mass-shell pions seems very implausible in low energy nuclear collisions.
  2. What is even stranger is that muons are produced even when the laser pulse is not used to initiate the reaction. Holmlid suggests that there are two reaction pathways for cold fusion: with and without the laser pulse. This forces one to ask whether the creation of Rydberg matter or something analogous to it is alone enough to induce cold fusion, and whether the laser beam actually provides the energy needed for this, so that the ultra-dense phase of deuterium would not be needed at all. The Coulomb wall problem would be solved in some other manner.
  3. The amount of gamma radiation and neutrons is small, so that ordinary fusion does not seem to be in question, as would be implied by the proposed mechanism of overcoming the Coulomb wall. Muon production would suggest muon catalyzed fusion as a mechanism of cold fusion, but also this mechanism should produce gammas and neutrons.
TGD inspired model of cold fusion

It seems that Holmlid's experiments realize cold fusion and that cold fusion might be soon a well-established technology. A real theoretical understanding is however missing. New physics is definitely required and TGD could provide it.

  1. The TGD based model of cold fusion relies on the TGD based view about dark matter. Dark matter would correspond to phases of ordinary matter with a non-standard value of Planck constant heff=n× h, implying that the Compton sizes of elementary particles and atomic nuclei are scaled up by n and can be rather large - of atomic size or even larger.

    Also weak interactions can become dark: this means that weak boson Compton lengths are scaled up, so that the bosons are effectively massless below the Compton length and weak interactions become as strong as electromagnetic interactions. If this happens, then weak interactions can lead to rapid beta decay of dark protons, transforming them to neutrons (or effectively neutrons, as it turns out). For instance, one can imagine that a proton or deuteron approaching a nucleus transforms rapidly to a neutral state by exchange of dark W bosons and can overcome the Coulomb wall in this manner: this was my original proposal for the mechanism of cold fusion.

  2. The model assumes that electrolysis leads to a formation of so called fourth phase of water discovered by Pollack. For instance, irradiation by infrared light can induce the formation of negatively charged exclusion zones (EZs) of Pollack. Maybe also the laser beam used in the experiments of Holmlid could do this so that compression to ultra-dense phase would not be needed. The fourth phase of water forms layered structures consisting of 2-D hexagonal lattices with stoichiometry H1.5O and carrying therefore a strong electric charge. Also Rydberg matter forms this kind of lattices, which suggests a connection with the experiments of Holmlid.

    Protons must go somewhere from the EZ, and the interpretation is that one proton per hydrogen bonded pair of water molecules goes to a flux tube of the magnetic body of the system as a dark proton with a non-standard value of Planck constant heff=n× h, and these dark protons form sequences - dark nuclei. If the dark nuclear binding energy scales like 1/heff (that is, like the inverse of the size), it is much smaller than that of an ordinary nucleus. The dark nuclear binding energy liberated in the formation would generate further EZs, and one would have a kind of chain reaction.

    In fact, this picture leads to the proposal that even old and boring ordinary electrolysis involves new physics. Hard to confess, but I have had grave difficulties in understanding why ionization should occur at all in electrolysis! The external electric field between the electrodes is extremely weak on atomic scales, and it is difficult to understand how it could induce the ionization needed to load the electric battery!

  3. The dark proton sequences need not be stable - the TGD counterpart of the Coulomb barrier problem. More than half of the nucleons of ordinary nuclei are neutrons, and a similar situation is the first expectation now. Dark weak boson (W) emission could lead to dark beta decay transforming a proton to a neutron or to what looks like a neutron (what this cryptic statement means would require an explanation in terms of the nuclear string model). This would stabilize the dark nuclei.

    An important prediction is that dark nuclei are beta stable, since dark weak interactions are so fast. A second important prediction is that gamma rays and neutrons are not produced at this stage. The analogs of gamma rays would have energies of the order of the dark nuclear binding energy, which is the ordinary nuclear energy scale scaled down by 1/n, so that radiation at lower energies would be produced. I have a vague memory that X rays in the keV range have been detected in cold fusion experiments. This would correspond to the atomic size scale for dark nuclei.

  4. How are ordinary nuclei then produced? The dark nuclei could return back to the negatively charged EZ (Coulomb attraction) or leave the system along magnetic flux tubes, collide with some target, and transform to ordinary nuclei by a phase transition reducing the value of heff. It would seem that metallic targets such as Pd are favored in this respect. A possible reason is that a metallic target can have a negative surface charge density (electron charge density waves are believed by some workers in the field to be important for cold fusion) and attract the positively charged dark nuclei at the magnetic flux tubes.

    Essentially all of the nuclear binding energy would be liberated - not only the difference of binding energies of the reacting nuclei as in hot fusion. At this stage also ultra-dense regions of deuterium might be created, since a huge binding energy is liberated, and this could induce also ordinary fusion reactions. This process would create fractures in the metal target.

    This would also explain the claimed strange effects of so called Brown's gas generated in electrolysis on metals: it is claimed that Brown's gas (one piece of physics, which serious academic physicists enjoying monthly salary refuse to consider seriously) can melt metals although its temperature is not much more than 100 degrees Celsius.

  5. This model would predict the formation of beta stable nuclei as dark proton sequences transform to ordinary nuclei. This process would be analogous to that believed to occur in supernova explosions and used to explain the synthesis of nuclei heavier than iron. It could even replace the hypothesis about supernova nucleosynthesis: indeed, SN1987A did not provide support for this hypothesis.

    The reactor of Rossi is reported to produce heavier isotopes of Ni and of Copper. This would strongly suggest that protons also fuse with Ni nuclei. Also heavier nuclei could enter to the magnetic flux tubes and form dark nuclei with dark protons transformed partially to neutral nucleons. Also the transformation of dark nuclei to ordinary nuclei could generate so high densities that ordinary nuclear reactions become possible.

  6. What about the mysterious production of pions and other mesons producing in turn muons?
    1. Could the transformation of dark nuclei to ordinary nuclei generate so high a local temperature that hadron physics would provide an appropriate description of the situation? Pion mass corresponds to an energy of 140 MeV and thus to a huge temperature of about 0.14 GeV. This is much higher than the temperature of the solar core and looks totally implausible.
    2. The total binding energy of a nucleus with 20 nucleons, if emitted as a single pion, would correspond to an energy of this order of magnitude. Dark nuclei are quantum coherent structures: could this make possible this kind of "holistic" process in the transformation to an ordinary nucleus? This might be part of the story.
    3. Could the transformation to an ordinary nucleus involve the emission of a dark W boson with mass about 80 GeV decaying to dark quark pairs binding to dark mesons, which eventually transform to ordinary mesons? Could dark W boson emission occur quantum coherently, so that the amplitude would be a sum over the emission amplitudes, and one would have an amplification of the decay rate so that it would be proportional to the square of the dark nuclear charge? The effective masslessness below atomic scale would make the rate for this process high. The emission would lead directly to the final state nucleus by emission of on-mass-shell mesons.
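The two scaling rules used throughout the model - the Compton length scales up by n, the dark nuclear binding energy scales down by 1/n - can be made concrete with a back-of-envelope sketch. The value n = 10^4 is purely illustrative (the text fixes no particular n), but it shows how the dark size approaches atomic scales while a typical 7 MeV nuclear binding energy drops into the keV X-ray range:

```python
# Order-of-magnitude sketch of the h_eff = n*h scaling rules stated above.
# n = 1e4 is an illustrative choice, not a value fixed by the model.
H = 6.626e-34        # Planck constant, J*s
M_P = 1.673e-27      # proton mass, kg
C = 2.998e8          # speed of light, m/s
E_BIND_MEV = 7.0     # typical nuclear binding energy per nucleon, MeV

n = 1e4
compton_p = H / (M_P * C)                 # ordinary proton Compton length, ~1.3e-15 m
dark_size = n * compton_p                 # ~1.3e-11 m, approaching atomic scale
dark_binding_keV = E_BIND_MEV * 1e3 / n   # ~0.7 keV, in the X-ray range
print(dark_size, dark_binding_keV)
```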

For background see the chapter Cold fusion again or article with the same title.



Tensor Networks and S-matrices

The concrete construction of scattering amplitudes has been the toughest challenge of TGD, and the slow progress has occurred through the identification of general principles, with many side tracks. One of the key problems has been unitarity. The intuitive expectation is that unitarity should reduce to a local notion, somewhat like classical field equations reduce time evolution to a local variational principle. The presence of propagators has however been the obstacle for locally realized unitarity, in which each vertex would correspond to a unitary map in some sense.

TGD suggests two approaches to the construction of S-matrix.

  1. The first approach is a generalization of the twistor program (this). What is new is that one does not sum over diagrams: there is a large number of equivalent diagrams giving the same outcome, and the complexity of the scattering amplitude is characterized by the minimal diagram. Diagrams correspond to space-time surfaces, so that several space-time surfaces give rise to the same scattering amplitude. This would correspond to the fact that the dynamics breaks classical determinism. Also quantum criticality is expected to be accompanied by quantum critical fluctuations breaking classical determinism. The strong form of holography would not be unique: there would be several space-time surfaces assignable as preferred extremals to given string world sheets and partonic 2-surfaces defining "space-time genes".
  2. The second approach relies on the number theoretic vision and interprets scattering amplitudes as representations for computations, with each 3-vertex identifiable as a basic algebraic operation (this). There is an infinite number of equivalent computations connecting the set of initial algebraic objects to the set of final algebraic objects. There is a huge symmetry involved: one can eliminate all loops by moving the end of a line so that the loop transforms to a vacuum tadpole, which can be snipped away. A braided tree diagram is left, the braiding meaning that the fermion lines inside the line defined by the light-like orbit are braided. This kind of braiding can occur also for space-like fermion lines inside magnetic flux tubes, defining a correlate for entanglement. Braiding is the TGD counterpart of the problematic non-planarity in the twistor approach.
A third approach, involving local unitarity as an additional key element, is suggested by tensor networks relying on the notion of perfect entanglement discussed by Preskill et al (see this and this); a detailed representation can be found in the article of Preskill et al.
  1. Tensor networks provide an elegant representation of holography mapping interior states isometrically (in Hilbert space sense) to boundary states or vice versa for selected subsets of states defining the code subspace for holographic quantum error correcting code. Again the tensor net is highly non-unique but there is some minimal tensor net characterizing the complexity of the entangled boundary state.
  2. Tensor networks have two key properties, which might be abstracted and applied to the construction of the S-matrix in zero energy ontology (ZEO): a perfect tensor defines an isometry from any subspace defined by an index subset to its complement, and the graph representing the network is non-unique. As far as the construction of a Hilbert space isometry between local interior states and highly non-local entangled boundary states is concerned, these properties are enough.
One cannot avoid the idea that these three constructions are different aspects of one and the same construction, and that the tensor net construction with perfect tensors representing vertices could provide an additional strong constraint on the long sought-for explicit recipe for the construction of scattering amplitudes. How could tensor networks allow one to generalize the notion of unitary S-matrix in the TGD framework?
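To make the notion concrete, the defining property of a perfect tensor - every balanced bipartition of its indices yields an isometry, up to normalization - can be checked numerically. The sketch below uses the standard 4-index qutrit example behind the AME(4,3) state as an illustration (the text fixes no particular tensor); for unbalanced splits one would check the map from the smaller side, omitted here for brevity:

```python
import numpy as np
from itertools import combinations, product

# A known perfect tensor: T[i,j,k,l] is nonzero iff k = i+j and l = i+2j (mod 3).
# This is the tensor of the absolutely maximally entangled 4-qutrit state AME(4,3).
T = np.zeros((3, 3, 3, 3))
for i, j in product(range(3), repeat=2):
    T[i, j, (i + j) % 3, (i + 2 * j) % 3] = 1 / 3  # unit-norm state

def is_perfect(T):
    """Check that every balanced bipartition of the indices gives an
    isometry up to the overall normalization of the tensor."""
    n = T.ndim
    for rows in combinations(range(n), n // 2):
        cols = tuple(a for a in range(n) if a not in rows)
        M = np.transpose(T, rows + cols).reshape(3 ** (n // 2), -1)
        G = M.conj().T @ M                     # Gram matrix of the columns
        if not np.allclose(G, G[0, 0] * np.eye(G.shape[0])):
            return False
    return True

print(is_perfect(T))  # True: every balanced index split is an isometry
```

By contrast, a generic tensor (say, the all-ones tensor) fails the check: its Gram matrices have large off-diagonal entries, the signature of weak, non-perfect entanglement.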

Objections

It is certainly clear from the beginning that the possibly existing description of S-matrix in terms of tensor networks cannot correspond to the perturbative QFT description in terms of Feynman diagrams.

  1. The tensor network description relates interior and boundary degrees of freedom in holography by an isometry. Now however the unitary matrix has a quite different role. It could correspond to the U-matrix relating zero energy states to each other, or to the S-matrix relating to each other the states at the boundary of CD and at the shifted boundary obtained by scaling. These scalings, shifting the second boundary of CD and increasing the distance between the tips of CD, define the analog of unitary time evolution in ZEO. The U-matrix for transitions associated with the state function reductions at a fixed boundary of CD effectively reduces to the S-matrix, since the other boundary of CD is not affected.

    The only manner in which one could see this as a holography type description would be in terms of ZEO, in which zero energy states are at the boundaries of CD and the U-matrix is a representation for them in terms of holography involving the interior states representing the scattering diagram in a generalized sense.

  2. The appearance of a small gauge coupling constant tells that the entanglement between "states" in state spaces whose coordinates formally correspond to quantum fields is weak - just the opposite of that defined by a perfect tensor. Quite generally, the coupling constant might be the fatal aspect of the vertices preventing a formulation in terms of perfect entanglement.

    One should understand how the coupling constant emerges from this kind of description - or disappears from the standard QFT description. One can think of including the coupling constant in the definition of the gauge potentials: in the TGD framework this is indeed true for induced gauge fields. There is no sensible manner to bring in the classical coupling constants in the classical framework, and the inverse of the Kähler coupling strength appears only as a multiplier of the Kähler action, analogous to a critical temperature.

    More concretely, there are WCW spin degrees of freedom (fermionic degrees of freedom) and WCW orbital degrees of freedom involving functional integral over WCW. Fermionic contribution would not involve coupling constants whereas the functional integral over WCW involving exponential of vacuum functional could give rise to the coupling constants assignable to the vertices in the minimal tree diagram.

  3. The decomposition S=1+iT of the unitary S-matrix, giving unitarity as the condition -i(T-T†)+T†T=0, reflects perturbative thinking. If one has only an isometry instead of a unitary transformation, this decomposition becomes problematic, since T and T†, whose sum appears in the formula, act in different spaces. One should have a generalization of Id as a "trivial" isometry. Alternatively, one should be able to extend the state space H_in by adding a tensor factor mapped trivially by the isometry.
  4. There are 3- and 4-vertices rather than only - say - 3-vertices as in tensor networks. For a non-Abelian Chern-Simons term for a simple Lie group one would have besides the kinetic term only the 3-vertex Tr(A∧ A∧ A), defining the analog of perfect tensor entanglement when interpreted as a co-product involving the 3-D permutation symbol and the structure constants of the Lie algebra. Note also that in the twistor Grassmannian approach the fundamental vertices are 3-vertices. It must however be emphasized that the QFT description emerges from TGD only at the limit when one identifies gauge potentials as sums of induced gauge potentials assignable to the space-time sheets, which are replaced with a single piece of Minkowski space.
  5. The tensor network description does not contain propagators, since the contractions are between perfect tensors. For the description to make sense, propagators must be eliminated. The twistorial factorization of the massless fermion propagator suggests that this might be possible by absorbing the twistors into the vertices.
These reasons make it clear that the proposed idea is just a speculative question. Perhaps the best strategy is to look at this crazy idea from different viewpoints: the overly optimistic view developing the big picture and the approach trying to debunk the idea.
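The distinction between an isometry and a unitary map, which recurs below, is easy to state in matrix terms: a map S from a smaller incoming space into a larger outgoing space can satisfy S†S = Id_in while SS† is only a projector onto the range of S. A minimal numerical illustration (the particular 3×2 matrix is an arbitrary choice of two orthonormal columns):

```python
import numpy as np

# An isometry from a 2-dim "incoming" space into a 3-dim "outgoing" space:
# orthonormal columns, which nevertheless do not span the larger space.
a = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
b = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
S = np.column_stack([a, b])          # real, so the dagger is just the transpose

print(np.allclose(S.T @ S, np.eye(2)))  # True:  S†S = Id_in, norms are preserved
print(np.allclose(S @ S.T, np.eye(3)))  # False: SS† is not Id_out
print(np.allclose(S @ S.T @ S, S))      # True:  SS† acts as a projector onto range(S)
```

This is exactly the structure invoked for the S-matrix below: phase transitions enlarging the state space make S†S = Id_in natural while SS† = Id_out need not even make sense.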

The overly optimistic vision

With these prerequisites one can follow the optimistic strategy and ask how tensor networks could allow one to generalize the notion of unitary S-matrix in the TGD framework.

  1. Tensor networks suggest the replacement of unitary correspondence with the more general notion of Hilbert space isometry. This generalization is very natural in TGD, since one must allow phase transitions increasing the size of the state space, and it is quite possible that the S-matrix represents only an isometry: this would mean that S†S=Id_in holds true but SS†=Id_out does not even make sense. This conforms with the idea that state function reduction sequences at a fixed boundary of causal diamonds, defining conscious entities, give rise to evolution, implying that the size of the state space increases gradually as the system becomes more complex. Note that this gives rise to irreversibility, understandable in terms of NMP (this). It might even be impossible to formally restore unitarity by introducing a formal additional tensor factor to the space of incoming states, if the isometric map of the incoming state space to the outgoing state space is an inclusion of hyperfinite factors.
  2. If the huge generalization of the duality of old-fashioned string models makes sense, the minimal diagram representing the scattering is expected to be a tree diagram with braiding and should allow a representation as a tensor network. The generalization of the tensor network concept to include braiding is trivial in principle: assign to the legs connecting the nodes defined by perfect tensors unitary matrices representing the braiding - here topological QFT allows a realization of the unitary matrix. Besides fermionic degrees of freedom having an interpretation as spin degrees of freedom at the level of the "World of Classical Worlds" (WCW) there are also WCW orbital degrees of freedom. These two kinds of degrees of freedom factorize in the generalized unitarity conditions, and the description seems much simpler in WCW orbital degrees of freedom than in WCW spin degrees of freedom.
  3. Concerning the concrete construction, there are two levels involved, which are analogous to the descriptions in terms of boundary and interior degrees of freedom in holography: the level of fundamental fermions assignable to string world sheets and their boundaries, and the level of physical particles, with particles assigned to sets of partonic 2-surfaces connected by magnetic flux tubes and associated fermionic strings. One could also see the ends of causal diamonds as analogous to boundary degrees of freedom and the space-time surface as interior degrees of freedom.
The description at the level of fundamental fermions corresponds to conformal field theory at string world sheets.
  1. The construction of the analogs of boundary states reduces to the construction of N-point functions for fundamental fermions assignable to the boundaries of string world sheets. These boundaries reside at 3-surfaces at the space-like ends of the space-time surface at CDs and at the light-like 3-surfaces at which the signature of the induced metric changes.
  2. In accordance with holography, the fermionic N-point functions with points at partonic 2-surfaces at the ends of CD are those assignable to a conformal field theory associated with the union of the string world sheets involved. The perfect tensor is assignable to the fundamental 4-fermion scattering, which defines the microscopy for the geometric 3-particle vertices having a twistorial interpretation and also an interpretation as an algebraic operation.

    What is important is that the fundamental fermion modes at string world sheets are labelled by conformal weights and standard model quantum numbers. Neither four-momenta nor color quantum numbers are involved at this level. Instead of a propagator one has just a unitary matrix describing the braiding.

  3. Note that four-momenta emerge in a somewhat mysterious manner in stringy scattering amplitudes and make it possible to interpret the amplitudes at the particle level.
The twistorial and number theoretic constructions should correspond to the particle level construction, and also now the tensor network description might work.
  1. The 3-surfaces are labelled by four-momenta besides other standard model quantum numbers, but the possibility of reducing the diagram to one involving only 3-vertices means that momentum degrees of freedom effectively disappear. In the ordinary twistor approach this would mean allowing only forward scattering, unless one allows massless but complex virtual momenta in twistor diagrams. Also vertices with a larger number of legs are possible by organizing large blocks of vertices into a single effective vertex, and they would allow descriptions analogous to effective QFTs.
  2. It is highly non-trivial that the crucial factorization into perfect tensors at 3-vertices, with unitary braiding matrices associated with the legs connecting them, occurs also now. It allows one to split the inverses of the fermion propagators into sums of products of two parts and absorb the halves into the perfect tensors at the ends of the line. The reason is that the inverse of the massless fermion propagator (also when masslessness is understood in the 8-D sense allowing M4 mass to be non-vanishing) can be expressed as a bilinear of the bi-spinors defining the twistor representing the four-momentum. It seems that this is an absolutely crucial property, and it fails for fermions massive in the 8-D sense.
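The bilinear factorization invoked here is the standard twistor identity: writing a four-momentum as the 2×2 matrix p_{aa'} = p_μ σ^μ one has det(p) = p², so p is massless exactly when the matrix has rank one, i.e. p_{aa'} = λ_a λ̃_{a'}. A numerical sanity check of this standard fact (the spinor chosen is arbitrary):

```python
import numpy as np

# Twistor-style factorization of a massless momentum: the rank-1 matrix
# P = lambda * lambda^dagger automatically encodes a light-like four-momentum,
# since det(p_mu sigma^mu) = p0^2 - |p|^2 = p^2 and a rank-1 matrix has det = 0.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

lam = np.array([1.0 + 0.5j, 2.0 - 1.0j])   # an arbitrary 2-spinor
P = np.outer(lam, lam.conj())              # bispinor matrix p_{aa'} = lam_a lambar_{a'}

# Recover the four-momentum components from P = p0*I + p.sigma.
p0 = np.trace(P).real / 2
px = np.trace(P @ sx).real / 2
py = np.trace(P @ sy).real / 2
pz = np.trace(P @ sz).real / 2
p_squared = p0**2 - px**2 - py**2 - pz**2

print(abs(p_squared) < 1e-9)  # True: massless by construction
```

For a massive momentum det(p) = m² ≠ 0, so no such rank-one factorization exists; this is the numerical face of the remark that the splitting fails for fermions massive in the 8-D sense.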

For the details see the new chapter Holography and Quantum Error Correcting Codes: TGD View or the article with the same title.



Holography and Quantum Error Correcting Codes: TGD View

Strong form of holography is one of the basic tenets of TGD, and I have been working on topological quantum computation in the TGD framework, with the braiding of magnetic flux tubes defining the space-time correlates of topological quantum computer programs. Flux tubes are accompanied by fermionic strings, which can become braided too and would actually represent the braiding at the fundamental level. Also time-like braiding of fermionic lines at light-like 3-surfaces and the braiding of the light-like 3-surfaces themselves is involved, so one can talk about space-like and time-like braidings. These two are not independent, being related by the dance metaphor (think of dancers on the parquet connected by threads to a wall, generating both time-like and space-like braidings). I have proposed that DNA and the lipids at the cell membrane are connected by braided flux tubes, such that the flow of lipids in the lipid layer forming a liquid crystal would induce braiding, storing neural events to memory realized as braiding.

I have a rather limited understanding of error correcting codes. Therefore I was happy to learn that there is a conference in Stanford in which leading gurus of quantum gravity and quantum information sciences are talking about these topics. The first lecture that I listened to was about a possible connection between holography and quantum error correcting codes. The lecturer was Preskill and the title of the talk was "Holographic quantum error-correcting codes: Toy models for the bulk/boundary correspondence" (see this and this). A detailed representation can be found in the article of Preskill et al.

The idea is that the time=constant section of AdS, which is a hyperbolic space allowing tessellations, can define tensor networks. So-called perfect tensors are the building bricks of the tensor networks providing a representation for holography. There are three observations that set bells ringing and actually motivated this article.

  1. Perfect tensors define entanglement which in the TGD framework corresponds to negentropic entanglement, playing a key role in the TGD inspired theory of consciousness and of living matter.
  2. In the TGD framework the hyperbolic tessellations are realized at hyperbolic spaces H3(a) defining light-cone proper time constant hyperboloids of the M4 light-cone.
  3. TGD replaces AdS/CFT correspondence with strong form of holography.
Could one replace AdS/CFT correspondence with TGD version of holography?

One can criticize AdS/CFT based holography because it has Minkowski space only as a rather non-unique conformal boundary resulting from conformal compactification. The situation gets worse as one starts to modify AdS by populating it with blackholes. And even this is not enough: one can imagine anything inside blackhole interiors - wormholes connecting them to other blackholes, anything. An entire mythology of mystic creatures filling the white (or actually black) areas of the map. Post-modernistic sloppiness is the problem of present-day theoretical physics - everything goes - and this leads to inflationary story telling. Minimalism would be badly needed.

AdS/CFT is very probably mathematically correct. The question is whether the underlying conformal symmetry - certainly already huge - is large enough and whether its proper extension could allow one to get rid of the admittedly artificial features of AdS/CFT.

In the TGD framework conformal symmetries are generalized thanks to the metric 2-dimensionality of the light-cone boundary and of light-like 3-surfaces in general. The resulting generalization of the Kac-Moody group to the super-symplectic group replaces a finite-dimensional Lie group with an infinite-dimensional group of symplectic transformations and leads to what I call strong form of holography, in which AdS is replaced with the 4-D space-time surface and Minkowski space with 2-D partonic 2-surfaces and their light-like orbits defining the boundaries between Euclidian and Minkowskian space-time regions: this is very much like ordinary holography. Also the imbedding space M4× CP2, fixed uniquely by twistorial considerations, plays an important role in the holography.

The AdS/CFT realization of holography is therefore not absolutely essential. Even better, its generalization to TGD involves no fictitious boundaries and is free of the problems posed by closed time-like geodesics.

Perfect tensors and tensor networks realized in terms of magnetic body carrying negentropically entangled dark matter

Preskill et al suggest a representation of holography in terms of tensor networks associated with the tessellations of hyperbolic space and utilizing perfect tensors defining what I call negentropic entanglement. Also the Minkowski space light-cone has hyperbolic space as its proper time=constant section (light-cone proper time constant section in TGD) so that the tensor network realization of holography cannot be distinguished from its TGD variant, which does not need AdS at all.
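The defining property of a perfect tensor - every balanced bipartition of its legs yields maximal entanglement - is easy to check numerically. Below is a minimal sketch (my own illustration, not taken from the Preskill et al article) using the standard AME(4,3) four-qutrit state as an example of a four-leg perfect tensor; numpy is assumed.

```python
import numpy as np
from itertools import combinations

d = 3  # qutrit dimension
# AME(4,3) state |Phi> = (1/3) sum_{i,j} |i, j, i+j, i+2j>  (arithmetic mod 3):
# a standard example of a 4-leg perfect tensor.
psi = np.zeros((d,) * 4)
for i in range(d):
    for j in range(d):
        psi[i, j, (i + j) % d, (i + 2 * j) % d] = 1 / 3

# Perfect tensor test: every 2-leg reduced density matrix must equal I/9,
# i.e. every pair of legs is maximally entangled with the complementary pair.
maximally_mixed = []
for pair in combinations(range(4), 2):
    rest = tuple(k for k in range(4) if k not in pair)
    m = np.transpose(psi, pair + rest).reshape(d * d, d * d)
    rho = m @ m.conj().T  # reduced density matrix on the chosen pair of legs
    maximally_mixed.append(np.allclose(rho, np.eye(d * d) / (d * d)))

print("perfect tensor:", all(maximally_mixed))
```

Each such bipartition defines an isometry, which is exactly what makes these tensors usable as building blocks of holographic error correcting codes.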

The interpretational problem is that one also obtains states in which interior local states are non-trivial and are mapped by holography to boundary states: holography in the standard sense should exclude these states. In TGD this problem disappears since the macroscopic surface is replaced with what I call a wormhole throat (something different from the GRT wormhole throat, for which the magnetic flux tube is the TGD counterpart), which can also be microscopic.

Physics of living matter as physics of condensed dark matter at magnetic bodies?

A very attractive idea is that in living matter magnetic flux tube networks defining quantum computational networks provide a realization of tensor networks realizing also the holographic error correction mechanism: negentropic entanglement - perfect tensors - would be the key element! As I have proposed, these flux tube networks would define a kind of central nervous system making it possible for living matter to consciously experience its biological body using the magnetic body.

These networks would also give rise to the counterpart of condensed matter physics of dark matter at the level of the magnetic body: the replacement of lattices based on subgroups of the translation group with an infinite number of tessellations means that this analog of condensed matter physics describes quantum complexity.

I am just a novice in the field of quantum error correction (and probably remain such) but from experience I know that the best manner to learn something new is to tell the story in your own words. Of course, I am not at all sure whether this story helps anyone to grasp the new ideas. In any case, if one has a new vision about the physical world, the situation becomes considerably easier since creative elements enter the story re-telling. I try to explain how these new ideas could be realized in the TGD Universe, bringing in new features relating to the new views about space-time, quantum theory, and living matter and consciousness in relation to quantum physics.

For the details see the new chapter Holography and Quantum Error Correcting Codes: TGD View or the article with the same title.



Reactor antineutrino anomaly as indication for new nuclear physics predicted by TGD

A highly interesting new neutrino anomaly has emerged recently. The anomaly appears in two experiments and is referred to as the reactor antineutrino anomaly. There is a popular article in Symmetry Magazine about the discovery of the anomaly in the Daya Bay experiment. Bee mentioned in the Backreaction blog the Reno experiment exhibiting the same anomaly. What happens is that more antineutrinos with energies around 5 MeV are produced than expected: the anomaly seems to extend to antineutrino energies of about 6.3 MeV.

What makes me happy is that this anomaly might provide new evidence for the TGD based model of atomic nuclei.

  1. In the nuclear string model nucleons are assumed to be bonded to nuclear strings by color magnetic flux tubes with quarks at their ends. These nuclear quarks are different from hadronic quarks and can have different p-adic mass scales. Nuclear d quark is expected to be heavier than nuclear u quark and can decay to nuclear u quark by emission of a virtual W boson decaying to an electron antineutrino pair. These decays are anomalous from the point of view of standard nuclear physics.
  2. The virtual W boson decaying to an electron antineutrino pair in the anomalous region around 5 MeV should have energy which is two times the antineutrino energy since the electron is relativistic. Since the upper boundary of the anomalous region corresponds to about 6.3 MeV antineutrino energy, the W energy should be below the d-u mass difference, which must therefore be around 2× 6.3 = 12.6 MeV. This is a highly valuable bit of information.
To proceed one can use p-adic mass calculations.
  1. The topological mixing of quark generations (characterized by the handle number for partonic 2-surfaces) must make u and d quark masses almost but not quite identical in the lowest p-adic order. In the model for CKM mixing of hadronic quarks they would be identical in this order.
  2. p-Adic mass squared can be expressed as m^2(q)/m_e^2 = 2^(127-k)× (s(q)+X(q))/(s(e)+X(e)), where s is a positive integer and X<1 a parameter characterizing the poorly known second order contribution in p-adic mass calculations. For topologically unmixed u and d quarks one has s(d)=8 and s(u)=5=s(e). p≈ 2^k characterizes the p-adic mass scale of the quark (for p-adic mass calculations see this).
Assume first that there is no breaking of isospin symmetry so that the p-adic mass scales of u and d type nuclear quarks are same.
  1. By using the information about the mass difference m(d)-m(u) < about 12.6 MeV and the above p-adic mass squared formula one can estimate the common p-adic mass scale of the nuclear quarks to be k=113. This is nothing but the p-adic mass scale assigned to nuclei and corresponds to the Gaussian Mersenne M_G,113 = (1+i)^113-1. Very natural!
  2. The maximal mass difference of 6.3 MeV would be obtained for s(d)=8 and s(u)=7: for X(e)=X(u)=X(d)=0 one obtains the mass difference m(d)-m(u)= 5.49 MeV. Interestingly, figure 2 of the Reno article shows a sharp downwards shoulder at 5.5 MeV.

    m(d)-m(u) = 6.3 MeV can be reproduced accurately for X(d)/8^(1/2) - X(u)/7^(1/2) ≈ .01. There are several manners to reproduce the estimate for the d-u mass difference by varying the second order contributions. Mixing with higher quark generations would occur for both u and d quarks. The mass of the nuclear u (d) quark would be (s(q)/5)^(1/2)× 64 MeV, with s(u)=2 (s(d)=8) for m(d)-m(u)=5.5 MeV. This mass is assumed to include the color magnetic energy of the color magnetic body of the quark and would correspond to constituent quark mass rather than current quark mass, which is rather small.

    What is interesting is that the sum of the u and d quark masses, m(d)+m(u)= 144.95 MeV in the absence of topological mixing, is about 4 per cent larger than the charged pion mass m(π+)= 139.57 MeV. In any case, it is difficult to see how this large additional mass could be compensated.
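As a sanity check on the numbers above, the p-adic mass formula can be evaluated directly. The sketch below sets the second order parameters X to zero and rounds the electron mass to 0.5 MeV so that the k=113 mass unit is exactly the 64 MeV used in the text; the lowest order mass difference therefore comes out slightly below the quoted 5.49 MeV.

```python
import math

M_E = 0.5  # MeV: electron mass rounded so that the k=113 unit is 64 MeV

def quark_mass(k, s):
    """p-adic mass estimate m(q) = m_e * 2^((127-k)/2) * sqrt(s(q)/s(e)),
    with s(e) = 5 and the second order parameters X set to zero."""
    return M_E * 2 ** ((127 - k) / 2) * math.sqrt(s / 5)

# Common p-adic scale k=113 (Gaussian Mersenne) for both nuclear quarks
m_d = quark_mass(113, 8)    # s(d) = 8
m_u7 = quark_mass(113, 7)   # s(u) = 7 (topological mixing)
m_u5 = quark_mass(113, 5)   # s(u) = 5 (no mixing, electron-like)

print(f"m(d) = {m_d:.2f} MeV, m(u, s=7) = {m_u7:.2f} MeV, "
      f"m(u, s=5) = {m_u5:.2f} MeV")
print(f"m(d) - m(u) = {m_d - m_u7:.2f} MeV in lowest order")
print(f"m(d) + m(u) = {m_d + m_u5:.2f} MeV vs m(pi+) = 139.57 MeV")
```

The sum m(d)+m(u) = 144.95 MeV for unmixed quarks reproduces the figure quoted above.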

In an alternative scenario, which is in accordance with the original picture, the isospin symmetry would be broken in the sense that the p-adic mass scales of u and d would be different so that the mass difference would correspond to the mass scale of (say) the d quark and could be much smaller.
  1. For k(d)=119, s(d)= 10 (small topological mixing) and s(u)=5 (no topological mixing), k(u)=127 (say), one would have m(d)-m(u)=10.8 MeV so that the antineutrino energy would be below 5.4 MeV, which is near the steep shoulder of the figure. One would have m(d)=11.3 MeV and m(u)=.5 MeV (electron mass) in the absence of topological mixing. Now k(d) is however not prime as the strongest form of the p-adic length scale hypothesis demands. k(u)=127 is only the first guess. Also k(u)=137 corresponding to the atomic length scale can be considered.
  2. The accepted values for hadronic current quark masses deduced from lattice calculations are about m(u)=2 MeV and m(d)=5 MeV, smaller than the values deduced above, suggesting the interpretation of the above mass estimates as nuclear constituent quark masses.
  3. Beta stable configurations would correspond to u-ubar bonds with total energy about 2m(e)= 1 MeV, which is consistent with the general view about the nuclear binding energy scale. Also exotic nuclear excitations containing charged color bonds with quark or antiquark or both transformed to a d type state are predicted. The first guess for the excitation energy of a charged color bond is m(d)-m(u) ≈ 10.8 MeV. Each charged color bond increases the nuclear charge by one unit but proton and neutron numbers remain the same as for the original nucleus: I have called these states exotic nuclei (see this).
  4. The so called leptohadron hypothesis postulates color excitations of leptons having as bound states leptopions with mass equal to 2m(e) in good approximation. An alternative option would replace colored leptons with quarks and assume that the unmixed u quark has electron mass; their production in heavy ion collisions would be natural if they appear as color bonds between nucleons. This would fix s(u) to s(u)=5 (no topological mixing).
  5. X-rays from the Sun have anomalous effects on observed nuclear decay rates with a periodicity of a year and with a magnitude varying like the inverse of the distance from the Sun, with which also the solar X-ray intensity varies: this is known as the GSI anomaly. I have proposed earlier that the energy scale of the excitations of nuclear color bonds is 1-10 keV on the basis of these findings (see this). Nuclei could be in excited states with excitation energies in the 1-10 keV range and the X-ray radiation would affect the fraction of excited states, thus changing also the average decay rates.

    One can try to relate the keV energy scale to the 1 MeV energy scale of beta stable color bonds in terms of fractal scaling. Above it was found that for k=113 a charged color bond would have energy m(d)+m(u)= 144.95 MeV if the quarks were free. Since the actual charged pion mass is m(π+)= 139.57 MeV, the pionic binding energy would be 5.38 MeV, which makes about 3.7 per cent of the total mass. If one applies the same fractal logic to the k=127 color bond with 2m(u)= 1 MeV, one obtains 37 keV, which is a somewhat too high value. The Coulombic interaction is attractive between u and ubar in the k=127 pion with broken isospin symmetry. The naive perturbative estimate is α× m_e ≈ 3.7 keV, reducing the estimate to about 33.5 keV. The fact that π+ has positive Coulombic interaction energy reduces the estimate further but this need not be enough.

    For k(u)=137 (atomic length scale) one would obtain a binding energy scale lower by a factor 1/32, about 1.2 keV. The simplest model for the color bond would be as a harmonic oscillator predicting multiples of 1.2 keV as excitation energies. This would conform with the earlier suggestion that color magnetic flux tubes are loops with a size of even atomic scale. This could also explain the finding that the charge radius of the proton is not quite what it is expected to be.
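The broken-isospin scenario and the fractal scaling down to the keV range can be checked with the same mass formula; again the X parameters are set to zero and m_e is rounded to 0.5 MeV, so the numbers are lowest order estimates only.

```python
import math

M_E = 0.5  # MeV, rounded electron mass as in the text

def quark_mass(k, s):
    """m(q) = m_e * 2^((127-k)/2) * sqrt(s/5); X parameters set to zero."""
    return M_E * 2 ** ((127 - k) / 2) * math.sqrt(s / 5)

# Broken isospin: d at k=119 with s(d)=10, u at k=127 with s(u)=5
m_d = quark_mass(119, 10)   # about 11.3 MeV
m_u = quark_mass(127, 5)    # = m_e = 0.5 MeV
print(f"m(d) = {m_d:.1f} MeV, m(u) = {m_u:.1f} MeV, "
      f"m(d)-m(u) = {m_d - m_u:.1f} MeV")

# Fractal scaling of the pionic binding fraction (~3.7 %) to lighter bonds
frac = (144.95 - 139.57) / 144.95        # binding fraction at k=113
e_bond_127 = frac * 1.0                  # MeV, for a bond with 2m(u) = 1 MeV
e_bond_137 = e_bond_127 / 2 ** 5         # k=127 -> k=137 scales mass by 1/32
print(f"k=127 bond: {1000*e_bond_127:.0f} keV, "
      f"k=137 bond: {1000*e_bond_137:.2f} keV")
```

The outputs reproduce the 10.8 MeV mass difference, the ~37 keV bond scale, and the ~1.2 keV scale quoted above.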

For the background see the chapter Nuclear string model.



New support for TGD inspired model of high Tc superconductivity

Waterloo physicists discover new properties of superconductivity is the title of a popular article about the work of David Hawthorn, Canada Research Chair Michel Gingras, doctoral student Andrew Achkar and post-doctoral student Zhihao Hao published in Science.

There is a dose of hype involved. As a matter of fact, it has been known for years that electrons flow along stripes, kind of highways, in high Tc superconductors: I know this quite well since I have proposed a TGD inspired model explaining this (see this and this)!

The effect is known as nematicity and means that electron orbitals break lattice symmetries and align themselves like a series of rods. Nematicity in long length scales occurs at temperatures below the critical point for superconductivity. In the work mentioned above the cuprate CuO2 is studied. For non-optimal doping the critical temperature for the transition to macroscopic superconductivity is below the maximal critical temperature. Long length scale nematicity is observed in these phases.

In the second article it is however reported that nematicity is in fact preserved above the critical temperature as a local order - at least up to the upper critical temperature - which is not easy to understand in the BCS theory of superconductivity. One can say that the stripes are short and short-lived so that genuine superconductivity cannot take place.

These two observations yield further support for the TGD inspired model of high Tc superconductivity and bio-superconductivity. It is known that antiferromagnetism is essential for the phase transition to superconductivity but the Maxwellian view about electromagnetism and standard quantum theory do not make it easy to understand how. The magnetic flux tube is the first basic new notion provided by TGD. Flux tubes carry dark electrons with scaled up Planck constant heff = n×h: this is the second new notion. This implies a scaling up of quantal length scales and in this manner makes also superconductivity possible.

Magnetic flux tubes in antiferromagnetic materials form short loops. At the upper critical point they however reconnect with some probability to form loops which look locally like parallel flux tubes carrying magnetic fields in opposite directions. The probability of the reverse phase transition is so large that there is a competition. The members of Cooper pairs are at parallel flux tubes and have opposite spins so that the net spin of the pair vanishes: S=0. At the upper critical temperature the average length and lifetime of the flux tube highways are too short for macroscopic superconductivity. At the lower critical temperature all flux tubes reconnect permanently and the average length of the pathways becomes long enough.

This phase transition is mathematically analogous to percolation in which water seeping through a sand layer wets it completely. The competition between the phases between these two temperatures corresponds to quantum criticality in which phase transitions heff/h = n1 ↔ n2 take place in both directions (n1=1 is the most plausible first guess). Earlier I did not fully realize that Zero Energy Ontology provides an elegant description of the situation (see this and this). The reason was that I thought that quantum criticality occurs at a single critical temperature rather than in a temperature interval. Nematicity is detected locally below the upper critical temperature and in long length scales below the lower critical temperature.

During the last years it has become clear that condensed matter physicists are discovering at an increasing pace the physics predicted by TGD. The same happens in biology. It is a pity that particle physicists have missed the train so badly. They are still trying to cook up something from superstring models, which have been dead for years. The first reason is essentially sociological: the fight for funding has led to what might be politely called "aggressive competition". Being the best is not enough and there is a temptation to use tricks which prevent others from showing publicly that they have something interesting to say. ArXiv censorship is an excellent tool in this respect. The second problem is hopelessly narrow specialization and technicalization: a colleague can be defined by telling the algorithms that he is applying. Colleagues do not see physics for particle physics - or even worse, for "physics" of superstrings and branes in 10, 11, or 12 dimensions.

See the chapter Super-Conductivity in Many-Sheeted Space-Time.



Quantization of thermal conductance and quantum thermodynamics

The Finnish research group led by Mikko Möttönen working at Aalto University has made several highly interesting contributions to condensed matter physics during the last years (see the popular articles about condensed matter magnetic monopoles and about tying quantum knots: both contributions are interesting also from the TGD point of view). This morning I read about a new contribution published in Nature.

What has been shown in the recent work is that quantal thermal conductivity is possible for wires of 1 meter length when the heat is transferred by photons. This length is by a factor 10^4 longer than in the earlier experiments. The improvement is amazing and the popular article tells that it could mean a revolution in quantum computation since the heat spoiling the quantum coherence can be carried out of the computer very effectively and in a controlled manner. Quantal thermal conductivity means that the transfer of energy along the wire takes place without dissipation.

To understand what is involved consider first some basic definitions. Thermal conductivity k is defined by the formula j = k∇T, where j is the energy current per unit area and T the temperature. In practice it is convenient to use the thermal power obtained by integrating the heat current over the transversal area of the wire to get the heat current dQ/dt as the analog of the electric current I. The thermal conductance g for a wire allowing approximation as a 1-D structure is given by the conductivity divided by the length of the wire: the power transmitted is P = gΔT, g = k/L.

One can deduce a formula for the conductance at the limit when the wire is ballistic, meaning that no dissipation occurs. For instance, a superconducting wire is a good candidate for this kind of channel and is used in the measurement. The conductance at the limit of quantum limited heat conduction is an integer multiple of the conductance quantum g_0 = π^2k_B^2T/3h: g = n g_0, where n counts the parallel channels. What is remarkable is the quantization and the independence on the length of the wire. Once the heat carriers are in the wire, the heat is transferred since dissipation is not present.

A completely analogous formula holds true for the electric conductance along a ballistic wire: now g would be an integer multiple of g_0 = 2e^2/h. Note that in a 2-D system the quantum Hall conductance (not conductivity) is an integer (or more generally some rational) multiple of σ_0 = e^2/h. The formula in the case of conductance can be "derived" heuristically from the Uncertainty Principle ΔEΔt = h by putting ΔE = eΔV as the difference of Coulomb energy and Δt = e/I = e/(g_0ΔV), giving g_0 = e^2/h.
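Plugging in the constants gives the orders of magnitude involved. A small sketch evaluating both conductance quanta at the electron temperatures quoted below (10-100 mK):

```python
import math

h = 6.62607015e-34    # Planck constant, J*s
kB = 1.380649e-23     # Boltzmann constant, J/K
e = 1.602176634e-19   # elementary charge, C

# Electric conductance quantum for a spin-degenerate ballistic channel
g0_el = 2 * e**2 / h
print(f"electric: g_0 = {g0_el:.4e} S, 1/g_0 = {1/g0_el:.0f} ohm")

# Thermal conductance quantum g_0 = pi^2 kB^2 T / (3h), linear in T
for T in (0.01, 0.1):  # 10 mK and 100 mK
    g0_th = math.pi**2 * kB**2 * T / (3 * h)
    print(f"thermal at {T*1000:.0f} mK: g_0 = {g0_th:.3e} W/K")
```

The electric quantum gives the familiar resistance 1/g_0 ≈ 12.9 kΩ; the thermal quantum is of order 10^-14-10^-13 W/K in this temperature range, which shows how tiny the heat flows in question are.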

The essential prerequisite for quantal conduction is that the length of the wire is much shorter than the wavelength assignable to the carrier of heat or of thermal energy: λ >> L. It is interesting to find how well this condition is satisfied in the recent case. The wavelength of the photons involved with the transfer should be much longer than 1 meter. An order of magnitude for the energy of the photons involved, and thus for the frequency and wavelength, can be deduced from the thermal energy of photons in the system. The electron temperatures considered are in the range of 10-100 mK roughly. Kelvin corresponds to 10^-4 eV (this is more or less all that I learned in the thermodynamics course in my student days) and eV corresponds to 1.24 microns. This temperature range roughly corresponds to the thermal energy range of 10^-6-10^-5 eV. The wavelength corresponding to the maximal intensity of blackbody radiation is in the range of roughly 3-30 centimeters. One can of course ask whether the condition λ >> L = 1 m is consistent with this. A specialist would be needed to answer this question. Note that the gap energy .45 meV of the superconductor defines the energy scale for the Josephson radiation generated by the superconductor: this energy would correspond to about 2 mm wavelength, well below 1 meter. This energy does not correspond to the energy scale of thermal photons.
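The wavelength estimate above can be checked with Wien's displacement law, λ_max = b/T with b ≈ 2.9 mm·K, for the 10-100 mK range:

```python
b = 2.8978e-3      # m*K, Wien displacement constant: lambda_max = b/T
kB_eV = 8.617e-5   # Boltzmann constant in eV/K

for T in (0.01, 0.1):  # 10 mK and 100 mK
    lam = b / T
    print(f"T = {T*1000:.0f} mK: kT = {kB_eV*T:.1e} eV, "
          f"lambda_max = {100*lam:.1f} cm")
```

This confirms the centimeter scale of the blackbody peak wavelength, well below the 1 m wire length, so the condition λ >> L is indeed non-trivial for the thermal photons themselves.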

I am of course unable to say anything interesting about the experiment itself but cannot avoid mentioning the hierarchy of Planck constants. If one has E = h_eff×f, h_eff = n×h instead of E = hf, the condition λ >> L can be easily satisfied. This would be the case for superconducting magnetic flux tubes in the TGD Universe and maybe it could be true also for photons, if they are dark and travel along them. One can even consider the possibility that quantal heat conductivity is possible over much longer wire lengths than 1 m. Showing this to be the case would provide strong support for the hierarchy of Planck constants.
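To make the condition λ >> L concrete: at fixed photon energy E = h_eff f = n·h·f the frequency drops by 1/n, so the wavelength grows by the factor n. A toy estimate of the minimal n for a given wire length follows; the ordinary wavelength scale of 3 cm and the reading of ">>" as a factor of 10 are my assumptions.

```python
import math

lam0 = 0.03   # m: ordinary thermal wavelength scale at ~100 mK (assumption)
margin = 10   # read "lambda >> L" as lambda >= 10*L (my convention)

# Dark wavelength is n*lam0, so lambda >> L requires n >= margin*L/lam0
for L in (1.0, 100.0):   # wire lengths in meters
    n_min = math.ceil(margin * L / lam0)
    print(f"L = {L:5.0f} m requires h_eff/h = n >= {n_min}")
```

Even modest values of n would thus comfortably satisfy the condition for the 1 m wire, and quantal heat conduction over much longer wires would require correspondingly larger n.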

There are several interesting questions to be pondered in TGD framework. Could one identify classical space-time correlates for the quantization of conductance? Could one understand how classical thermodynamics differs from quantum thermodynamics? What quantum thermodynamics could actually mean? There are several rather obvious ideas.

  1. Space-time surfaces are preferred extremals of Kähler action satisfying extremely powerful conditions boiling down to strong form of holography stating that string world sheets and partonic 2-surfaces basically dictate the classical space-time dynamics. Fermions are localized to string world sheets from the condition that electromagnetic charge is well-defined for spinor modes (classical W fields must vanish at the support of spinor modes).

    This picture is blurred as one goes to GRT-standard model limit of TGD and space-time sheets are lumped together to form a region of Minkowski space with metric which deviates from Minkowski metric by the sum of the deviations of the induced metrics from Minkowski metric. Also gauge potentials are defined as sums of induced gauge potentials. Classical thermodynamics would naturally correspond to this limit. Obviously the extreme simplicity of single sheeted dynamics is lost.

  2. Magnetic flux tubes to which one can assign space-like fermionic strings connecting partonic 2-surfaces are excellent candidates for the space-time correlates of wires and at the fundamental level the 1-dimensionality of wires is exact notion at the level of fermions. The quantization of conductance would be universal phenomenon blurred by the GRT-QFT approximation.

    The conductance for single magnetic flux tube would be the conductance quantum determined by preferred extremal property, by the boundary conditions coded by the electric voltage for electric conduction and by the temperatures for heat conduction. The quantization of conductances could be understood in terms of preferred extremal property. m-multiple of conductance would correspond to m flux tubes defining parallel wires. One should check whether also fractional conductances coming as rational m/n are possible as in the case of fractional quantum Hall effect and assignable to the hierarchy of Planck constants heff=n × h as the proportionality of quantum of conductance to 1/h suggests.

  3. One can go even further and ask whether the notion of temperature could make sense at quantum level. Quantum TGD can be regarded formally as a "complex square root" of thermodynamics. Single particle wave functions in Zero Energy Ontology (ZEO) can be regarded formally as "complex square roots" of thermodynamical partition functions and the analog of thermodynamical ensemble is realized by modulus squared of single particle wave function.

    In particular, p-adic thermodynamics used for mass calculations can be replaced with its "complex square root" and the p-adic temperature associated with mass squared (rather than energy) is quantized and has spectrum Tp= log(p)/n using suitable unit for mass squared (see this).

    Whether also ordinary thermodynamical ensembles have square roots at the single particle level (this would mean thermodynamical holography with members of the ensemble representing the ensemble!) is not clear. I have considered the possibility that the cell membrane as a generalized Josephson junction is describable using a square root of thermodynamics (see this). In ZEO this would allow one to describe, as zero energy states, transitions in which the initial and final states of the event have different temperatures.

    The square root of thermodynamics might also allow one to make sense of the idea of entropic gravity, which as such is in conflict with experimental facts (see this).

See the article Quantization of thermal conductance and quantum thermodynamics or the chapter Criticality and dark matter.



Could cold fusion solve some problems of the standard view about nucleosynthesis?

The theory of nucleosynthesis involves several uncertainties and it is interesting to see whether interstellar cold fusion could provide mechanisms allowing improved understanding of the observed abundances. There are several problems: D abundance is too low unless one assumes the presence of dark matter/energy during Big Bang nucleosynthesis (BBN); there are two lithium anomalies; there is evidence for the synthesis of boron during BBN; for large redshifts the observed metallic abundances are lower than predicted. The observed abundances of light nuclei are higher than predicted and require so called cosmic ray spallation producing them via nuclear fission induced by cosmic rays. The understanding of the abundances of nuclei heavier than Fe requires supernova nucleosynthesis: the problem is that supernova 1987A did not provide support for the r-process.

The idea of dark cold fusion could be taken more seriously if it helped to improve the recent view about nucleosynthesis. In an additional section to the article Cold fusion again I try to develop a systematic view about how cold fusion could help with these problems. I take as a starting point the earlier model for cold dark fusion discussed in the above link and also in blog postings: see this, this, and this. This model could be seen as a generalization of supernova nucleosynthesis in which a dark variant of neutron and proton capture gives rise to more massive isotopes. Also a variant allowing the capture of a dark alpha particle can be considered. Besides this a pure standard physics modification of Big Bang nucleosynthesis is proposed based on the resonant alpha capture of 7Li allowing to produce more boron and perhaps explain the second Li anomaly.

See the article Cold fusion again or the chapter Cold fusion again.



Bacteria behave like spin system: Why?

In Physorg there was an interesting article titled Bacteria streaming through a lattice behave like electrons in a magnetic material. The popular article tells about the article by Dunkel et al with title Ferromagnetic and antiferromagnetic order in bacterial vortex lattices. The following summarizes what has been studied and observed.

  1. The researchers have studied a square lattice of about 100 wells with well radius below 50 microns and well depth about 18 microns. The wells are connected by thin channels. Also a triangular lattice has been studied.
  2. Below a critical radius about 35 microns an ordered flow is generated. The flow involves interior flow and edge flow in opposite direction consisting of single bacterium layer. One can understand this from angular momentum conservation. The coherence of this flow is however surprising. If one believes that each bacterium in principle chooses its swimming direction, one must understand what forces bacteria to select the same swimming direction.
  3. Below a critical channel radius of about d=4 microns the flow directions in the neighboring wells are opposite for the square lattice. One has a superposition of the lattice and its dual with opposite flow directions. In the case of the triangular lattice an analogous situation is encountered. In this situation there is no flow between the wells but there is an interaction. The minimization of dissipative losses requires minimization of velocity gradients inside the channels, made possible by the same local flow direction for the edge currents of neighboring wells.
  4. Above the critical radius the flow changes its character. The flows synchronize and the interior flows rotate in the same direction, as do the edge flows, which occur also between the neighboring wells and give rise to closed flows around the boundaries of square like regions between the wells, having a larger scale. This flow pattern is consistent with angular momentum conservation: the angular momenta of the lattice and its dual cancel each other.
  5. The phase transition is analogous to that from antiferromagnetism to ferromagnetism. The total angular momenta of bacteria, their colonies, are analogous to spins. The situation can be modelled as 2-D Ising model consisting of lattice of spins with nearest neighbor interactions. Usually the spins are assigned with electrons but now they are assigned with bacteria.
This raises interesting questions. Bacteria swim by using flagellae. They can decide the swimming direction and control it by controlling the flagellae. Bacteria are living organisms and have a free will. Why would a bacterium colony behave like a quantal many-spin system? What happens when the swimming direction becomes the same for the bacteria inside a single well: does the colony become an entity with collective consciousness and do bacteria obey "social pressure"? Does this happen also for the colony formed by these colonies in the transition to the ferromagnetism like state?
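The nearest-neighbor Ising analogy used in the article can be illustrated with a few lines of code: for a ferromagnetic coupling (J>0, the analog of wide channels) the uniform vortex configuration minimizes the energy, while for an antiferromagnetic coupling (J<0, narrow channels) the checkerboard configuration wins. The mapping of channel width to the sign of J is of course only the analogy suggested above, not a derivation.

```python
import numpy as np

def ising_energy(s, J):
    """E = -J * sum over nearest-neighbour bonds s_i*s_j on a square
    lattice with periodic boundaries (each bond counted once)."""
    return -J * np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1)))

n = 10
uniform = np.ones((n, n), dtype=int)           # all vortices co-rotating
ix, iy = np.indices((n, n))
checker = np.where((ix + iy) % 2 == 0, 1, -1)  # alternating vortex directions

for J, regime in ((+1.0, "wide channels (ferromagnetic)"),
                  (-1.0, "narrow channels (antiferromagnetic)")):
    e_u, e_c = ising_energy(uniform, J), ising_energy(checker, J)
    ground = "uniform" if e_u < e_c else "checkerboard"
    print(f"{regime}: E_uniform = {e_u:.0f}, E_checker = {e_c:.0f} "
          f"-> ground state: {ground}")
```

The sign flip of the effective coupling is thus enough to switch the ground state between the two observed flow patterns.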

If one takes TGD inspired quantum biology as starting point, one can represent more concrete questions and possible answers to them.

  1. Magnetic body (MB) controls the biological body (BB) be it organism or part of it. MB contains dark matter as cyclotron Bose-Einstein condensates of bosonic ions. Pairs of parallel flux tubes could also contain members of Cooper pairs whose spin depends on whether the magnetic fields at flux tubes are parallel or antiparallel.
  2. What could be the mechanism of control? MB is assumed to send dark photon signals to the biological body to control it and an attractive idea is that the control relies on angular momentum conservation. Since the angular momentum transfer involved is due to a phase transition analogous to the change of the direction of magnetization or the generation of magnetization, the angular momentum transfer is large irrespective of the value of the unit of angular momentum for the dark photon (see the discussion below). This large angular momentum could be transformed to angular momentum of ordinary matter and in the recent case be responsible for generating the rotational motion of a bacterium or a larger unit.

    The transfer of dark photons induced by a phase transition changing the direction of dark magnetization might thus induce a large transfer of angular momentum to BB and generate macroscopic rotation. If this were the case the rotational state of dark MB of bacterium would serve as a template for bacterium.

    The bacterium colony associated with the well below critical size would correspond to super-organism having MB whose rotational state could serve as template for the bacterial MBs in turn serving as a similar template for the bacteria.

  3. If the net angular momenta of MB and corresponding BB (bacterium, well colony, colony of these) vanish separately, the model is consistent with the model of the article in which local considerations determine the rotational directions. In this case the MBs of well colonies would behave like spins with nearest neighbor interactions.

    One can also consider the possibility that at quantum criticality long range quantum fluctuations occur and the local equilibrium conditions do not hold true. Even more, the net angular momenta of MB and BB would cancel each other but would not necessarily vanish separately. This would imply apparent non-conservation of angular momentum at the level of the bacterium colony at criticality and might allow one to find experimental support for the notion of magnetic body. The proof of MB carrying dark matter as a concept would be very much like that of the neutrino, the existence of which was deduced from apparent energy non-conservation in beta decays.

The model has a problem to worry about. I am still not quite sure whether heff/h=n means that the unit of spin is scaled up by n or that a fractionization of angular momentum by 1/n for a single sheet of the associated n-fold covering of the space-time surface takes place. The control mechanism based on angular momentum conservation could however be at work in both cases. The option assuming fractionization seems to be the realistic one and only this will be considered in the following. The reader can ponder the option assuming a scaled up unit of angular momentum (the scaling up of the angular momentum of the dark photon is not in coherence with the assumption that the dark photon has the same four-momentum as the ordinary photon to which it should be able to transform).
  1. Consider first the simplest variant of the effective fractionization of quantum numbers. If one has an n-fold covering singular at the boundaries of CD, then spin fractionization can be considered such that one has effectively n spin-1/n photons - one per sheet - and the net spin is just the standard spin. This picture fits with the vision that the n-fold covering means that one must make n full 2π turns before returning to the original point at the space-time sheet: this allows at the space-time surface wave functions with fractional spin, which would be many-valued in Minkowski space. A similar fractionization would occur for other quantum numbers such as four-momentum so that the net four-momentum would not change. These building bricks of the dark photon, analogous to a Bose-Einstein condensate, have frequencies scaled down by the factor 1/n and wavelengths scaled up correspondingly.

    In this case the direct decay to a single ordinary photon interpreted as a biophoton is allowed by conservation laws. Of course, also decays to several ordinary photons are possible. The decay to a bunch of n ordinary photons, each with momentum 1/n times that of the dark photon, is possible if the spins of the ordinary photons sum up to the spin of the dark photon.

    The total spin angular momentum liberated from the cyclotron Bose-Einstein condensate could be transferred to the spins of ordinary particles, say protons or ions, for which the natural scale of orbital angular momentum is much larger (proportional to the rest energy). A simple order of magnitude estimate for the orbital angular momentum with respect to the symmetry axis of a possibly helical magnetic flux tube shows that in this case the spin could be transformed to angular momentum in the scale of the organism and to the motion of the organism itself.

    Note that dark photon could also decay to a bunch of ordinary photons with momentum scaled down by 1/n since the spins of the photons can sum up to spin 1.

  2. A many-sheeted analog of second quantization generalizes the above picture. The n space-time sheets can be labelled by an integer m=1,...,n defining an analog of a discrete position variable. One can second quantize the fundamental fermions in this discrete space so that one has not only the ordinary many-fermion states with N=0 or 1 fermions in a given mode but also states with fractionization of fermion number and other quantum numbers by q=m/n<1 in a given mode. This would induce a fractionization of bosons identified as fractional many-fermion states.

    A particle with fractional spin cannot decay directly to an ordinary particle unless one has m=n: this corresponds to the first option. Fractional particles characterized by m/n and (n-m)/n can however fuse to an ordinary particle. An attractive additional hypothesis is that the net quantum numbers are integer valued.

    I have discussed the possibility of molecular sex: the opposite molecular sexes would have fractional charges summing up to ordinary charges. If magnetic bodies with opposite molecular sexes are paired, they have ordinary total quantum numbers and can control ordinary matter by the proposed mechanism based on conservation of angular momentum (or some other charges). Dark matter would serve as a template for ordinary matter and dark phase transitions would induce those of visible matter. The proposal that DNA, RNA, tRNA, and amino-acids are accompanied by dark proton sequences (or more general dark nuclei) could realize this picture. DNA double strand could be seen as an outcome of a molecular marriage in this framework! At a higher level, brain hemispheres might be seen as a dark matter marriage. This picture can also be seen as the emergence of symbols and of dynamics based on symbol sequences at the molecular level, with molecular marriage making possible very precise selection rules.
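The bookkeeping in the two options above reduces to simple fraction arithmetic. A minimal sketch (the value n=7 is illustrative) checking that n quanta of spin 1/n and momentum p/n sum to the quantum numbers of a single ordinary photon, and that fractional charges m/n and (n-m)/n fuse to an integer:

```python
from fractions import Fraction

def fractional_quanta(n, spin=1, momentum=1):
    """Option 1: one spin-1/n, momentum-p/n quantum per sheet of the n-fold covering."""
    return [(Fraction(spin, n), Fraction(momentum, n)) for _ in range(n)]

n = 7
quanta = fractional_quanta(n)
total_spin = sum(s for s, _ in quanta)
total_p = sum(p for _, p in quanta)
# Net quantum numbers are those of an ordinary photon, so a direct
# decay to a single ordinary photon respects the conservation laws.
assert total_spin == 1 and total_p == 1

# Option 2: fractional charges m/n and (n-m)/n fuse to an ordinary (integer) charge.
m = 3
assert Fraction(m, n) + Fraction(n - m, n) == 1
```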

See the article Bacteria behave like spin systems: Why? or the chapter Criticality and dark matter.



Quantum phase transitions and 4-D spin glass energy landscape

TGD has led to two descriptions for quantum criticality. The first one relies on the notion of 4-D spin glass degeneracy and emerged already around 1990 when I discovered the unique properties of Kähler action. Second description relies on quantum phases and quantum phase transitions and I have tried to explain my understanding about it above. The attempt to understand how these two approaches relate to each other might provide additional insights.

  1. Vacuum degeneracy of Kähler action is certainly a key feature of TGD and distinguishes it from all classical field theories. Small deformations of the vacua, probably induced by the gluing of magnetic flux tubes (primordially cosmic strings) to these vacuum space-time sheets, would give rise to a TGD Universe analogous to a 4-D spin glass. The challenge is to relate this description to the vision provided by quantum phases and quantum phase transitions.
  2. In condensed matter physics one speaks of a fractal spin glass energy landscape with free energy minima inside free energy minima. This landscape obeys ultrametric topology: p-adic topologies are ultrametric, and this was one of the original motivations for the idea that p-adic physics might be relevant for TGD. Free energy is replaced with the sum of the Kähler function (Kähler action of Euclidian space-time regions) and the imaginary Kähler action from Minkowskian space-time regions.
  3. In the fractal spin glass energy landscape there is an infinite number of minima of free energy. The presence of several degenerate minima leads to what is known as frustration. In the TGD framework all the vacuum extremals have the same vanishing action so that there is infinite degeneracy and infinite frustration (also created by the attempt to understand what this might imply physically!). The diffeomorphisms of M4 and the symplectic transformations of CP2 map vacuum extremals to each other and act therefore as gauge symmetries. Symplectic transformations indeed act as U(1) gauge transformations. Besides this, each Lagrangian sub-manifold of CP2 defines its own space of vacuum extremals as an orbit of this symplectic group.

    As one deforms vacuum extremals slightly to non-vacuum extremals, classical gravitational energy becomes non-vanishing and Kähler action does not vanish anymore, and the above gauge symmetries become dynamical symmetries. This picture serves as a useful guideline in the attempts at physical interpretation. In TGD inspired quantum biology gravitation indeed plays a fundamental role (gravitational Planck constant hgr).

  4. Can one identify a quantum counterpart of the degeneracy of extremals? The notion of negentropic entanglement (NE) is a cornerstone of TGD. In particular, for maximal negentropic entanglement the density matrix is proportional to the unit matrix so that the states are degenerate in the same sense as states with the same energy in thermodynamics. Kähler function is now the analog of energy: hence the degeneracy of the density matrix could correspond to that for the Kähler function. More general NE corresponds to algebraic entanglement probabilities and allows one to identify a unique basis of eigenstates of the density matrix. NE is favored by NMP and serves as a key element of the TGD inspired theory of consciousness.

    In standard physics degeneracy of the density matrix is an extremely rare phenomenon, as is entanglement with algebraic entanglement probabilities. These properties are also extremely unstable. TGD must be somehow special. The vacuum degeneracy of Kähler action indeed distinguishes TGD from quantum field theories, and an attractive idea is that the degeneracy associated with NE relates to that for the extremals of Kähler action. This is not enough however: NMP is needed to stabilize NE, and this occurs only for dark matter (heff/h=n>1 equal to the dimension of the density matrix defining NE).

    The strong form of holography takes this idea further: 2-D string world sheets and partonic 2-surfaces are labelled by parameters which belong to an algebraic extension of rationals. This effectively replaces the infinite-D WCW with discrete spaces characterized by these extensions and allows one to unify real and p-adic physics to adelic physics. This hierarchy of algebraic extensions would be behind the various hierarchies of quantum TGD, also the hierarchy of deformations of vacuum extremals.

  5. In a 3-D spin glass different phases assignable to the bottoms of potential wells in the fractal spin energy landscape compete. In the 4-D spin glass energy landscape of TGD also time evolutions compete, and degeneracy and frustration characterize also time evolutions. In biology the notions of function and behavior correspond to temporal patterns: functions and behaviors are fighting for survival rather than only organisms.

    At the quantum level the temporal patterns would correspond to phase transitions, perhaps induced by quantum phase transitions for dark matter at the level of magnetic bodies. Phase transitions changing the value of heff would define correlates for "behaviors" and the above proposed description could apply to them.

  6. Conformal symmetries (the shorthand "conformal" is understood in a very general sense here) make it possible to understand not only quantum phases but also quantum phase transitions at the fundamental level, and "transitons" transforming according to representations of the Kac-Moody group or gauge group assignable to the inclusion of hyperfinite factors characterized by the integer m in heff(f)= m× heff(i) could allow a precise quantitative description. Fractal symmetry breaking leads to a conformal sub-algebra isomorphic with the original one.

    What could this symmetry breaking correspond to in the spin energy landscape? The phase transition increasing the dynamical symmetry leads to the bottom of a smaller well in the spin energy landscape. The conformal gauge symmetry is reduced and the dynamical symmetry increased, and the system becomes more critical. Indeed, the smaller the potential well, the more prone the system is to being kicked outside the well by quantum fluctuations. The smaller the well, the larger the value of heff. At the space-time level this corresponds to a longer scale. At the level of WCW (4-D spin energy landscape) this corresponds to a shorter scale.
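The ultrametricity of p-adic topologies mentioned in item 2 means that the p-adic norm satisfies the strong triangle inequality |x+y|_p ≤ max(|x|_p, |y|_p). A minimal sketch verifying this numerically for p=3 (the choice of prime and the range of values checked are arbitrary):

```python
from fractions import Fraction

def padic_norm(x, p):
    """p-adic norm |x|_p = p**(-k), where p**k is the largest power of p dividing x."""
    if x == 0:
        return 0
    x = Fraction(x)
    k = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:   # powers of p in the numerator decrease the norm
        num //= p
        k += 1
    while den % p == 0:   # powers of p in the denominator increase it
        den //= p
        k -= 1
    return Fraction(1, p) ** k if k >= 0 else p ** (-k)

# Strong triangle inequality, the defining property of an ultrametric.
p = 3
for x in range(1, 50):
    for y in range(1, 50):
        assert padic_norm(x + y, p) <= max(padic_norm(x, p), padic_norm(y, p))
```

In an ultrametric space every triangle is isosceles, which is why nested free energy minima (balls inside balls) are naturally described by p-adic distances.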

For background see the article What's new in TGD inspired view about phase transitions? or the chapter Criticality and dark matter.



What's New In TGD Inspired View About Phase Transitions?

The comment of Ulla mentioned the Kosterlitz-Thouless phase transition and its infinite order. I am not a condensed matter physicist so that my knowledge and understanding are rather rudimentary and I had to go to Wikipedia. I realized that I have not paid attention to the classification of the types of phase transitions while speaking of quantum criticality. Also the relationship of the ZEO inspired description of phase transitions to that of standard positive energy ontology has remained poorly understood. In the following I try to represent various TGD inspired visions about phase transitions and criticality in an organized manner and relate them to the standard description.

About thermal and quantum phase transitions

It is good to begin with something concrete. The Wikipedia article lists examples of different types of phase transitions. These phase transitions are thermodynamical.

  1. In first order thermodynamical phase transitions heat is absorbed and the phases appear as mixed. Melting of ice and boiling of water represent the basic examples. Breaking of continuous translation symmetry occurs in crystallization and the symmetry is smaller at low temperature. One speaks of spontaneous symmetry breaking: thermodynamical fluctuations are not able to destroy the configuration breaking the symmetry.
  2. Second order phase transitions are also called continuous and they also break continuous symmetries. Susceptibility diverges, the correlation range is infinite, and power-law behaviour applies to correlations. Ferromagnetic, super-conducting, and superfluid transitions are examples. Conformal field theory predicts power-law behavior and infinite correlation length. Infinite susceptibility means that the system is very sensitive to external perturbations. A first order phase transition becomes a second order transition at the critical point. Here the reduction by strong form of holography might make sense for high Tc superconductors at least (they are effectively 2-D).
  3. Infinite order phase transitions are also possible. The Kosterlitz-Thouless phase transition occurring in 2-D systems allowing conformal symmetries represents this kind of transition. These phase transitions are continuous but do not break continuous symmetries as usual.
  4. There are also liquid-glass phase transitions. Their existence is hypothetical. The final state depends on the history of the transition. The glass state itself is more like an ongoing phase transition rather than a phase.
These phase transitions are thermal and driven by thermal fluctuations. Also quantum phase transitions are possible.
  1. According to the standard definition they are possible only at zero temperature and driven by quantum fluctuations. For instance, gauge coupling strength would be analogous to quantum temperature. This is a natural definition in standard ontology, in which thermodynamics and quantum theory are descriptions at different levels.

    Quantum TGD can be seen as a square root of thermodynamics in a well-defined sense, and it is possible to speak about quantum phase transitions also at finite temperature if one can identify a temperature-like parameter characterizing single particle states as a kind of holographic representation of the ordinary temperature.

  2. The traces of quantum phase transitions are argued to be visible also at finite temperatures if the energy gap ℏ×ω is larger than the thermal energy: ℏω>T. In the TGD framework Planck constant has a spectrum heff/h= n and allows very large values. This allows quantum phase transitions even at room temperature, and TGD inspired quantum biology relies crucially on this. What is of special interest is that also ordinary thermal phase transitions might be accompanied by quantum phase transitions occurring at the level of the magnetic body and perhaps even inducing the ordinary thermal phase transition.
  3. Quantum critical phase transitions occur at the critical point and are second order phase transitions, so that susceptibility diverges and the system is highly sensitive to perturbations, and so in a wide range around the critical temperature (zero in the standard theory). Long range fluctuations are generated, and this conforms with the TGD vision about the role of large heff phases and generalized conformal symmetry, which also implies that the region around criticality is wide (exponentially decaying correlations replaced with power law correlations).
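To get a feeling for the condition ℏω > T, one can estimate the cyclotron energy of a Ca²⁺ ion in the endogenous magnetic field B_end = 0.2 Gauss appearing in TGD inspired estimates; the particular ion, field value, and temperature are illustrative assumptions, not part of the argument above:

```python
import math

h  = 6.62607015e-34      # Planck constant, J s
kB = 1.380649e-23        # Boltzmann constant, J/K
e  = 1.602176634e-19     # elementary charge, C
u  = 1.66053906660e-27   # atomic mass unit, kg

B = 2e-5                 # B_end = 0.2 Gauss (illustrative), in tesla
m = 40.078 * u           # Ca2+ ion mass
q = 2 * e                # Ca2+ charge

f = q * B / (2 * math.pi * m)   # cyclotron frequency, comes out ~15 Hz
E_photon = h * f                # ordinary-photon energy at this frequency
E_thermal = kB * 310            # thermal energy at physiological temperature

# heff/h = n scales the cyclotron energy to n*h*f; the minimal n
# lifting it above thermal energy:
n_min = E_thermal / E_photon
print(f"f = {f:.1f} Hz, n_min ~ {n_min:.1e}")
```

The ordinary photon energy at ~15 Hz lies about eleven orders of magnitude below thermal energy at 310 K, so the gap condition alone already requires n above roughly 10^11, in the same ballpark as scaling cyclotron energies up to biophoton energies.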

Some examples of quantum phase transition like phenomena in TGD framework

TGD suggests some examples of quantum phase transition like phenomena.

  1. A Bose-Einstein (BE) condensate consisting of bosons in the same state would represent a typical quantum phase. I have been talking a lot about cyclotron BE condensates at dark magnetic flux tubes. The bosonic particles would be in the same cyclotron state. One can consider also the analogs of Cooper pairs with members at the flux tubes of a pair of parallel flux tubes with magnetic fields in the same or opposite direction, one member at each tube, the pair having spin 1 or 0. This would give rise to high Tc superconductivity.
  2. One natural mechanism of quantum phase transition would be BE condensation to a new single particle state. The rate for the addition of a new particle to the condensate is proportional to N+1 and that for the disappearance of a particle from it to N, where N is the number of particles in the condensate. The net rate for BE condensation is the difference of these and non-vanishing.

    Quantum fluctuations induce phase transition between states of this condensate at criticality. For instance, cyclotron condensate could make a spontaneous phase transition to a lower energy state by a change of cyclotron energy state and energy would be emitted as a dark cyclotron radiation. This kind of dark photon radiation could in turn induce cyclotron transition to a higher cyclotron state at some other flux tube. If NMP holds true it could pose restrictions for the occurrence of transitions since one expects that negentropy is reduced. The transitions should involve negentropy transfer from the system.

    The irradiation of a cyclotron BE condensate with some cyclotron frequency could explain a cyclotron phase transition increasing the energy of the cyclotron state. This kind of transition could explain the effects of ELF em fields on the vertebrate brain in terms of cyclotron phase transitions, perhaps serving as a universal communication and control mechanism in the communications of the magnetic body with the biological body and other magnetic bodies. The perturbation of microtubules by an oscillating voltage (see this) has been reported by the group of Bandyopadhyay to induce what I have interpreted as a quantum phase transition (see this).

    External energy feed is essential and dark cyclotron radiation or generalized Josephson radiation from cell membrane acting as generalized Josephson junction and propagating along flux tubes could provide it. Cyclotron energy is scaled up by heff/h and would be of the order of biophoton energy in TGD inspired model of living matter and considerably above thermal energy at physiological temperature.

  3. Also quantum phase transitions affecting the value of heff are possible. When heff is reduced and the frequency is not changed, energy is liberated and the transition proceeds without external energy feed (NMP might pose restrictions). Another option is to increase heff and reduce the frequency in such a manner that single particle energies are not changed. One can imagine many other possibilities, since also the p-adic length scale leading to a change of mass scale could change. A possible biological application is to the problem of understanding how biomolecules find each other in the molecular soup inside the cell so that catalytic reactions can proceed. Magnetic flux tube pairs connecting the biomolecules would be generated in the reconnection of U-shaped tentacle-like flux tubes associated with the reactants, and the reduction of heff for the flux tube pair would contract it and force the biomolecules near each other.
  4. The model for cold fusion relies on a process which is analogous to a quantum phase transition. Protons from the exclusion zones (EZs) of Pollack are transferred to dark protons at magnetic flux tubes outside the EZ, and part of the dark proton sequences transform by dark weak decays and dark weak boson exchanges to neutrons so that beta stable dark nuclei are obtained with binding energy much smaller than the nuclear binding energy. This could be seen as dark nuclear fusion and the quantum analog of the ordinary thermal nuclear fusion. The transformation of dark nuclei to ordinary nuclei by a heff reducing phase transition would liberate huge energy if allowed by NMP and explain the reported biofusion.
  5. Energetics is clearly an important factor (in ordinary phase transitions for an open system thermal energy feed is present). The above considerations assume that ordinary positive energy ontology effectively applies. ZEO allows one to consider a more science fictive possibility. In ZEO energy is conserved when one considers a single zero energy state as a time evolution of a positive energy state. If a single particle realizes the square root of thermodynamics, one has a superposition of zero energy states for which single particle states appear as pairs of positive and negative energy states with various energies: each state in the superposition respects energy conservation. In this kind of situation one can consider the possibility that the temperature increases and the average single particle energy increases. In positive energy ontology this is impossible without energy feed but in ZEO it is not excluded. I do not understand the situation well enough to decide whether some condition could prevent this. Note however that in TGD inspired cosmology energy conservation holds only in a given scale (given CD) and apparent energy non-conservation would result by this kind of mechanism.
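The arithmetic behind items 2 and 3 above can be sketched in a few lines (all numerical values purely illustrative): the net condensation rate is the difference of an (N+1)-proportional gain and an N-proportional loss, and a change of heff = n×h by an integer factor leaves the single particle energy E = n·h·f unchanged only if the frequency is scaled by the inverse factor:

```python
# Item 2: with N particles in the condensate, the addition rate is
# proportional to N+1 and the loss rate to N (a common illustrative
# rate constant r assumed), so the net condensation rate stays positive.
r = 0.1
for N in (0, 10, 1000):
    net = r * (N + 1) - r * N
    assert abs(net - r) < 1e-12   # net rate equals r, independent of N

# Item 3: E = heff * f with heff = n * h; scaling n by an integer k
# requires scaling f by 1/k to keep the single particle energy fixed.
h = 6.62607015e-34
n_i, f_i = 10**12, 15.0      # illustrative initial n and frequency
k = 7                        # heff increased by integer factor k
n_f, f_f = k * n_i, f_i / k
assert abs(n_i * h * f_i - n_f * h * f_f) < 1e-30
```

If instead heff is reduced at fixed frequency, the energy difference (n_i - n_f)·h·f is liberated, which is the case stated to proceed without external energy feed.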

Question related to TGD inspired description of phase transitions

The natural questions are for instance following ones.

  1. The general classification of thermodynamical phase transitions is in terms of order: the order of the lowest discontinuous derivative of the free energy with respect to some of its arguments. In the catastrophe theoretic description one has a hierarchy of criticalities of free energy as a function of control variables (also behavior variables other than free energy are possible) and phase transitions, with phase transitions corresponding to a catastrophe containing a catastrophe... such that the order increases. For instance, for the cusp catastrophe one has a lambda-shaped critical line and a critical point at its tip. Thom's catastrophe theory description is mathematically very attractive but I think that it has problems on the experimental side. It indeed applies to a flow dynamics defined by the gradient of a potential, and thermodynamics is something different.

    In the TGD framework the sum of the Kähler function defined by the real Kähler action in Euclidian space-time regions and the imaginary Kähler action from Minkowskian space-time regions, defining a complex quantity, replaces free energy. This is in accordance with the vision that quantum TGD can be seen as a complex square root of thermodynamics. The situation is now infinite-dimensional and the catastrophe set would also be infinite-D. The hierarchy of isomorphic super-conformal algebras defines an infinite hierarchy of criticalities with levels labelled by Planck constants, and the catastrophe theoretic description seems to generalize.

    Does this general description of phase transitions at the level of the dark magnetic body (field body is the more general notion but I will talk about the magnetic body (MB) in the sequel) make it possible to understand also thermodynamical phase transitions as being induced from those for dark matter at MB?

  2. Quantum TGD can be formally regarded as a square root of thermodynamics. Does this imply "thermal holography", meaning that single particle states can represent the ensemble state as the square root of the thermal state of the ensemble? Could one unify the notions of thermal and quantum phase transition and include also the phase transitions changing heff? Could MB make this possible?
  3. How does the TGD description relate to the standard description? TGD predicts that conformal gauge symmetries correspond to a fractal hierarchy of isomorphic conformal sub-algebras. Only the lowest level with maximal conformal symmetry matters in the standard theory. Are the higher "dark" levels something totally new or do they appear also in the description of ordinary phase transitions? What is the precise role of symmetries and symmetry changes in the TGD description, and is this consistent with the standard description? Here the notion of field body is highly suggestive: the dynamics of the field body could induce the dynamics of ordinary matter also in phase transitions.
There is a long list of questions related to various aspects of TGD based description of phase transitions.
  1. In the TGD framework NMP applying to a single system replaces second law applying to an ensemble as the fundamental description. Second law follows from the randomness of the state function reduction for ordinary matter and, in long length and time scales, from the ultimate occurrence of state function reductions to the opposite boundary of CD in the ensemble. How does this affect the description of phase transitions? NMP has non-trivial implications only for dark matter at MB since NMP favors the preservation and even generation of negentropic entanglement (NE). Does NMP imply that MB plays a key role in all phase transitions?
  2. Does strong form of holography of TGD reduce all transitions in some sense to this kind of 2-D quantum critical phase transitions at fundamental level? Note that partonic 2-surfaces can be seen as carriers of effective magnetic charges and string world sheets carrying spinor modes accompany magnetic flux tubes. Could underlying conformal gauge symmetry and its change have practical implications for the description of all phase transitions, even 3-D and thermodynamical phase transitions?
  3. Could many-sheetedness of space-time - in particular the associated p-adic length scale hierarchy - be important, and could one identify the space-time sheets whose dynamics controls the transition? Could the fundamental description in terms of quantum phase transitions relying on strong form of holography apply to all phase transitions? Could dark phases at MB be the key to the description of also ordinary thermodynamical phase transitions? Could one see the dark MB as the master and ordinary matter as the slave and reduce the description of all phase transitions to the dark matter level?

    Could the change of heff for dark matter at the field body accompany any phase transition - even a thermodynamical one - or only quantum critical phase transitions at some level in the hierarchy of space-time sheets? Or are also phase transitions involving no change of heff possible? Do ordinary phase transitions correspond to these? What is the role of heff changing "transitons" and their dynamical symmetries?

  4. The huge vacuum degeneracy of Kähler action implies that any space-time surface with CP2 projection that is a Lagrangian manifold, and has therefore dimension not larger than two, is a vacuum extremal. The small deformations of these vacuum extremals define preferred extremals. One expects that this vacuum degeneracy implies an infinite number of ground states as in the case of spin glass (a magnetized system consisting of regions with different directions of magnetization). One can speak of 4-D spin glass. Could the hierarchy of Planck constants labelling different quantum phases and the phase transitions between these phases be interpreted in terms of the 4-D spin glass property? Besides phases one would have also phase transitions having "transitons" as building bricks.

    It seems that one cannot assign 4-D spin glass dynamics to MB. If magnetic flux tubes are carriers of monopole flux, they cannot be small local deformations of vacuum extremals for which Kähler form vanishes. Hence the 4-D spin glass property can be assigned to flux tubes carrying vanishing magnetic flux. Early cosmology suggests that cosmic strings - infinitely thin flux tubes having 2-D CP2 projection and carrying monopole flux - are deformed to magnetic flux tubes and suffer topological condensation around vacuum extremals and deform them during the TGD counterpart of the inflationary period.

    Comment: The glass state looks like a transition rather than a state, and ZEO and the 4-D spin glass description would seem to fit naturally to this situation: glass would be a 4-D variant of spin glass. The time scale of the transition is long, and one might think that heff at the space-time sheet "controlling" the transition is rather large and also the change of heff is large.

Symmetries and phase transitions

The notion of symmetry is considerably more complex in the TGD framework than in the standard picture based on positive energy ontology. There are dynamical symmetries of dark matter states located at the boundaries of CD. For space-time sheets describing phase transitions there are also dynamical symmetries but they are different. In standard physics one has just states and their symmetries. Conformal gauge symmetries form a hierarchy: in conformal field theories this symmetry is maximal and the hierarchy is absent.

  1. There is an important and very delicate difference between thermal and quantal symmetries. Thermal symmetries are due to thermal equilibrium implying symmetries in a statistical sense. Quantal symmetries correspond to representations of a symmetry group and are possible if thermal fluctuations do not transform the states of a representation into states of another representation.

    Dark dynamical symmetries are quantum symmetries. The breaking of the thermal translational symmetry of a liquid leads to the discrete translational symmetry of a crystal having an interpretation as a quantum symmetry. The generation of continuous thermal translational symmetry from the discrete quantum symmetry means a loss of quantum symmetry. In my opinion, standard thinking is sloppy here.

  2. For thermodynamical phase transitions temperature reduction induces spontaneous breaking of symmetry: consider the liquid-to-crystal transition. Analogously, in gauge theories the reduction of gauge coupling strength leads to spontaneous symmetry breaking: quantum fluctuations combine representations of a sub-group to a representation of a larger group. It would seem that spontaneous symmetry breaking actually brings in a symmetry and that the unbroken symmetry is "thermal" or pure gauge symmetry. QCD serves as an example: as the strong coupling strength (analogous to temperature) becomes large, confinement occurs and color symmetry becomes a pure gauge symmetry.
  3. In TGD the new feature is that there are two kinds of symmetries for dark conformal hierarchies. Symmetries are either pure gauge symmetries or genuine dynamical symmetries affecting the dark state at the field body physically. As heff increases, the conformal pure gauge symmetry is reduced (the conformal gauge algebra annihilating the states becomes smaller) but the dynamical symmetry associated with the degrees of freedom above measurement resolution increases. In ordinary conformal theories pure gauge conformal symmetry is always maximal so that this phenomenon does not occur.

    The intuitive picture is that the increase of dynamical symmetry induced by the reduction of pure gauge conformal symmetry occurs as temperature is lowered and quantum coherence in longer scales becomes possible. This conforms with the thermodynamical and gauge theory views if pure gauge symmetry is identified as counterpart of symmetry as it is understood in thermodynamics and gauge theories.

    The dynamical symmetry of dark matter however increases. This symmetry is something new and would be genuine quantum symmetry in the sense that quantum fluctuations respect the representations of this group. The increase of heff indeed implies reduction of Kähler coupling strength analogous to reduction of temperature so that these quantum symmetries can emerge.

  4. There is also a dynamical symmetry associated with the phase transitions heff(f)=m× heff(i) such that m would define the rank of the ADE Lie group G classifying the states of "transitons". The ranks ni and nf would correspond to the Lie groups G in the initial and final states. G would correspond to either a gauge (not pure gauge) or Kac-Moody symmetry, as also for the corresponding dynamical symmetry groups associated with the phases.
  5. An interesting question relates to the Kosterlitz-Thouless phase transition, which is 2-D and for which the symmetry is not changed. Could one interpret it as a phase transition changing heff for MB: the symmetry group as an abstract group would not change, although the scale in which it acts would change: this is like taking a zoom. The dynamical symmetry group assignable to dark matter at flux tubes would however change but remain hidden.
To sum up, the notion of magnetic (field) body might apply even to ordinary phase transitions. Dark symmetries - also discrete translational and rotational symmetries - would be assigned to the dark MB possibly present also in ordinary phases. The dynamical symmetries of MB would bring a new element to the description. Ordinary phase transitions would be induced by those of MB. This would generalize the vision that MB controls the biological body, central to the TGD view about living matter. In the spirit of the slaving hierarchy and the TGD inspired vision about quantum biology, ordinary matter would be the slave and MB the master, and the description of the phase transitions in terms of the dynamics of the master could be much simpler than the standard description. This would be a little bit like understanding a technical instrument from the knowledge of its function and from the control level rather than from its mere physical structure.

For background see the article What's new in TGD inspired view about phase transitions? or the chapter Criticality and dark matter.



What can ZEO give to the description of criticality?

One should clarify what quantum criticality exactly means in the TGD framework. In positive energy ontology the notion of state becomes fuzzy at criticality. It is difficult to assign long range fluctuations and the associated quanta to either of the phases coexisting at criticality, since they are most naturally associated with the phase change itself. Hence Zero Energy Ontology (ZEO) might show its power in the description of (quantum) critical phase transitions.

  1. Quantum criticality could correspond to zero energy states for which the value of heff differs at the opposite boundaries of the causal diamond (CD). The space-time surface between the boundaries of CD would describe the transition classically. If so, then the quanta for long range fluctuations would be genuinely 4-D objects - "transitons" - allowing a proper description only in ZEO. This could apply quite generally to the excitations associated with quantum criticality. Living matter is a key example of quantum criticality, and here "transitons" could be seen as building bricks of behavioral patterns. Maybe it makes sense to speak even about Bose-Einstein condensates of "transitons".
  2. Quantum criticality would be associated with the transition increasing neff=heff/h by an integer factor m, or with its reversal. Large heff phases as such would not be quantum critical, as I have sloppily stated in several contexts earlier. neff(f) =m × neff(i) would correspond to a phase having longer long range correlations than the initial phase. Maybe one could say that on one side of criticality (say the "lower" end of CD) the neff(f)=m × neff(i) excitations are pure gauge excitations and thus "below measurement resolution" but become real on the other side of criticality (the "upper" end of CD)? The integer m would have a clear geometric interpretation: each sheet of the ni-fold covering defining the space-time surface, with sheets coinciding at the other end of CD, would be replaced with its m-fold covering. Several replications of this kind or their reversals would be possible.
  3. The formation of m-fold covering could be also interpreted in terms of an inclusion of hyper-finite factors labelled by integer m. This suggests a deep connection with symmetries of dark matter. Generalizing the McKay correspondence between finite subgroups of SU(2) characterizing the inclusions and ADE type Lie groups, the Lie group G characterizing the dynamical gauge group or Kac-Moody group for the inclusion of HFFs characterized by m would have rank given by m (the dimension of Cartan algebra of G).

    These groups are expected to be closely related to the inclusions for the fractal hierarchy of isomorphic sub-algebras of the super-symplectic algebra. heff/h=n could label the sub-algebras: the conformal weights of the sub-algebra are n-multiples of those of the entire algebra. If the sub-algebra with the larger value of neff annihilates the states, it effectively acts as a normal subgroup, and one can say that the coset space of the two super-conformal groups acts either as a gauge group or (perhaps more naturally) a Kac-Moody group. The inclusion hierarchy would allow one to realize all ADE groups as dynamical gauge groups or, more plausibly, as Kac-Moody type symmetry groups associated with dark matter and characterizing the degrees of freedom allowed by finite measurement resolution.

  4. It would be natural to assign "transitons" to the light-like 3-surfaces representing parton orbits between the boundaries of CD. I have indeed proposed that Kac-Moody algebras are associated with parton orbits whereas the super-symplectic algebra and the conformal algebra of the light-cone boundary are associated with the space-like 3-surfaces at the boundaries of CD. This picture would provide a rather detailed view about the symmetries of quantum TGD.
The number-theoretic structure of heff reducing transitions is of special interest.
  1. A phase characterized by heff/h=ni can make a phase transition only to a phase for which nf divides ni. This in principle allows a purely physics based method for finding the divisors of very large integers (the gravitational Planck constant hgr =GMm/v0=heff =n× h defines a huge integer).
  2. In TGD inspired theory of consciousness a possible application is a model for how the people known as idiot savants - unable to understand what the notion of prime means - are able to decompose large integers into prime factors (see this). I have proposed that the division into prime factors is a spontaneous process analogous to the splitting of a periodic wave characterized by wavelength λ/λ0=ni into a wave with wavelength λ/λ0 =nf, with nf a divisor of ni. This process might be a completely spontaneous sequence of phase transitions reducing the value of neff, realized geometrically as the number of sheets of the singular covering defining the space-time sheet, and somehow giving rise to a direct sensory percept.
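The number-theoretic constraint can be condensed into a few lines of code. This is of course only an arithmetic cartoon of the proposed physical process, not its dynamics: the function name cascade and the convention of always dropping to the smallest allowed prime divisor are my own illustrative choices.

```python
# Arithmetic cartoon of the proposed divisor cascade: a phase with
# n_eff = n_i may decay only to n_f dividing n_i, so a sequence of such
# transitions walks down the divisor lattice and, if each step divides
# out a single prime, records the prime factorization of n_i.

def cascade(n):
    """Repeat the allowed transition n -> n/p for the smallest prime p dividing n."""
    factors = []
    p = 2
    while n > 1:
        if n % p == 0:
            factors.append(p)   # one transition n_eff -> n_eff/p
            n //= p
        else:
            p += 1
    return factors

print(cascade(60))   # [2, 2, 3, 5]
```

Within this cartoon, each step of the cascade is an allowed heff reducing transition, and the sequence of steps spells out the prime decomposition of the initial integer.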
For background see the article What's new in TGD inspired view about phase transitions? or the chapter Criticality and dark matter.



More about BMS supertranslations

Bee had a blog posting about the new proposal of Hawking, Perry and Strominger (HPS) to solve the blackhole information loss problem.

In the article Maxwellian electrodynamics is taken as a simpler toy example.

  1. One can assign conserved charges to gauge transformations. Gauge invariance tells that these charges vanish for all gauge transformations which approach the trivial transformation at infinity. Now however it is assumed that this need not happen. The assumption that the action is invariant under these gauge transformations requires that the radial derivative of the function Φ defining the gauge transformation approaches zero at infinity, but the gauge transformation can be non-trivial in the angle coordinates of the sphere S2 at infinity. Allowing these gauge transformations implies an infinite number of conserved charges, and QED is modified. The conserved gauge charges are generalizations of the ordinary electric charge defined as an electric flux (defining zero energy photons too) and reduce to electric gauge fluxes with the electric field multiplied by Φ.
  2. For Maxwell's theory the ordinary electric charge defined as a gauge flux must vanish. The coupling to say spinor fields changes the situation, and due to the coupling the charge as flux is expressible in terms of fermionic oscillator operators and those of the U(1) gauge field. For non-constant gauge transformations the charges are at least formally non-trivial even in the absence of the coupling to fermions and linear in the quantized U(1) gauge field.
  3. Since these charges are constants of motion and linear in bosonic oscillator operators, they create or annihilate gauge boson states with vanishing energy: hence the term soft hair. Holographists would certainly be happy since the charges could be interpreted as representing pure information. If one considers only the part of the charge involving annihilation operators, one can consider the possibility that in quantum theory physical states are eigenstates of these "half charges" and thus coherent states, which are the quantum analogs of classical states. An infinite vacuum degeneracy would be obtained since one would have an infinite number of coherent states labelled by the values of the annihilation operator parts of the charges. A situation analogous to conformal invariance in string models is obtained if all these operators either annihilate the vacuum state or create a zero energy state.
  4. If these U(1) gauge charges create new ground states, they could carry information about the matter falling into the blackhole. A particle physicist might protest against this assumption but one cannot exclude it. It would mean a generalization of gauge invariance to allow gauge symmetries of the proposed kind. What distinguishes U(1) gauge symmetry from the non-Abelian case is that the fluxes are well-defined for U(1).
  5. In the gravitational case the conformal transformations of the sphere at infinity replace the U(1) gauge transformations. Usually conformal invariance would require that almost all conformal charges vanish, but now one does not assume this. Physical states would be eigenstates of the annihilation operator parts of the Virasoro generators Ln, analogous to coherent states, and would code for information about the ground state. In the 4-D context an interpretation as strong form of holography would make sense. The critical question is why one should give up conformal invariance as gauge symmetry in the case of blackholes.
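The coherent-state property invoked above - that an eigenstate of an annihilation operator is a coherent state - is easy to check numerically. The following is only a generic quantum mechanics illustration in a truncated Fock space, not specific to the HPS soft-hair charges.

```python
import numpy as np

# Generic quantum-mechanics check of the coherent-state property used
# above: a coherent state |alpha> satisfies a|alpha> = alpha|alpha>.
# Truncated Fock space; purely illustrative.

N = 40                                          # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)      # a|n> = sqrt(n)|n-1>
alpha = 1.5 + 0.5j

n = np.arange(N)
log_fact = np.cumsum(np.log(np.maximum(n, 1)))  # log(n!) for n = 0..N-1
c = alpha**n / np.exp(log_fact / 2)             # c_n ~ alpha^n / sqrt(n!)
c = c / np.linalg.norm(c)                       # normalize in the truncation

residual = np.linalg.norm(a @ c - alpha * c)
print(residual < 1e-8)                          # True up to truncation error
```

The residual comes only from the cut-off at n = N-1 and is negligible for |alpha| of order one, so the eigenvalue equation holds to machine precision.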
It is interesting to look at the TGD analogs of the BMS supertranslation symmetries - not for solving problems related to blackholes, since TGD is not plagued by these problems, but because the analogs of these symmetries are very important in the TGD framework.
  1. In the TGD framework the conformal transformations of the boundary of CD correspond to the analogs of BMS transformations. Actually one has conformal transformations not only of the sphere (with a constant value of the radial coordinate labeling the points of the light rays emerging from the tip of the light-cone boundary) but also in the radial degrees of freedom, so that the conformal symmetries generalize. This happens only in the case of 4-D Minkowski space and also for the light-like 3-surfaces defining the orbits of partonic 2-surfaces. One actually obtains a huge generalization of conformal symmetries. As a matter of fact, Bee wondered whether the information related to the radial degrees of freedom is lost: one might argue that holography eliminates them.
  2. Amusingly, one obtains also the analogs of U(1) gauge transformations in TGD! In the TGD framework the symplectic transformations of the light-cone boundary times CP2 act like U(1) gauge transformations but are not gauge symmetries of Kähler action except for vacuum extremals! This is assumed in the argument of the article to give the blackhole its soft hair, but without any reasonable justification. One can assign to these symmetries an infinite number of non-trivial conserved charges: the super-symplectic algebra plays a fundamental role in the construction of the geometry of the "World of Classical Worlds" (WCW).

    At the imbedding space level the counterpart of the sphere at infinity in TGD is the sphere at which the light-cone boundaries defining the boundary of the causal diamond (CD) intersect. At the level of space-time surfaces the light-like orbits of the partonic 2-surfaces, at which the signature of the induced metric changes, are the natural counterparts of the 3-surface at infinity.

    In the TGD framework the Noether charges vanish for some subalgebra of the entire algebra isomorphic to it, and one obtains a hierarchy of quantum states (an infinite number of hierarchies actually) labelled by an integer identifiable in terms of Planck constant heff/h=n. If colleagues managed to realize that BMS has a huge generalization in the situation when space-times are surfaces in M4×CP2, the floodgates would be open.

    One obtains a hierarchy of breakings of superconformal invariance, which for some reason has remained undiscovered by string theorists. The natural next discovery would be that one indeed obtains this kind of hierarchy by demanding that the conformal gauge charges still vanish for a sub-algebra isomorphic with the original one. It will be interesting to see who makes the discovery. String theorists have also failed to realize the completely unique aspects of generalized conformal invariance at the 3-D light-cone boundary, raising dimension D=4 to a completely unique role. To say nothing of the fact that M4 and CP2 are twistorially completely unique. I would continue the list but it seems that the emergence of the superstring elite has made independent thinking impossible, or at least the communication of the outcomes of independent thinking.

Does one obtain the analogs of generalized gauge fluxes for Kähler action in TGD framework?
  1. The first thing to notice is that the Kähler gauge potentials are not the primary dynamical variables. This role is taken by the imbedding space coordinates. The symplectic transformations of CP2 act like gauge transformations mathematically but affect the induced metric so that Kähler action does not remain invariant. The breaking is small due to the weakness of the classical gravitation. Indeed, if the symplectic transformations are to define isometries of WCW, they cannot leave Kähler action invariant since the Kähler metric would be trivial! One can deduce the symplectic charges as Noether charges, and they might serve as analogs of the somewhat questionable generalized gauge charges in the HPS proposal.
  2. If the counterparts of the gauge fluxes make sense, they must be associated with the partonic 2-surfaces serving as basic building bricks of elementary particles. Field equations do not follow from independent variations of the Kähler gauge potential but from those of the imbedding space coordinates. Hence the identically conserved Kähler current does not vanish for all extremals. Indeed, so called massless extremals (MEs) can carry a non-vanishing light-like Kähler current, whose direction in the general case varies. MEs are analogous to laser beams, and if the current carries Kähler charge this means that one has a massless charged particle.
  3. Since Kähler action is invariant also under ordinary gauge transformations, one can formally derive the analog of a conserved gauge charge for a non-constant gauge transformation Φ. The question is whether this current has any physical meaning.

    One obtains the current as the contraction of the Kähler form and the gradient of Φ:

    j^α_Φ = J^{αβ} ∂_βΦ ,

    which is conserved only if the Kähler current vanishes, so that Maxwell's equations hold true, or if the contraction of the Kähler current with the gradient of Φ vanishes:

    j^α ∂_αΦ = 0 .

    The construction of preferred extremals leads to the proposal that the flow lines of the Kähler current are integrable in the sense that one can assign a global coordinate Ψ to them. This means that the Kähler current is proportional to the gradient of Ψ:

    j^α = g^{αβ} ∂_βΨ .

    This implies that the gradients of Φ and Ψ are orthogonal. If the Kähler current is light-like, as it is for the known extremals, the gradient of Φ is a superposition of the light-like gradient of Ψ and of two gradients in a sub-space of the tangent space analogous to the space of two physical polarizations. Essentially the local variant of the polarization-wave vector geometry of the modes of the radiative solutions of Maxwell's equations is obtained. What is however important is that superposition is possible only for modes with the same local direction of wave vector (∇Ψ) and local polarization.

    The Kähler current would be a scalar function k times the gradient of Ψ:

    j^α = k g^{αβ} ∂_βΨ .

    The construction of preferred extremals generalizing at least MEs suggests that the extremals define two light-like coordinates and two transversal coordinates.

  4. The conserved current decomposes to a sum of interior and boundary terms. Consider first the boundary term. The boundary contributions to the generalized gauge charge are given by the generalized fluxes

    Q_{δ,Φ} = ∮ J^{tn} Φ g^{1/2}

    over the partonic 2-surfaces at which the signature of the induced metric changes from Euclidian to Minkowskian. These contributions come from both sides of the partonic 2-surface, corresponding to Euclidian and Minkowskian metric, and they differ by an imaginary unit coming from g^{1/2} at the Minkowskian side. Q_{δ,Φ} could vanish since g^{1/2} approaches zero because the signature of the induced metric changes at the orbit of the partonic 2-surfaces. What happens depends on how singular the electric component of the gauge potential is allowed to be. The weak form of electric-magnetic duality proposed as a boundary condition implies that the electric flux reduces to a magnetic flux, in which case the result would be the magnetic flux weighted by Φ.

  5. Besides this there is an interior contribution, which is the Kähler current multiplied by -Φ:

    Q_{int,Φ} = ∫ j^t Φ g^{1/2} .

    This contribution is present for MEs.

  6. Could one interpret these charges as genuine Noether charges? Maybe! The charges seem to have physical meaning and they depend on the extremals. The functions Φ could even have some natural physical interpretation. The modes of the induced spinor fields are localized at string world sheets by the strong form of holography and by the condition that electric charge is a well defined notion for them. The modes correspond to complex scalar functions analogous to the powers z^n associated with the modes of conformal fields. Maybe the scalar functions could be assigned to the second quantized fermions. Note that one cannot interpret these contributions in terms of oscillator operators since the second quantization of the induced gauge fields does not make sense. This would conform with the strong form of holography, which in the TGD framework means that the descriptions in terms of fundamental fermions and in terms of the classical dynamics of Kähler action are dual. This duality suggests that the quantal variants of the generalized Kähler charges are expressible in terms of fermionic oscillator operators generating also bosonic states as analogs of bound states. The generalized charge eigenstates might also be seen as analogs of coherent states.
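For completeness, the conservation condition for the current j^α_Φ stated earlier follows from a one-line computation, sketched here in standard notation with j^β_K = ∇_α J^{αβ} denoting the Kähler current:

```latex
\nabla_\alpha j^\alpha_\Phi
  = \nabla_\alpha\left(J^{\alpha\beta}\,\partial_\beta\Phi\right)
  = \left(\nabla_\alpha J^{\alpha\beta}\right)\partial_\beta\Phi
  + J^{\alpha\beta}\,\nabla_\alpha\partial_\beta\Phi
  = j^\beta_K\,\partial_\beta\Phi .
```

The second term vanishes since J^{αβ} is antisymmetric while ∇_α∂_βΦ is symmetric. Hence conservation requires either a vanishing Kähler current (Maxwell's equations) or the orthogonality of the Kähler current and the gradient of Φ, in accordance with the two alternatives listed above.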
See the article TGD view about blackholes and Hawking radiation or the chapter Criticality and dark matter.



Solution of the Ni62 mystery of Rossi's E-Cat

In my blog a reader calling himself Axil made a highly interesting comment. He told that in the cold fusion ashes from Rossi's E-Cat there is a 100 micrometer sized block containing almost pure Ni62 isotope. This is one of the Ni isotopes but not the lightest Ni58, whose isotope fraction is 67.8 per cent. Axil gave a link providing additional information and I dare to take the freedom to attach it here.

The Ni62 finding looks really mysterious. One interesting observation is that the size 100 micrometers of the Ni62 block corresponds to the secondary p-adic length scale for W bosons. Something deep? Let us however forget this clue.

One can imagine all kinds of exotic solutions, but my guess is that it is the reaction kinetics - "dark fusion + subsequent ordinary fusion repeated again and again" - which leads to a fixed point, namely enrichment by the Ni62 isotope. This is like an iteration. This guess seems to work!

  1. The reaction kinematics in the simplest case involves three elements.
    1. The addition of protons to stable isotopes of Ni. One can add N=1,2,... protons to a stable isotope of Ni to get the dark nuclear string NiX+N protons. As such these are not stable due to Coulomb repulsion.
    2. The allowed additional stabilising reactions are dark W boson exchanges, which transfer charge between separate dark nuclear strings at flux tubes. Beta decays are very slow processes since the outgoing W boson decaying to electron and neutrino is very massive. One can forget them. Therefore dark variants of nuclei decaying by beta decay are effectively stable.
    3. The generation of dark nuclei and their transformation to ordinary nuclei occurs repeatedly. The decay products serve as the starting point for the next round. One starts from the stable isotopes NiX, X=58, 60, 61, 62, 64, and adds protons, some of which can transform to neutrons by dark W exchange. The process produces from the isotope NiX heavier isotopes NiY, Y= X+1, X+2,.., plus isotopes of Zn with Z=30 instead of 28, which are beta stable in the time scale considered. Let us forget them.
  2. The key observation is that this iterative kinematics necessarily increases the mass number! The first guess is that starting from say X=58 one unavoidably ends up with the most massive stable isotope of Ni. The problem is however that Ni62 is not the heaviest stable isotope of Ni: it is Ni64! Why does the sequence not continue up to Ni64?

    The problem can be solved. The step Ni62 → Ni62+p leads to Cu63, which is the lightest stable isotope of copper. Since Cu63 is stable, no dark W exchanges occur anymore and the iteration stops! It works!

  3. But how are so huge pieces of Ni62 possible? If dark W bosons are effectively massless only below the atomic length scale - the minimal requirement - one cannot expect the pieces to be much larger than the atomic length scale. The situation changes if the Planck constant for dark weak interactions is so large that the scaled up weak scale corresponds to a secondary p-adic length scale. This requires heff/h ≈ 2^45 ≈ 3.5 × 10^13. The values of Planck constant in the TGD inspired model of living matter are of this order of magnitude and imply that 10 Hz EEG photons have energies in the visible and UV range and can transform to ordinary photons identifiable as bio-photons, ideal for the control of biomolecular transitions! 100 micrometers is in turn the size scale of a large neuron! So large a value of heff/h would also help to understand why a large breaking of parity symmetry, realized as chiral selection, is possible in cellular length scales.
Clearly, this kind of fixed point dynamics is a unique feature of the proposed dark fusion dynamics and provides an easily testable prediction of the TGD based model. Natural isotope fractions are not produced. Rather, the heaviest stable isotope dominates unless there is a lighter stable isotope which gives rise to a stable isotope by the addition of a proton.
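The stopping argument can be condensed into a toy iteration. The code below is only a sketch of the claimed fixed point logic, not of the full kinetics: the halting condition (stop as soon as proton addition would yield a stable Cu isotope) and the function name chain_endpoint are my own simplifications.

```python
# Toy model of the proposed enrichment iteration: starting from a stable
# Ni isotope, each round adds a proton which a dark W exchange converts
# to a neutron, stepping Ni_A -> Ni_{A+1}. The chain halts when proton
# addition already yields a stable Cu isotope, so that no further
# conversion back to Ni (and hence no further mass growth) occurs.

STABLE_CU = {63, 65}            # stable copper mass numbers

def chain_endpoint(a_start):
    """Mass number at which the Ni_A -> Ni_{A+1} chain terminates."""
    a = a_start
    while (a + 1) not in STABLE_CU:
        a += 1                  # proton added and converted to neutron
    return a

for a0 in (58, 60, 61, 62):     # stable Ni isotopes below Ni64
    print(f"Ni{a0} -> Ni{chain_endpoint(a0)}")
# every chain starting below Ni63 terminates at Ni62
```

Within this cartoon all the lighter stable Ni isotopes converge to Ni62, reproducing the claimed enrichment; Ni64 would sit at its own fixed point since Ni64+p gives the stable Cu65.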

See the article Cold Fusion Again or the chapter with the same title.



To the index page