What's new in

HYPER-FINITE FACTORS, P-ADIC LENGTH SCALE HYPOTHESIS, AND DARK MATTER HIERARCHY

Note: Newest contributions are at the top!



Year 2015

Could electrolysis involve dark matter and new physics?

Over the years I have tried many times to understand what happens in electrolysis, and every time I have been forced to admit that I do not! A very embarrassing observation. I have tried again and again to gain wisdom from an old 1000-page chemistry book, but always in vain. This is especially embarrassing because a unified theory builder, to be taken seriously, is expected to build brave new brane worlds in 11 or 12 dimensions to possibly explain a possibly detected particle at mass 750 GeV at LHC, instead of trying to understand age-old little problems solved aeons ago. The wow-coefficient of chemistry is zero as compared to the awesome 10^500 of M-theory.

Energetics has been my personal problem (besides funding). I learn from the chemistry book that an electric field - say a voltage of 2 V over 1 mm - splits molecules into ions. The bond energies of molecules are in the few eV range. For instance, the O-H bond has 5 eV energy. An electric field of E = 2 V/mm corresponds to an electrostatic energy gain eEd ∼ 2×10^-7 eV for a unit charge moving over a bond length d ∼ 0.1 nm from one end of the bond to the other. This is an incredibly small energy and to my understanding should have absolutely no effect on the state of the molecule. Except that it has!
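
To make the mismatch concrete, here is a one-line Python check; the 0.1 nm bond length is an assumed typical value, not a number from the text.

```python
# Energy gained by a unit charge crossing one bond length in a 2 V/mm field.
field_V_per_m = 2.0 / 1e-3     # 2 V per mm expressed in V/m
bond_length_m = 1e-10          # ~0.1 nm, an assumed typical bond length

E_eV = field_V_per_m * bond_length_m   # energy in eV for charge e
print(f"energy gain ~ {E_eV:.1e} eV vs ~5 eV O-H bond energy")  # ~2e-7 eV
```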

A heretic thought: could it be that chemists have simply accepted this fact (very reasonably!) and built their models as mathematical parameterizations without any attempt to understand what really happens? Could the infinite vanity of theoretical physicists have prevented them from lowering themselves to the intellectual level of chemists, and thus prevented them from seeing that electrolysis is not understood at all?

In order for this kind of energy to have an effect as drastic as splitting a molecule into pieces, the system molecule + yet unidentified "something" must be in a critical state: something at the top of a hill, so that even the slightest perturbation makes it fall down. The technical term is criticality, or even quantum criticality.

  1. Biological systems are critical systems, extremely sensitive to small changes. Here criticality means criticality against molecular ionization - basically charge separation. This criticality is present also in electrolysis. Both DNA and the cell are negatively charged, and inside cells there are various kinds of ions. In the TGD Universe all matter is quantum critical.
  2. Charge separation occurs also in Pollack's experiments in which the fourth phase of water is generated. This phase contains negatively charged regions with effective H1.5O stoichiometry (a hydrogen bonded state of two water molecules which has lost a proton). The positive charge associated with the lost protons has gone outside these regions.
What produces quantum criticality against charge separation? What is this unidentified "something" besides the system? The magnetic body carrying dark matter! This is the answer in the TGD Universe. The TGD inspired model assumes that protons transform to dark protons at dark magnetic flux tubes possibly carrying monopole flux. If these protons form dark nuclei, the liberated dark nuclear energy can split further O-H bonds and transform more protons to the dark phase. The energy needed is about 5 eV and is in the dark nuclear binding energy scale, which scales as 1/heff (like distance), if the size scale of dark protons, proportional to heff/h, is 1 nm. One would have heff/h ≈ 10^6: the size scale of DNA codons - not an accident in the TGD Universe. The liberated dark nuclear energy can ionize other molecules such as KOH, NaOH, HCl, Ca(OH)2, CaO,...
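
A minimal sketch of the scaling argument, assuming a typical ordinary nuclear binding energy of ~7 MeV per nucleon (a standard value, not taken from the text):

```python
# Binding energies scale like 1/(heff/h): with heff/h ~ 1e6 (dark proton size
# scale ~1 nm, as in the text), MeV-scale nuclear binding energies drop to
# the eV scale of chemical bonds such as O-H.
E_nuclear_eV = 7.0e6       # ~7 MeV per nucleon, typical for ordinary nuclei
heff_ratio = 1e6           # heff/h from the text

E_dark_eV = E_nuclear_eV / heff_ratio
print(f"dark nuclear binding energy scale ~ {E_dark_eV:.0f} eV")   # ~7 eV
```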

The entire spectrum of values of heff/h is possible. For laser pulse induced fusion (see the article), assumed to induce longitudinal compression, one would have heff/h ≈ 10^3. Dark nuclear physics with non-standard values of Planck constant would be a crucial element of electrolysis. Condensed matter physics and nuclear physics would not live in totally separate compartments, and dark matter and ordinary matter would interact! How humiliating for theoreticians! I do not hear the derisive laughter of superstring theoreticians anymore!

Ordinary electrolysis would thus produce dark nuclei. The problem is that most of them would leak out of the system along dark flux tubes, and the potentially available nuclear energy is lost! As are various elements so badly needed by the modern techno-society! For instance, in the splitting of water to hydrogen, the flux tubes assignable to the beam containing hydrogen would take the dark nuclei away. Could one transform dark nuclei to ordinary ones?

  1. If this beam collides with, say, a metal target, some fraction of the dark nuclei could transform to ordinary nuclei and liberate a really huge energy: the difference between the nuclear binding energies of the initial and final states would be essentially the binding energy of the final state, unlike in ordinary nuclear fusion.
  2. In particular, electrodes could induce the transformation of dark nuclei to ordinary ones. Even in the experiments of Pons and Fleischmann the role of the porous Pd target could be secondary: it would only be a target allowing the dark nuclei produced by bubble fusion to transform to ordinary nuclei, and the large surface area would help in this respect. The same applies to Rossi's E-Cat.
  3. So called Brown's gas generated in the splitting of water is claimed to be able to melt metals although its temperature is relatively low - around 100 Celsius. The claim is of course not taken seriously by "serious" scientists, as the Wikipedia article so clearly demonstrates. It could however be understood if the melting is caused by the transformation of dark nuclei to ordinary ones. Also the corrosion of metallic surfaces in the presence of cavitating water would be due to the dark nuclear energy. Not all of the energy would be used to produce corrosive effects, and I have in some discussions been told that in electric plants an anomalous production of energy assignable to corrosive effects in turbines has been observed. Electric plants could have served secretly as dark nuclear plants! Unfortunately, I do not have a reference to this claim. Cavitating water affects aluminium disks corrosively also in the experiments of LeClair (the LeClair effect is discussed here, and a TGD inspired model for it appears in a later article): LeClair might have reinvented Brown's gas!

    But why metals? The surface of a metal in an external electric field carries a negative charge density of conduction electrons. Could it be that they attract the positively charged dark nuclei from the magnetic flux tubes back to the visible world, and help them to transform back to ordinary nuclei? Conductors in electric fields would thus help to transform dark nuclei to ordinary matter.

  4. Brown's gas is reported to have no effect on living matter. Why? If living matter uses dark nuclear physics as a basic tool, it should have developed tools to avoid the transformation of dark nuclei to ordinary nuclei in an uncontrollable manner. What aspect of quantum biophysics could make this possible? Negentropy Maximization Principle, defining the basic variational principle of the TGD inspired theory of consciousness, could be the general reason preventing this transformation (see this). The negentropy characterizing negentropic entanglement, serving as a measure for the potentially conscious information assignable to non-standard values of heff, would be reduced if heff is reduced. But how to understand this at a more detailed level? Could the fact that bio-molecules are mostly insulators rather than electronic conductors explain this?

See the article Cold Fusion Again or the chapter with the same title.



Cold fusion again

Over the years I have developed two models of cold fusion, and in this chapter these models are combined together. The basic idea of the TGD based model of cold fusion is that cold fusion occurs in two steps. First, dark nuclei (large heff=n×h) with much lower binding energy than ordinary nuclei are formed at magnetic flux tubes possibly carrying monopole flux. These nuclei can leak out of the system along magnetic flux tubes. Under some circumstances these dark nuclei can transform to ordinary nuclei and give rise to detectable fusion products.

An essential additional condition is that the dark protons can decay to neutrons rapidly enough by exchanges of dark weak bosons, effectively massless below the atomic length scale. This makes it possible to overcome the Coulomb wall and explains why the final state nuclei are stable and why the decay to ordinary nuclei does not yield only protons. Thus it seems that this model, combined with the TGD variant of the Widom-Larsen model, could explain the existing data nicely.

I will describe the steps leading to the TGD inspired model for cold fusion, combining the earlier TGD variant of the Widom-Larsen model with the model inspired by the TGD inspired model of Pollack's fourth phase of water, using as input data the findings from laser pulse induced cold fusion discovered by Leif Holmlid and collaborators. I also consider briefly alternative options (models assuming surface plasmon polaritons or heavy electrons). After that I apply the TGD inspired model to some cases (Pons-Fleischmann effect, bubble fusion, and the LeClair effect). The model explains the strange findings about cold fusion - in particular the fact that only stable nuclei are produced - and suggests that also ordinary nuclear reactions might have a more fundamental description in terms of a similar model.

See the article Cold Fusion Again or the chapter with the same title.



Analogs of quantum matrix groups from finite measurement resolution?

The notion of quantum group replaces ordinary matrices with matrices with non-commutative elements. This notion is physically very interesting, and in the TGD framework I have proposed that it should relate to the inclusions of von Neumann algebras, which make it possible to describe mathematically the notion of finite measurement resolution (see this). These ideas have developed slowly through various side tracks.

It is interesting to consider the notion of quantum matrix inspired by the recent view about quantum TGD. It turns out that under some additional conditions this approach provides a concrete representation and physical interpretation of quantum groups in terms of finite measurement resolution.

  1. The basic idea is to replace complex matrix elements with operators, which are products of non-negative hermitian operators and unitary operators, analogous to the product of modulus and phase representing a complex number.
  2. The condition that the determinant and sub-determinants exist is crucial for the well-definedness of the eigenvalue problem in the generalized sense. The weak definition of the determinant, meaning its expansion with respect to a fixed row or column, does not pose additional conditions. The strong definition of the determinant requires its invariance under permutations of rows and columns (see the sketch after this list). The permutation of rows/columns turns out to have an interpretation as braiding for the hermitian operators defined by the moduli of the operator valued matrix elements: this should lead to quantum commutativity as realized in braid groups, with rows and columns taking the role of braids.
  3. The commutativity of all sub-determinants is essential for the replacement of eigenvalues with the eigenvalue spectra of hermitian operators, and the sub-determinants define a mutually commuting set of operators.
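
A small sympy sketch (generic non-commuting symbols, nothing TGD-specific) illustrating why invariance of the determinant under row permutations is a non-trivial condition when the matrix elements do not commute:

```python
# For a 2x2 matrix with non-commuting entries a, b, c, d, the "weak"
# determinant depends on the row used for the expansion.
import sympy as sp

a, b, c, d = sp.symbols('a b c d', commutative=False)

det_row1 = a*d - b*c     # expansion with respect to the first row
det_row2 = d*a - c*b     # expansion after permuting the two rows

# Non-zero in general: the two expansions agree only if [a,d] = [b,c].
print(sp.expand(det_row1 - det_row2))
```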

Quantum matrices define a more general structure than quantum groups but provide a concrete representation for them in terms of finite measurement resolution if q is a root of unity. For q=±1 (Bose-Einstein or Fermi-Dirac statistics) one obtains quantum matrices for which the determinant is, apart from a possible sign change, invariant under the permutations of both rows and columns. One can also understand the recursive fractal structure of inclusion sequences of hyper-finite factors resulting by replacing the operators appearing as matrix elements with quantum matrices, and a concrete connection with quantum groups emerges.

In Zero Energy Ontology (ZEO) the M-matrix, serving as the basic building brick of the unitary U-matrix and identified as a hermitian square root of density matrix, provides a possible application for this vision. Especially fascinating is the possibility of hierarchies of measurement resolutions represented as inclusion sequences realized as a recursive construction of M-matrices. Quantization would emerge already at the level of the complex numbers appearing as M-matrix elements.

This approach might make it possible to unify various ideas behind TGD. For instance, Yangian algebras emerging naturally in the twistor approach are examples of quantum algebras. The hierarchy of Planck constants should have a close relationship with the inclusions and with the fractal hierarchy of sub-algebras of super-symplectic and other conformal algebras.

See the article Analogs of quantum matrix groups from finite measurement resolution? or the chapter Evolution of Ideas about Hyper-finite Factors in TGD.



Evidence for macroscopic quantum coherence of fluid flow at criticality

Evidence for the hierarchy of Planck constants implying macroscopic quantum coherence in quantum critical systems is rapidly accumulating. Also people having the courage to refer to TGD in their articles are gradually emerging. The most recent fluid dynamics experiment providing this kind of evidence was performed by Yves Couder and Emmanuel Fort (see for instance the article Single particle diffraction in macroscopic scale). Mathematician John W. M. Bush has commented on these findings in the Proceedings of the National Academy of Sciences, and his article provides references to a series of papers by Couder and collaborators.

The system studied consists of a tray of water whose surface is made to oscillate. The intensity of the vibration is just below the critical value inducing so called Faraday waves at the surface of the water. Although the water surface is calm, a water droplet begins to bounce and generates waves propagating along the water surface - "walkers". Walkers behave like classical particles at Bohr orbits. As they pass through a pair of slits they choose a random slit, but a series of experiments produces an interference pattern. Walkers exhibit an effect analogous to quantum tunneling, and even the analogs of quantum mechanical bound states of walkers, realized as circular orbits, emerge as the water tray rotates!

The proposed interpretation of the findings is in terms of Bohm's theory. Personally I find it very difficult to believe in this since Bohm's theory has profound mathematical difficulties. Bohm's theory was inspired by Einstein's belief in classical determinism and the idea that quantum non-determinism is not actual but reduces to the presence of hidden variables. Unfortunately, this idea led to no progress.

TGD is analogous to Bohm's theory in that the classical theory is exact, but the classical theory is now only an exact classical correlate of the quantum theory: there is no attempt to eliminate quantum non-determinism. Quantum jumps occur between superpositions of entire classical time evolutions rather than their time=constant snapshots: this solves the basic paradox of the Copenhagen interpretation. A more refined formulation is in terms of zero energy ontology, which in turn forces the generalization of quantum measurement theory to a theory of consciousness.

Macroscopic quantum coherence associated with the behavior of droplets bouncing on the surface of water is suggested by the experiments. For instance, quantum measurement theory seems to apply to the behavior of a single droplet as it passes through a slit. In TGD the prerequisite for macroscopic quantum coherence would be quantum criticality at which large heff=n×h is possible. There indeed is an external oscillation of the tray containing the water, with an amplitude just below the criticality for the generation of Faraday waves at the surface of the water. Quantum classical correspondence states that quantum behavior should have a classical correlate. The basic structure of classical TGD is that of hydrodynamics in the sense that the dynamics reduces to conservation laws plus conditions expressing the vanishing of an infinite number of so called super-symplectic charges - the conditions guarantee strong form of holography and express quantum criticality. The generic solution of classical field equations could reduce to Frobenius integrability conditions guaranteeing that the conserved isometry currents are integrable and thus define global coordinates varying along the flow lines.

One should of course be very cautious. For the ordinary Schrödinger equation the system is closed; now the system is open. This is not a problem if the only function of the external vibration is to induce quantum criticality. The experiment brings to mind the old vision of Fröhlich about external vibrations as inducers of what looks like quantum coherence. In the TGD framework this coherence would be forced coherence at the level of visible matter, but the oscillation itself would correspond to genuine macroscopic quantum coherence and a large value of heff. A standard example is a collection of pendulums, which gradually start to oscillate in unison in the presence of a weak synchronizing signal. In the brain, neurons would start to oscillate synchronously in the presence of dark photons with large heff.

See the chapter Criticality and dark matter.



TGD inspired view about blackholes and Hawking radiation: part I

The most recent revelation of Hawking took place at the Hawking radiation conference held at the KTH Royal Institute of Technology in Stockholm. The title of the posting of Bee telling about what might have been revealed is "Hawking proposes new idea for how information might escape from black holes". Also Lubos has a - rather aggressive - blog post about the talk. A collaboration of Hawking, Andrew Strominger and Malcolm Perry is behind the claim, and the work should be published within a few months.

The first part of the posting gives a critical discussion of the existing approach to black holes and Hawking radiation. The intention is to demonstrate that what is in question is a pseudo problem resulting from the failure of General Relativity below the black hole horizon.

In the second part of the posting I will discuss the TGD view about blackholes and Hawking radiation. There are several new elements involved, but concerning black holes the most relevant new element is the assignment of Euclidian space-time regions to the lines of generalized Feynman diagrams, implying that also blackhole interiors correspond to this kind of regions. Negentropy Maximization Principle is also an important element and predicts that the number theoretically defined black hole negentropy can only increase. The real surprise was that the temperature of the variant of Hawking radiation at the flux tubes of the proton-Sun system is room temperature! Could the TGD variant of Hawking radiation be a key player in quantum biology?

Is information lost or not in blackhole collapse?

The basic problem is that classically the collapse to a blackhole seems to destroy all information about the matter collapsing to it. The outcome is just an infinitely dense mass point. There is also a theorem of classical GRT stating that a blackhole has no hair: the blackhole is characterized only by a few conserved charges.

Hawking has predicted that a blackhole loses its mass by generating radiation, which looks thermal. As the blackhole radiates its mass away, all information about the material which entered the blackhole seems to be lost. If one believes in standard quantum theory and unitary evolution preserving the information - and also forgets standard quantum theory's prediction that state function reductions destroy information - one has a problem. Does the information really disappear? Or is the GRT description incapable of coping with the situation? Could information find a new representation?

Superstring models and AdS/CFT correspondence have inspired the proposal that a hologram results at the horizon, and that this hologram somehow catches the information by defining the hair of the blackhole. Since the radius of the horizon is proportional to the mass of the blackhole, one can however wonder what happens to this information as the radius shrinks to zero when all the mass is Hawking radiated out.

What Hawking suggests is that a new kind of symmetry known as super-translations - a notion originally introduced by Bondi and Metzner - could somehow save the situation. Andrew Strominger has recently discussed the notion. The information would be "stored to super-translations". Unfortunately this statement says nothing to me, nor did it say anything to Bee or the New Scientist reporter. The idea however seems to be that the information carried by the Hawking radiation emanating from the blackhole interior would be caught by the hologram defined by the blackhole horizon.

Super-translation symmetry acts at the surface of a sphere with infinite radius in asymptotically flat space-times looking like empty Minkowski space in very distant regions. The action would consist of translations along the sphere plus Poincare transformations.

What comes to mind in the TGD framework is the conformal transformations of the boundary of the 4-D lightcone, which act as scalings of the radius of the sphere and as conformal transformations of the sphere. Translations however translate the tip of the light-cone, and Lorentz transformations transform the sphere to an ellipsoid, so that one should restrict to the rotation subgroup of the Lorentz group. Besides this, TGD allows a huge group of symplectic transformations of δCD×CP2 acting as isometries of WCW and having the structure of a conformal algebra with generators labelled by conformal weights.

Sharpening of the argument of Hawking

There is now a popular article explaining the intuitive picture behind Hawking's proposal. The blackhole horizon would involve a tangential flow of light, and particles of the infalling matter would induce supertranslations on the pattern of this light, thus coding information about their properties to this light. After that this light would be radiated away as an analog of Hawking radiation and would carry out this information.

The objection would be that in GRT the horizon is in no way special - it is just a coordinate singularity. The curvature tensor does not diverge either, and the Einstein tensor and Ricci scalar vanish. This argument has been used in the firewall debates to claim that nothing special should occur as the horizon is traversed. Why would light rotate around it? There is no reason for this! The answer in the TGD framework would be obvious: for the TGD analog of a blackhole the horizon is replaced with a light-like 3-surface at which the induced metric becomes Euclidian. The horizon becomes analogous to a light front carrying not only photons but all kinds of elementary particles. Particles do not fall inside this surface but remain at it!

What are the problems?

My fate is to be an aggressive dissident listened to by no one, and I find it natural to continue in the role of angry old man. Be cautious, I am arrogant, I can bite, and my bite is poisonous!

  1. With all due respect to the Big Guys, to me the problem looks like a pseudo problem caused basically by the breakdown of classical GRT. Irrespective of whether Hawking radiation is generated, the information about matter (apart from mass and some charges) is lost if the matter indeed collapses to a single infinitely dense point. This is of course very unrealistic, and the question should be: how should we proceed beyond GRT?

    A blackhole is simply too strong an idealization, and it is no wonder that Hawking's calculation using the blackhole metric as a background gives rise to blackbody radiation. One might hope that Hawking radiation is a genuine physical phenomenon and might somehow carry the information by not being genuinely thermal radiation. Here a theory of quantum gravitation might help. But we do not have it!

  2. What do we know about blackholes? We know that there are objects which can be well described by the exterior Schwarzschild metric. Galactic centers are regarded as candidates for giant blackholes. Binary systems in which the other member is invisible are candidates for stellar blackholes. One can however ask whether these candidates actually consist of dark matter rather than being blackholes. Unfortunately, we do not understand what dark matter is!
  3. Hawking radiation is extremely weak and there is no experimental evidence pro or con. Its existence assumes the existence of a blackhole, which presumably represents the failure of classical GRT. Therefore we might be seeing a lot of trouble and inspired heated debates about something which does not exist at all! This includes blackholes, Hawking radiation, and various problems such as the firewall paradox.
There are also profound theoretical problems.
  1. Contrary to the intensive media hype during the last three decades, we still do not have a generally accepted theory of quantum gravity. Superstring models and M-theory failed to predict anything at the fundamental level, and just postulate an effective quantum field theory limit, which assumes the analog of GRT at the level of the 10-D or 11-D target space to define the spontaneous compactification as a solution of this GRT type theory. Not much is gained.

    AdS/CFT correspondence is an attempt to do something in the absence of this kind of theory, but it involves 10- or 11-D blackholes and does not help much. Reality looks much simpler to an innocent non-academic outsider like me. Effective field theorizing allows intellectual laziness, and many problems of present-day physics will probably be seen in the future as being caused by this lazy approach avoiding attempts to build explicit bridges between physics at different scales. Something very similar has occurred in hadron physics and nuclear physics, and one has a kind of Augean stable to clean up before one can proceed.

  2. A mathematically well-defined notion of information is lacking. We can talk about thermodynamical entropy - a single particle observable - and also about entanglement entropy - basically a 2-particle observable. We do not have a genuine notion of information, and the second law predicts that the best one can achieve is no information at all!

    Could it be that our view about information as a single particle characteristic is wrong? Could information be associated with entanglement and be a 2-particle characteristic? Could information reside in the relationship of the object with the external world, in the communication line? Not inside the blackhole, not at the horizon, but in the entanglement of the blackhole with the external world.

  3. We do not have a theory of quantum measurement. The deterministic unitary time evolution of the Schrödinger equation and the non-deterministic state function reduction are in blatant conflict. The Copenhagen interpretation escapes the problem by saying that no objective reality/realities exist. An easy trick once again! A closely related Pandora's box is that experienced time and geometric time are very different, but we pretend that this is not the case.

    The only way out is to make the observer part of quantum physics: this requires nothing less than a quantum theory of consciousness. But the gurus of theoretical physics have shown no interest in consciousness. It is much easier and much more impressive to apply mechanical algorithms to produce complex formulas. If one takes consciousness seriously, one ends up with the question about the variational principle of consciousness. Yes, your guess was correct: Negentropy Maximization Principle! Conscious experience tends to maximize conscious information gain. But how is information represented?

In the second part I will discuss the TGD view about blackholes and Hawking radiation.

See the chapter Criticality and dark matter or the article TGD view about black holes and Hawking radiation.



TGD inspired view about blackholes and Hawking radiation: part II


In this second part of the posting I discuss the TGD view about blackholes and Hawking radiation. There are several new elements involved, but concerning black holes the most relevant new element is the assignment of Euclidian space-time regions to the lines of generalized Feynman diagrams, implying that also blackhole interiors correspond to this kind of regions. Negentropy Maximization Principle is also an important element and predicts that the number theoretically defined black hole negentropy can only increase. The real surprise was that the temperature of the variant of Hawking radiation at the flux tubes of the proton-Sun system is room temperature! Could the TGD variant of Hawking radiation be a key player in quantum biology?

The basic ideas of TGD relevant for blackhole concept

My own basic strategy is to not assume anything not necessitated by experiment or not implied by general theoretical assumptions - these of course represent the subjective element. The basic assumptions/predictions of TGD relevant for the present discussion are the following.

  1. Space-times are 4-surfaces in H=M4×CP2, and ordinary space-time is replaced with many-sheeted space-time. This solves what I call the energy problem of GRT by lifting gravitationally broken Poincare invariance to an exact symmetry at the level of the imbedding space H.

    The GRT type description is an approximation obtained by lumping together the space-time sheets to a single region of M4, with the various fields appearing as sums of the induced fields at the space-time surfaces, geometrized in terms of the geometry of H.

    The space-time surface has both Minkowskian and Euclidian regions. Euclidian regions are identified in terms of what I call generalized Feynman/twistor diagrams. The 3-D boundaries between Euclidian and Minkowskian regions have degenerate induced 4-metric, and I call them light-like orbits of partonic 2-surfaces or light-like wormhole throats, analogous to blackhole horizons and actually replacing them. The interiors of blackholes are replaced with the Euclidian regions, and every physical system is characterized by this kind of region.

    Euclidian regions are identified as slightly deformed pieces of CP2 connecting two Minkowskian space-time regions. Partonic 2-surfaces defining their boundaries are connected to each other by magnetic flux tubes carrying monopole flux.

    Wormhole contacts connect two Minkowskian space-time sheets already at the elementary particle level, and appear in pairs by the conservation of the monopole flux. A flux tube can be visualized as a highly flattened square traversing along and between the space-time sheets involved. Flux tubes are accompanied by fermionic strings carrying fermion number. Fermionic strings give rise to string world sheets carrying vanishing induced charged weak fields (otherwise em charge would not be well-defined for the spinor modes). String theory in the space-time surface becomes part of TGD. Fermions at the ends of strings can get entangled, and entanglement can carry information.

  2. Strong form of General Coordinate Invariance (GCI) states that the light-like orbits of partonic 2-surfaces on one hand and the space-like 3-surfaces at the ends of causal diamonds on the other hand provide equivalent descriptions of physics. The outcome is that partonic 2-surfaces and string world sheets at the ends of CDs can be regarded as the basic dynamical objects.

    Strong form of holography states the correspondence between the quantum description based on these 2-surfaces and the 4-D classical space-time description, and generalizes the AdS/CFT correspondence. Conformal invariance is extended to the huge super-symplectic symmetry algebra acting as isometries of WCW and having conformal structure. This explains why the 10-D space-time can be replaced with ordinary space-time, and why 4-D Minkowski space can be replaced with partonic 2-surfaces and string world sheets. This holography looks very much like the one we are accustomed to!

  3. Quantum criticality of the TGD Universe fixes the value(s) of the only coupling strength of TGD (Kähler coupling strength) as an analog of critical temperature. Quantum criticality is realized in terms of an infinite hierarchy of sub-algebras of the super-symplectic algebra acting as isometries of WCW, the "world of classical worlds" consisting of 3-surfaces - or, by holography, of the preferred extremals associated with them.

    A given sub-algebra is isomorphic to the entire algebra, and its conformal weights are n≥1 multiples of those for the entire algebra. This algebra acts as conformal gauge transformations, whereas the generators with conformal weights m<n act as dynamical symmetries defining an infinite hierarchy of simply laced Lie groups with rank n-1 acting as dynamical symmetry groups defined by the McKay correspondence, so that the number of degrees of freedom becomes finite. This relates very closely to the inclusions of hyper-finite factors - WCW spinors provide a canonical representation for them.

    This hierarchy corresponds to a hierarchy of effective Planck constants heff=n×h defining an infinite number of phases identified as dark matter. For these phases the Compton length and time are scaled up by n, so that they give rise to macroscopic quantum phases. Super-conductivity is one example of this kind of phase - the charge carriers could be dark variants of ordinary electrons. Dark matter appears at quantum criticality, and this serves as an experimental manner to produce dark matter. In living matter, dark matter identified in this manner would play a central role. Magnetic bodies carrying dark matter at their flux tubes would control ordinary matter and carry information.

  4. I started the work with the hierarchy of Planck constants from the proposal of Nottale stating that it makes sense to talk about a gravitational Planck constant hgr=GMm/v0, v0/c≤1 (the interpretation of the symbols should be obvious). Nottale found that the orbits of inner and outer planets could be modelled reasonably well by applying Bohr quantization to planetary orbits with the values of the velocity parameter differing by a factor 1/5. In the TGD framework hgr would be associated with the magnetic flux tubes mediating the gravitational interaction between the Sun with mass M and a planet or any object, say an elementary particle, with mass m. The matter at the flux tubes would be dark, as would the gravitons involved. The Compton length of a particle would be given by GM/v0 and would not depend on the mass of the particle at all (see the numerical sketch at the end of this list).

    The identification hgr=heff is an additional hypothesis motivated by quantum biology, in particular by the identification of biophotons as decay products of dark photons satisfying this condition. As a matter of fact, one can talk also about hem assignable to electromagnetic interactions: its values are much lower. The hypothesis is that when the perturbative expansion for a two particle system does not converge anymore, a phase transition increasing the value of the Planck constant occurs and guarantees that the coupling strength proportional to 1/heff decreases. This is one possible interpretation for quantum criticality. TGD provides a detailed geometric interpretation for the space-time correlates of quantum criticality.

    Macroscopic gravitational bound states are not possible in TGD without the assumption that the effective string tension associated with fermionic strings, dictated by the strong form of holography, is proportional to 1/heff^2. Otherwise the bound states would have a size scale of order Planck length, since for longer systems the string energy would be huge. heff=hgr makes astroscopic quantum coherence unavoidable. Ordinary matter is condensed around dark matter. The counterparts of black holes would be systems consisting of only dark matter.

  5. Zero energy ontology (ZEO) is a central element of TGD. There are many motivations for it. For instance, Poincare invariance in the standard sense cannot make sense since in standard cosmology energy is not conserved. The interpretation is that the various conserved quantum numbers are length scale dependent notions.

    Physical states are zero energy states with positive and negative energy parts assigned to the ends of space-time surfaces at the light-like boundaries of causal diamonds (CDs). A CD is defined as the Cartesian product of CP2 with the intersection of future and past directed lightcones of M4. CDs form a fractal length scale hierarchy. A CD defines the region about which a single conscious entity can have conscious information, a kind of 4-D perceptive field. There is a hierarchy of WCWs associated with CDs. Consciously experienced physics is always in the scale of a given CD.

    Zero energy states identified as formally purely classical WCW spinor fields replace positive energy states and are analogous to pairs of initial and final states; the crossing symmetry of quantum field theories gives the mathematical motivation for their introduction.

  6. Quantum measurement theory can be seen as a theory of consciousness in ZEO. The conscious observer, or self as a conscious entity, becomes part of physics. ZEO gives up the assumption about a unique universe of classical physics and restricts it to the perceptive field defined by the CD.

    In each quantum jump a re-creation of the Universe occurs. Subjectively experienced time corresponds to state function reductions at the fixed, passive boundary of the CD, leaving both the boundary and the state at it invariant. The state at the opposite, active boundary changes, and also its position changes, so that the CD increases, state function reduction by state function reduction, while nothing happens at the passive boundary. This gives rise to the experienced flow of geometric time, since the distance between the tips of the CD increases and the size of the space-time surfaces in the quantum superposition increases. This sequence of state function reductions is the counterpart for the unitary time evolution of ordinary quantum theory.

    Self "dies" as the first state function reduction to the opposite boundary of CD meaning re-incarnation of self at it and a reversal of the arrow of geometric time occurs: CD size increases now in opposite time direction as the opposite boundary of CD recedes to the geometric past reduction by reduction.

    Negentropy Maximization Principle (NMP) defines the variational principle of state function reduction. The density matrix of the subsystem is the universal observable, and the state function reduction leads to its eigenspaces - eigenspaces, not only eigenstates as usually assumed.

    Number theoretic entropy makes sense for algebraic extensions of rationals and can be negative, unlike ordinary entanglement entropy. NMP can therefore lead to a generation of NE if the entanglement corresponds to a unitary entanglement matrix, so that the density matrix of the final state is a higher-D unit matrix. Another possibility is that the entanglement matrix is algebraic but its diagonalization in the algebraic extension of rationals used is not possible. This is expected to reduce the rate for the reduction, since a phase transition increasing the size of the extension is needed.

    The weak form of NMP does not demand that the negentropy gain is maximal: this allows the conscious entity responsible for the reduction to decide whether to increase maximally the NE resources of the Universe or not. It can also allow a larger NE increase than otherwise. This freedom brings in the quantum correlates of ethics, moral, and good and evil. The p-adic length scale hypothesis and the existence of preferred p-adic primes follow from the weak form of NMP, and one ends up naturally with adelic physics.
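
As a warm-up for the blackhole discussion below, the magnitudes appearing in item 4 are easy to evaluate numerically. The sketch uses standard constants; v0 = 2^-11 c is an assumed value of the velocity parameter appearing in TGD texts, so the printed numbers are illustrative only.

```python
# h_gr = GMm/v0 for the proton-Sun system, and the mass-independent
# gravitational Compton length GM/(v0*c) mentioned in item 4.
G    = 6.674e-11      # m^3 kg^-1 s^-2
M    = 1.989e30       # solar mass in kg
m_p  = 1.673e-27      # proton mass in kg
hbar = 1.055e-34      # J s
c    = 2.998e8        # m/s
v0   = 2**-11 * c     # assumed velocity parameter

h_gr = G * M * m_p / v0
print(f"h_gr/hbar ~ {h_gr / hbar:.1e}")          # ~1e22 for the proton

lambda_gr = G * M / (v0 * c)                     # independent of the mass m
print(f"GM/(v0*c) ~ {lambda_gr / 1e3:.0f} km")   # ~3000 km
```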

The analogs of blackholes in TGD

Could blackholes have any analog in TGD? What about Hawking radiation? The following speculations are inspired by the above general vision.

  1. Ordinary blackhole solutions are not appropriate in TGD. The interior space-time sheet of any physical object is replaced with an Euclidian space-time region - also that of a blackhole, by a perturbation argument based on the observation that if one requires that the radial component of the blackhole metric remains finite, the horizon becomes a light-like 3-surface analogous to the light-like orbit of a partonic 2-surface, and the metric in the interior becomes Euclidian.
  2. The analog of a blackhole can be seen as a limiting case of an ordinary astrophysical object, which already has blackhole like properties due to the presence of heff=n×h dark matter particles, which cannot appear in the same vertices with visible matter. The ideal analog of a blackhole consists of dark matter only, and is assumed to satisfy the condition hgr=heff already discussed. It corresponds to a region with a radius equal to the Compton length for an arbitrary particle, R=GM/v0=rS/2v0, where rS is the Schwarzschild radius. A macroscopic quantum phase is in question, since the Compton radius of a particle does not depend on its mass. The blackhole limit would correspond to v0/c→1 and dark matter dominance. This would give R=rS/2. The naive expectation would be R=rS (maybe a factor of two is missing somewhere: blame me!).
  3. NMP implies that information cannot be lost in the formation of a blackhole like state but tends to increase. Matter becomes totally dark, and the NE with the partonic surfaces of the external world is preserved or increases. The ingoing matter does not fall to a mass point but resides at the partonic 2-surface, which can have an arbitrarily large area. It can also have wormholes connecting different regions of a spherical surface and in this manner increase its genus. NMP, negentropy, and negentropic entanglement between heff=n×h dark matter systems would become the basic notions instead of the second law and entropy.
  4. There is now a popular article explaining the intuitive picture behind Hawking's proposal. The blackhole horizon would involve a tangential flow of light, and particles of the infalling matter would induce supertranslations on the pattern of this light, thus coding information about their properties to this light. After that this light would be radiated away as an analog of Hawking radiation and would carry out this information.

    The objection would be that in GRT the horizon is in no way special - it is just a coordinate singularity. The curvature tensor does not diverge either, and the Einstein tensor and Ricci scalar vanish. This argument has been used in the firewall debates to claim that nothing special should occur as the horizon is traversed. So: why would light rotate around it? There is no reason for this!

    The answer in TGD would be obvious: for the TGD analog of a blackhole the horizon is replaced with a light-like 3-surface at which the induced metric becomes Euclidian. The horizon becomes analogous to a light front carrying not only photons but all kinds of elementary particles. Particles do not fall inside this surface but remain at it!

    The objection now is that the photons of a light front should propagate in the direction normal to it, not parallel. The point is however that this light-like 3-surface is the surface at which the induced 4-metric becomes degenerate: hence massless particles can live on it.

  5. The replacement of the second law with NMP leads one to ask whether a generalization of blackhole thermodynamics makes sense. Since blackhole thermodynamics characterizes Hawking radiation, the generalization could make sense at least if there exists an analog of Hawking radiation. Note that also the geometric variant of the second law makes sense.

    Could the analog of Hawking radiation be generated in the first state function reduction to the opposite boundary, and be perhaps assigned with the sudden increase of the radius of the partonic 2-surface defining the horizon? Could this burst of energy release the energy compensating for the generation of gravitational binding energy? This burst would however have a totally different interpretation: even gamma ray bursts from quasars could be considered as candidates for it, and the temperature would be totally different from the extremely low general relativistic Hawking temperature of order

    TGR = hbar/(8π GM) ,

    which corresponds to the energy of a photon with wavelength equal to 4π times the Schwarzschild radius. For the Sun, with Schwarzschild radius rS = 2GM = 3 km, one has TGR = 3.2×10^-11 eV.
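
A quick numerical check with standard constants, following the text's identification of TGR with the energy of a photon of wavelength 4π rS:

```python
# Energy of a photon with wavelength 4*pi*r_S for the Sun.
import math

h_c_eV_m = 1.2398e-6      # h*c in eV*m
r_S = 3.0e3               # Schwarzschild radius of the Sun in m

E = h_c_eV_m / (4 * math.pi * r_S)
print(f"E ~ {E:.1e} eV")  # ~3.3e-11 eV, the value quoted above
```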

One can of course have fun with formulas to see whether the generalization assuming the replacement h→hgr could make sense physically. Also the replacement rS→R, where R is the real radius of the star, will be made.
  1. The blackhole temperature can be formally identified in terms of the surface gravity:

    T = (hgr/hbar) × GM/(2π R^2) = (hgr/h) × (rS^2/R^2) × TGR = [1/(4π v0)] × (rS^2/R^2) .

    For the Sun, with radius R = 6.96×10^5 km, one has T/m = 3.2×10^-11, giving T = 3×10^-2 eV for the proton. This is 9 orders of magnitude higher than the ordinary Hawking temperature. Amazingly, this temperature equals room temperature! Is this a mere accident? If one takes seriously TGD inspired quantum biology, in which quantum gravity plays a key role (see this), this does not seem to be the case. Note that for the electron the temperature would correspond to an energy 3/2×10^-5 eV, which corresponds to a 4.5 GHz frequency for the ordinary Planck constant. (A numerical check is sketched after this list.)

    It must however be made clear that the value of v0 for dark matter could differ from that deduced by assuming that the entire gravitational mass is dark. For M→MD=kM and v0→k^1/2 v0 the orbital radii remain unchanged, but the velocity of a dark matter object at the orbit scales to k^1/2 v0. This kind of scaling is suggested by the fact that the value of hgr seems to be too large as compared with the identification of biophotons as decay products of dark photons with heff=hgr (some arguments suggest the value k ≈ 2×10^-4).

    Note that for the radius R = rS/[2(π v0)^1/2] the thermal energy equals the rest mass of the particle, and exceeds it for smaller radii. For neutron stars this limit might be reached.

  2. Blackhole entropy

    SGR = A/(4 hbar G) = 4π GM^2/hbar = 4π (M^2/MPl^2)

    would be replaced with the negentropy for dark matter, making sense also for systems containing both dark and ordinary matter. The negentropy N(m) associated with flux tubes of a given type would be a fraction h/hgr of the total area of the horizon in units of the Planck area:

    N(m) = (h/hgr) × A/(4 hbar G) = (h/hgr) × (R^2/rS^2) × SGR = v0 × (M/m) × (R^2/rS^2) .

    The dependence on m makes sense, since a given flux tube type, characterized by the mass m determining the corresponding value of hgr, has its own negentropy, and the total negentropy is the sum over the particle species. The negentropy of the Sun is numerically much smaller than the corresponding blackhole entropy.

  3. The horizon area is proportional to (GM/v0)^2 ∝ heff^2 and should increase in discrete jumps given by integer scalings, being proportional to n^2.
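
The room temperature coincidence in item 1 is easy to verify numerically; the ratio T/m = 3.2×10^-11 is taken from the text, the rest are standard constants.

```python
# T/m applied to the proton and the electron, compared with the thermal
# energy k_B*T at room temperature.
T_over_m = 3.2e-11           # ratio quoted in item 1
m_p_eV = 9.38e8              # proton rest energy in eV
m_e_eV = 5.11e5              # electron rest energy in eV

print(f"proton:   T ~ {T_over_m * m_p_eV:.1e} eV")   # ~3e-2 eV
print(f"electron: T ~ {T_over_m * m_e_eV:.1e} eV")   # ~1.6e-5 eV
print(f"k_B * 300 K ~ {8.617e-5 * 300:.1e} eV")      # ~2.6e-2 eV: room temperature
```
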
How does the analog of a blackhole evolve in time? The evolution consists of sequences of repeated state function reductions at the passive boundary of the CD, followed by the first reduction to the opposite boundary of the CD, followed by a similar sequence. These sequences are analogs of unitary time evolutions. This defines the analog of a blackhole state as a repeatedly re-incarnating conscious entity having a CD whose size increases gradually. During a given sequence of state function reductions the passive boundary has constant size. About the active boundary one cannot say this, since it corresponds to a superposition of quantum states.

The reduction sequences consist of life cycles at a fixed boundary, and the size of the blackhole like state - as of any state - is expected to increase in discrete steps if it participates in the cosmic expansion in an average sense. This requires that the mass of the blackhole like object gradually increases. The interpretation is that ordinary matter gradually transforms to dark matter and increases the dark mass M = R/G.

Cosmic expansion is not observed for the sizes of individual astrophysical objects, which only co-move. The solution of the paradox is that they suddenly increase their size in state function reductions. This hypothesis makes it possible to realize the Expanding Earth hypothesis in the TGD framework (see this). Number theoretically preferred scalings of the blackhole radius come as powers of 2, and this would be the scaling associated with the Expanding Earth hypothesis.

See the chapter Criticality and dark matter or the article TGD view about black holes and Hawking radiation.



About negentropic entanglement as an analog of error correcting code

In classical computation, the simplest manner to control errors is to take several copies of the bit sequences. In the quantum case the no-cloning theorem prevents this. Error correcting codes code n information qubits into the entanglement of N>n physical qubits. Additional constraints represent the space of the n qubits as a lower-dimensional sub-space of the N qubits. This redundant representation is analogous to the use of parity bits. The failure of a constraint to be satisfied tells that an error is present and also the character of the error. This makes possible the automatic correction of the error if it is simple enough - such as the change of the phase of a spin state or a spin flip.

Negentropic entanglement (NE) obviously gives rise to a strong reduction in the number of states of the tensor product. Consider a system consisting of two entangled systems consisting of N1 and N2 spins. Without any constraints the number of states in the state basis is 2^N1 × 2^N2, and one has N1+N2 qubits. The elements of the entanglement matrix can be written as EA,B with A = ⊗i=1..N1 (mi,si) and B = ⊗k=1..N2 (mk,sk), in order to make the tensor product structure manifest. For simplicity one can consider the situation N1=N2=N.

The un-normalized general entanglement matrix is parametrized by 2×2^2N independent real numbers, with each spin contributing two degrees of freedom. A unitary entanglement matrix is characterized by 2^2N real numbers. One might perhaps say that one has 2^2N real bits instead of almost 2^(2N+1) real qubits. If the time evolution according to ZEO respects the negentropic character of the entanglement, the sources of errors are reduced dramatically.
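
The parameter counting is elementary; here is a small sketch with an arbitrary example value N=3:

```python
# A general complex 2^N x 2^N matrix has 2*(2^N)^2 = 2^(2N+1) real
# parameters; a unitary one has (2^N)^2 = 2^(2N).
N = 3
d = 2**N
print(f"general: {2 * d * d} real parameters, unitary: {d * d}")
# -> general: 128 real parameters, unitary: 64
```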

The challenge is to understand what kind of errors NE eliminates and how the information bits are coded by it. NE is respected if the errors act as unitary automorphisms E → UEU† of the unitary entanglement matrix. One can consider two interpretations.

  1. The unitary automorphisms leave the information content unaffected only if they commute with E. In this case the unitary automorphisms acting non-trivially would give rise to genuine errors, and an error correction mechanism would be needed and would be coded into the quantum computer program.
  2. One can also consider the possibility that the unitary automorphisms do not affect the information content, so that the diagonal form of the entanglement matrix, coded by N phases, would carry the information. Clearly, the unitary automorphisms would act like gauge transformations. Nature would take care that no errors emerge. Of course, more dramatic things are in principle allowed by NMP: for instance, the unitary entanglement matrix could reduce to a tensor product of several unitary matrices. Negentropy could be transferred from the system, and is indeed transferred as the computation halts.

    By number theoretic universality the diagonalized entanglement matrix would be parametrized by N roots of unity, each having n possible values, so that n^N different NEs would be obtained and the information storage capacity would be I = [log(n)/log(2)]×N bits; for n=2^k one would have k×N bits (see the counting sketch after this list). Powers of two for n are favored. If the overall phase of E does not matter, as one expects, the number of full bits is k×N-1. Clearly, the option for which only the eigenvalues of E matter looks the more attractive realization of entanglement matrices.

    In fact, Fermat polygons, for which the cosine and sine of the angle defining the polygon are expressible by iterating square roots besides basic arithmetic operations on rationals (ruler and compass construction geometrically), correspond to integers which are products of a power of two and of distinct Fermat primes Fn = 2^(2^n)+1.
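
A counting sketch for the information capacity estimate above; N=8 phases and n=4=2^2 roots of unity are arbitrary example values:

```python
# N eigenvalue phases, each an n-th root of unity, give n**N distinguishable
# diagonal entanglement matrices, i.e. N*log2(n) = k*N bits for n = 2^k
# (one bit less if the overall phase is unobservable).
import math

N, n = 8, 4
states = n**N
bits = N * math.log2(n)
print(f"{states} NEs ~ {bits:.0f} bits ({bits - 1:.0f} without the overall phase)")
```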

This picture can be related to a much bigger picture.
  1. In the TGD framework number theoretical universality requires discretization in terms of an algebraic extension of rationals. This is not performed at the space-time level but for the parameters characterizing space-time surfaces at the level of WCW. Strong form of holography is also essential and makes it possible to consider partonic 2-surfaces and string world sheets as the basic objects. Number theoretical universality (adelic physics) forces a discretization of phases, and the number theoretically allowed phases are roots of unity defined by some algebraic extension of rationals. The discretization can also be interpreted in terms of finite measurement resolution. Notice that the condition that roots of unity are in question realizes finite measurement resolution in the sense that errors have a minimum size and are thus detectable.
  2. The hierarchy of quantum criticalities corresponds to a fractal inclusion hierarchy of isomorphic sub-algebras of the super-symplectic algebra acting as conformal gauge symmetries. The generators in the complement of this algebra can act as dynamical symmetries affecting the physical states. An infinite hierarchy of gauge symmetry breakings is the outcome, and the weakening of the measurement resolution would correspond to a reduction in the size of the broken gauge group. The hierarchy of quantum criticalities is accompanied by the hierarchy of measurement resolutions and the hierarchy of effective Planck constants heff=n×h.
  3. These hierarchies are argued to correspond to the hierarchy of inclusions for hyperfinite factors of type II1 labelled by quantum phases and quantum groups. Inclusion defines finite measurement resolution, since the included sub-algebra does not induce observable effects on the state. By the McKay correspondence the hierarchy of inclusions is accompanied by a hierarchy of simply laced Lie groups, which get bigger as one climbs up in the hierarchy. Their interpretation as genuine gauge groups does not make sense, since their sizes should be reduced. An attractive possibility is that these groups are factor groups G/H such that the subgroup H (necessarily a normal subgroup) is the gauge group and indeed gets smaller, while G/H is the dynamical group identifiable as a simply laced group which gets bigger. This would require that both G and H are infinite-dimensional groups. An interesting question is how they relate to the super-symplectic group assignable to the "light-cone boundary" δM4±×CP2. I have proposed this interpretation in the context of WCW geometry earlier.
  4. Here I have spoken only about dynamical symmetries defined by discrete subgroups of simply laced groups. I have earlier considered the possibility that discrete symmetries provide a description of finite resolution, which would be equivalent with the quantum group description.
Summarizing, these arguments boil down to the conjecture that discrete subgroups of these groups act as effective symmetry groups of entanglement matrices and realize finite quantum measurement resolution. A very deep connection between quantum information theory and these hierarchies would exist.

Gauge invariance has turned out to be a fundamental symmetry principle, and one can ask whether unitary entanglement matrices, assuming that only the eigenvalues matter, could give rise to a simulation of discrete gauge theories. Could the reduction of the information to that provided by the diagonal form be interpreted as an analog of gauge invariance?

  1. The hierarchy of inclusions of hyper-finite factors of type II1 suggests strongly a hierarchy of effective gauge invariances characterizing measurement resolution, realized in terms of a hierarchy of normal subgroups and dynamical symmetries realized as coset groups G/H. Could these effective gauge symmetries make it possible to realize unitary entanglement matrices invariant under these symmetries?
  2. A natural parametrization for single qubit errors is as rotations of the qubit. If the error acts as a rotation on all qubits, the rotational invariance of the entanglement matrix defining the analog of the S-matrix is enough to eliminate the effect on information processing.

    Quaternionic unitary transformations act on qubits as unitary rotations. Could one assume that complex numbers as the coefficient field of quantum mechanics are effectively replaced with quaternions? If so, the multiplication of states by a unit quaternion would leave the physics and information content invariant, just like multiplication by a complex phase leaves it invariant in standard quantum theory.

    One could consider the possibility that quaternions act as a discretized version of a local gauge symmetry affecting the information qubits and thus reducing further their number and thus also the errors. This requires the introduction of the analog of a gauge potential and the coding of quantum information in terms of SU(2) gauge invariants. In a discrete situation the gauge potential would be replaced with non-integrable phase factors along the links of a lattice, as in lattice gauge theory. In the TGD framework the links would correspond to the fermionic strings connecting partonic 2-surfaces carrying the fundamental fermions at the string ends as point like particles. Fermionic entanglement is indeed between the ends of these strings.

  3. Since entanglement is multilocal and quantum groups accompany the inclusion, one cannot avoid the question whether Yangian symmetry crucial for the formulation of quantum TGD (see this) could be involved.
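
The claim of item 2 - that only the eigenvalues of the entanglement matrix matter, so that a common rotation of all qubits has no effect - can be checked numerically. The following minimal sketch (my own illustration, not from the text; it assumes only Python with numpy) conjugates a random unitary two-qubit entanglement matrix by the same single-qubit rotation on both qubits and verifies that the spectrum is unchanged:

  import numpy as np

  def random_unitary(n, rng):
      # QR decomposition of a complex Gaussian matrix gives a random unitary
      z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
      q, r = np.linalg.qr(z)
      d = np.diag(r)
      return q @ np.diag(d / np.abs(d))

  rng = np.random.default_rng(0)
  U = random_unitary(2, rng)          # the same "error" rotation on each qubit
  E = random_unitary(4, rng)          # unitary entanglement matrix of two qubits
  V = np.kron(U, U)                   # collective rotation of the qubit pair
  before = np.sort_complex(np.linalg.eigvals(E))
  after = np.sort_complex(np.linalg.eigvals(V @ E @ V.conj().T))
  print(np.allclose(before, after))   # True: the spectrum is rotation invariant
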
For details see the chapter Negentropy Maximization Principle or the article Quantum Measurement and Quantum Computation in TGD Universe.



Quantum measurement and quantum computation in TGD Universe

Over the years I have been thinking about how quantum computation could be carried out in TGD Universe (see this). There are considerable deviations from the standard view. Zero Energy Ontology (ZEO), the weak form of NMP dictating the dynamics of state function reduction, negentropic entanglement (NE), and the hierarchy of Planck constants define the basic differences between TGD based and standard quantum measurement theory. TGD suggests also the importance of topological quantum computation (TQC) like processes, with braids represented as magnetic flux tubes and strings along them.

The natural question that popped into my mind was how NMP and Zero Energy Ontology (ZEO) could affect the existing view about TQC. The outcome was a more precise view about TQC. The basic observation is that the phase transition to the dark matter phase reduces dramatically the noise affecting quantum bits. This together with the robustness of braiding as a TQC program raises excellent hopes about TQC in TGD Universe. The restriction to negentropic space-like entanglement (NE) defined by a unitary matrix is something new but does not seem to have any fatal consequences, as the study of Shor's algorithm shows.

NMP strongly suggests that when a pair of systems - the ends of a braid - suffers state function reduction, the NE must somehow be transferred from the system. How? The model for quantum teleportation allows one to identify a possible mechanism for achieving this. This mechanism could be a fundamental mechanism of information transfer also in living matter, and phosphorylation could represent the transfer of NE according to this mechanism: the transfer of metabolic energy would be at a deeper level a transfer of negentropy. Quantum measurements could actually be seen as transfer of negentropy at a deeper level.

For details see the chapter Negentropy Maximization Principle or the article Quantum Measurement and Quantum Computation in TGD Universe.



Can bacteria induce superfluidity?

Claims about strange experimental findings providing support for TGD have started to accumulate at an accelerating pace. During about a week I have learned about four anomalies! The identification of dark matter as heff phases is the common denominator of the explanations of these findings.

  1. First I learned about the 2 TeV bump at LHC providing evidence for MG,79 hadron physics (I had realized that it might show itself at LHC only a few weeks earlier!).
  2. Then emerged the finding that the knockdown of genes need not affect gene expression, providing support for the vision that dark analogs of basic bio-molecules identifiable in terms of dark proton states are behind biochemistry, which serves only as a shadow of the deeper quantum biology.
  3. Two days ago I learned about the discoveries about Pluto made by the New Horizons space probe, which have an explanation in terms of the same model that justifies the Expanding Earth hypothesis in TGD framework, explaining among other things the mysteries of the Cambrian explosion in biology.
  4. Today I learned from Nature News that a team of Auradou and collaborators reports in the article "Turning Bacteria Suspensions into Superfluids", published in Phys Rev Letters, that bacteria swimming in a fluid not only reduce its viscosity associated with shear stress (the viscous force parallel to the surface) but make it behave in a superfluid-like manner above a critical concentration of bacteria.

As the number of bacteria (E. coli) was increased, the viscosity associated with shear stress (the viscous force parallel to the surface) dropped: this is in accordance with theoretical expectations. After adding about 6 billion cells (the fluid volume is not mentioned, but it seems that the effect occurs above a critical density of bacteria), the apparent viscosity dropped to zero - or more precisely, below the experimental resolution. The superfluid-like behavior was preserved above the critical concentration. What is important is that this did not happen for dead bacteria: the bacteria play an active role in the reduction of viscosity.

Researchers are not able to identify the mechanism leading to the superfluid-like behavior, but some kind of collective effect is believed to be in question. The findings suggest that the flagella - kind of spinning hairs used by the bacteria to propel themselves - should play an essential part in the phenomenon. As bacteria swim, they fight against the current, decreasing the local forces between molecules that determine the fluid's viscosity. Above the critical density the local effects would somehow become global.

Cates et al have proposed this kind of phenomenon: see the article "Shearing Active Gels Close to the Isotropic-Nematic Transition". The authors speak in the abstract about zero apparent viscosity.

  1. The title of the article of Cates et al tells that the phenomenon occurs near the isotropic-nematic transition. A nematic is defined as a liquid crystal for which the molecules are thread-like and parallel. I dare to guess that in the present case the approximately parallel flagella would be modelled as a liquid crystal like 2-D phase at the surface of the bacterium. In the isotropic phase the orientations of the flagella would be uncorrelated, and long range orientational correlations would emerge in the phase transition to the nematic phase.
  2. Also the notions of contractile and extensile gels are introduced. Contraction and extension of gels are thought to occur through molecular motors. The transformation of the fluid to an apparent superfluid would require metabolic energy to run the molecular motors, so that ordinary superfluidity would not be in question.
  3. The model predicts a divergence of viscosity for contractile gels. For extensile gels a zero of apparent viscosity is predicted. There is a hydrodynamical argument for how this would occur, but I did not understand it. The active behavior of the bacteria would mean that the gel like surface phase (nematic liquid crystal) formed by the flagella extends to reduce viscosity. If I have understood correctly, this applies only to the behavior of a single bacterium and is about the reduction of viscosity in the immediate vicinity of the cell.
My deep ignorance about rheology gives me the freedom to speculate freely about the situation in TGD framework.
  1. In TGD inspired biology the gel phase corresponds to a phase which involves flux tube connections between basic units. The flux tubes contain dark matter with a non-standard value heff=n× h of Planck constant. The heff changing phase transitions scaling the lengths of the flux tubes in proportion to heff are responsible for the contractions and extensions of the gel.

    The extension of the gel should lead to a reduction of viscosity since one expects that dissipative effects are reduced as heff increases and quantum coherence is established in longer scales. Large heff phases are associated with criticality. Now the criticality would be associated with the isotropic-nematic phase transition. The parallelization of the flagella would be due to the quantum coherence assignable to the flagella.

    Note that the mechanism used by bacteria to control the liquid flow would be different since now molecular motors are replaced by heff changing phase transitions playing a key role in TGD inspired view about biochemistry. For instance, reacting biomolecules find each other by an heff reducing phase transition contracting the flux tubes connecting them.

  2. This model does not yet explain the reduction of the apparent viscosity to zero in the entire fluid above a critical density of bacteria. What could happen could be analogous to the emergence of high Tc superconductivity according to TGD. Below the pseudogap temperature the emergence of magnetic flux tube pairs makes superconductivity possible in short scales. At the critical temperature a phase transition occurs in which the flux tubes reconnect to form larger thermodynamically stable networks. One can speak about quantum percolation.

    The reduction of viscosity for a single bacterium could be based on the phase transition of liquid molecules to dark molecules flowing along the resulting flux tubes with very small friction (large heff), but only below a certain scale smaller than the typical distance between bacteria. This would be the analog of what happens below the pseudogap. Above the critical density the magnetic flux tubes associated with the bacteria would reconnect and form a network of connected flux tube paths at scales longer than the inter-bacterial distances. This would be the counterpart of the emergence of superconductivity by percolation in long scales, as the toy sketch below illustrates.
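
The percolation analogy can be made concrete with a generic toy model. The sketch below (my own illustration using only numpy; ordinary site percolation on a square lattice, not a model of flux tube dynamics) shows how a system-spanning connected cluster appears abruptly once the density of connections exceeds a critical value, here near p≅ 0.59:

  import numpy as np

  def spans(p, n=64, seed=1):
      # Square-lattice site percolation: do "open" sites form a cluster
      # connecting the left edge to the right edge?
      rng = np.random.default_rng(seed)
      open_site = rng.random((n, n)) < p
      front = [(i, 0) for i in range(n) if open_site[i, 0]]
      seen = set(front)
      while front:
          i, j = front.pop()
          if j == n - 1:
              return True
          for a, b in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
              if 0 <= a < n and 0 <= b < n and open_site[a, b] and (a, b) not in seen:
                  seen.add((a, b))
                  front.append((a, b))
      return False

  # Below the threshold no spanning cluster, above it one appears abruptly:
  print(spans(0.45), spans(0.75))   # typically: False True
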

See the chapter Criticality and dark matter.



Topological order in Quantum TGD

Topological order is a rather advanced concept of condensed matter physics. There are several motivations for the notion of topological order in TGD.

  1. TGD can be seen as almost topological QFT. 3-D surfaces are by holography equivalent with 4-D space-time surfaces and by strong form of holography equivalent with string world sheets and partonic 2-surfaces. What makes this duality possible is the super-symplectic symmetry realizing strong form of holography and quantum criticality realized in terms of the hierarchy of Planck constants characterizing a hierarchy of phases of ordinary matter identified as dark matter. This hierarchy is accompanied by a fractal hierarchy of sub-algebras of the super-symplectic algebra isomorphic to the entire algebra: Wheeler would talk about symmetry breaking without symmetry breaking.
  2. The heff=n× h hierarchy corresponds to an n-fold singular covering of the space-time surface for which the sheets of the covering coincide at the boundaries of the causal diamond (CD), and the n sheets together with superconformal invariance give rise to n additional discrete topological degrees of freedom - one has particles in a space with n points. Kähler action for preferred extremals reduces to Abelian Chern-Simons terms characterizing topological QFT. Furthermore, the simplest example of topological order - point like particles, which can be connected by links - translates immediately to collections of partonic 2-surfaces and strings connecting them.
  3. There is also the braiding of fermion lines/magnetic flux tubes, the Yangian product and co-product defining the fundamental vertices, and quantum groups associated with finite measurement resolution and described in terms of inclusions of hyper-finite factors.

In the article Topological order and Quantum TGD topological order and its category theoretical description are considered from TGD point of view - category theoretical notions are indeed very natural in TGD framework. The basic finding is that the concepts developed in condensed matter physics (topological order, rough description of states as tangles (graphs imbedded in 3-D space), ground state degeneracy, surface states protected by symmetry or topology) fit very nicely into TGD framework and have an interpretation in terms of the new space-time concept. This promises applications also in the conventional areas of condensed matter physics, such as a more precise description of the solid, liquid, and gas phases.

See the chapter Criticality and dark matter or the article Topological order in Quantum TGD.



Deconstruction and reconstruction in quantum physics and conscious experience

Deconstruction means roughly putting something into pieces. Often deconstruction is thought to involve also reconstruction. This process is applied in deconstructivist architecture, as one can learn from Wikipedia, and also cubism brings to mind this kind of approach. Reconstruction organizes typical features of existing styles in a new - one might even say "crazy" - manner. There can even be a kind of "social interaction" between buildings: as if they were communicating by exchanging features. A similar recombination of elements from various styles has appeared also in music - neoclassicism comes to mind immediately.

Postmodernism is a closely related movement and claims that truths are social constructs: great narratives are dead. Nothing could irritate more the physicist who has learned how many mistakes, wrong tracks, and how much hard work are needed to distill the truth! Everything does not go! On the other hand, one can argue that the recent state of stagnation at the frontier of theoretical physics suggests that postmodernists are right. Superstrings and multiverse are definitely highly social constructs: superstrings were the only game in town for decades, but now the American Mathematical Society is worried that superstring theoreticians are spoiling the public image of science. Multiverse was in fashion only a few years. Certainly one great narrative - the story of reductionism and materialism, thought to find its final culmination in M-theory - is dead. It is however nonsense to claim that all great narratives are dead. That telling alternative great narratives in respected journals is impossible does not mean that they are dead!

But the association of deconstruction with postmodernism does not justify throwing away the ideas of deconstruction and reconstruction. Rather, one can ask whether they could be made part of a new great narrative about the physical world and consciousness.

1. Deconstruction and reconstruction in perception, condensed matter physics and in TGD inspired theory of consciousness

Deconstruction and reconstruction appear in the construction of percepts, in condensed matter physics, and are also part of TGD inspired theory of consciousness.

1.1 Perception

The very idea of deconstruction in architectural sense is highly interesting from the perspective of both quantum physics and consciousness.

I was astonished when I learned about 35 years ago that the buildup of our perception involves very concretely what I would now call deconstruction and reconstruction, and I could not understand why this is the case. First the sensory input is decomposed into features: edges, corners, positions, motions analyzed into direction and velocity, colors,... Objects are replaced with collections of attributes: position, motion, shape, surface texture, color,... Deconstruction occurs at the lower cortical layers. After this reconstruction takes place: various kinds of features are combined together through a mysterious looking process of binding - and the outcome is a percept.

Reconstruction can occur also in a "wrong" manner. This occurs in hallucinations, delusions, and dreams. Humour is based on the association of "wrong" things together, making intentional category errors. Synesthesia involves association between different sensory modalities: a note with a given pitch has a characteristic color, or numbers correspond to colors or shapes. I remember an article telling how subject persons in hypnosis can experience what a circle with four corners looks like. Some attribute can be lacking from the reconstruction: a person can perceive the car as an object but not its motion. The car is there now and a moment later it is here. Nothing in between.

Also non-standard reconstructions are possible. Could these non-standard reconstructions define a key aspect of creativity? Could reconstruction in some lucky situations create a new idea rather than a hallucination or delusion?

A few years ago I listened to a radio documentary about a professional who builds soundscapes for movies and learned that the construction of a soundscape is deconstruction followed by reconstruction. One starts from natural sounds, but as such they are not very impressive: driving over someone by car does not create any dramatic sound effect - just "splat" - nothing else. This is so non-dramatic that it has been used to create black humour. In order to cure the situation the real sounds are analyzed into features and then reconstructed by amplifying some features and throwing away the unessential ones. The fictive output sounds much more real than the real input. Actors are masters of this technique, and this is why videos of ordinary people doing something funny look like watching autistic ghosts. And if you look at the collection of sound modules of a video game, you see modules with names like "Aargh", "Auch", "Bangggg", etc.

Association is the neuroscientist's key notion allowing one to get an idea about what happens in reconstruction. Reconstruction involves the association of various features to form percepts. First this process occurs for the various sensory modalities separately. These intermediate sensory percepts are then combined into a full percept in the association regions.

But what are associations at a deeper level? What are features? A heretic could ask whether they could correspond to conscious experiences - not conscious to us but conscious at a lower sub-conscious level. The reader perhaps noticed that deconstruction and reconstruction took place here: the student is not supposed to ask this question, since most theories of consciousness for some funny reason - maybe a pure accident - make the assumption that consciousness has no structure - no selves with subselves with sub-selves with... For a physicist this kind of deconstruction of consciousness is very natural. How do these features bind to our conscious percepts? Neuroscience alone cannot tell much about this since it is based on physicalism: the "hard problem" articulates this dead end.

The following considerations represent deconstructions and reconstructions, and I will not explicitly mention when this happens - just a warning.

1.2 Condensed matter physics

One must bring in some basic notions of quantum theory if one wants to reduce de- and reconstruction to quantum physics. The key mathematical fact is that in quantum theory each particle in a many-particle state corresponds to a tensor factor of the state space of the entire system. This notion is very difficult to explain without actually having a lecture series about quantum theory, and I prove in the following that this is indeed the case.

  1. The space of quantum states of a system is the basic notion: technically it is known as Hilbert space, which can have finite or infinite dimension (and the latter in many senses!).

    The basic idea is that one can build bigger Hilbert spaces as tensor products. If you have Hilbert spaces of dimensions n1 and n2, the tensor product has dimension n1× n2. This is algebraically like multiplying numbers, and one can indeed identify prime Hilbert spaces as those with prime dimension. Also direct sums of Hilbert spaces are possible (see the sketch after this list).

    Hilbert spaces represent physical systems: say electron and proton. To describe a world consisting of proton and electron one forms the tensor product of electron and proton Hilbert spaces. This is somewhat like playing with legos.

    I was cheating a little bit. Life is not quite so simple. One can also form bound states of two systems - say hydrogen atom from proton and electron - and the bound states of hydrogen atom represent only a sub-space of the tensor product. Connes tensor product is a more exotic example: only a certain kind of entangled states in the tensor product, for which the composites are strongly correlated, are allowed. As a matter of fact, also gluing the legos together creates strong correlations between them, so that it serves as a good analogy for the Connes tensor product and the tensor product assignable to bound states.

  2. Even elementary particles have several degrees of freedom - say spin and charge - to which one can assign Hilbert spaces, the full space decomposing formally into a tensor product of the Hilbert spaces associated with these degrees of freedom. A sub-space of the full tensor product is allowed, and one can purely formally say that an elementary particle is a bound state of even more elementary particles. Somewhat like a written word having meaning to us consists of letters, which as such represent nothing to us (but could represent something to lower level conscious entities). Could it be possible to apply deconstruction to elementary particles?
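
As a minimal numerical illustration of the tensor product idea of item 1 (a sketch of completely standard quantum mechanics using numpy, nothing TGD specific), the dimensions of the factors indeed multiply:

  import numpy as np

  psi1 = np.array([1.0, 0.0])            # state in a 2-dimensional factor
  psi2 = np.array([0.0, 1.0, 0.0])       # state in a 3-dimensional factor
  pair = np.kron(psi1, psi2)             # unentangled state of the composite
  print(pair.shape)                      # (6,): dimensions multiply, 2 x 3 = 6
  # A generic state of the composite lives in the full 6-dimensional tensor
  # product and need not factorize: this is where entanglement and bound
  # states reside.
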

Now comes the surprise: condensed matter physicists discovered deconstruction a long time ago! The condensed matter electron can be deconstructed under some circumstances.

  1. Electron in the valence band of a conductor has three kinds of degrees of freedom labelled by spin, charge, and orbital state - the state of the electron in the atom - characterizing the valence band. The state of the electron decomposes in a purely formal sense into a bound state of spinon, chargon, and holon carrying the spin, the charge, and the phase of the electron wave function. Could one deconstruct this bound state into its composites? If so, one would have effectively three particles - three quantum waves moving with different velocities. For free electrons obeying the Dirac equation this is certainly impossible. But this might be (and is!) possible in condensed matter.

    Instead of a single wave motion there can be three free wave motions occurring with different velocities (wave vectors) corresponding to spinon, chargon and holon. In popular articles this process is called "splitting" of the electron. The term is an optimal choice if the purpose is to create profound misunderstandings in the lay reader, who naturally associates splitting with a geometric process of putting a tiny ball into pieces. As already explained, it is the Hilbert space which is split into tensor factors, not the tiny ball. The correlations between factors forced by the bound state property are broken in this divorce of degrees of freedom.

  2. What condensed matter theorists propose is roughly the following. The consideration is restricted to effectively one-dimensional systems - wires. Electron has spin, charge, and orbital degrees of freedom if it is in the conduction band, delocalized, and thus shared by the atoms. Usually these degrees of freedom are bound to a single entity.

    The holy trinity of charge, spin, and orbital degrees of freedom can however be split under some circumstances prevailing in condensed matter. The phase of the spinor representing the electron can vary along the wire and defines a wave motion with some velocity/wave vector assignable with the ordinary electric current. The spin of the electron can rotate at each point, and the phase of this rotation can vary along the wire, so that one has a quantum wave moving along the wire with a velocity different from that for charge: this is a spin wave having as a classical analog the rotation of bicycle pedals. If the wire is a linear lattice of atoms, the orbital excitation can also vary along the wire, and a third quantum wave moving with its own velocity is possible. One has three particle like entities moving with different velocities (see the toy illustration after this list)! This kind of waves are certainly not possible for the solutions of the Dirac equation representing freely moving fermions, and particle physicists do not encounter them.

  3. These wave motions are different from the wave motions associated with phonons and magnons. For sound it is the periodic oscillation of the position of an atom which propagates in the sound wave. For a magnon it is a change of spin value which propagates and defines a spin 1 collective excitation. Spinon as a quasiparticle has spin 1/2, so that spinon and magnon are different things. Spinon is a formal constituent of the electron made visible by the condensed matter environment. Magnon is a collective excitation of the condensed matter system.

    Spin currents provide an example of a situation in which spin and charge currents can flow at different speeds; they are becoming important in a new technology known as spintronics. Spin currents have very low resistance, and the speculation is that they might relate to high Tc superconductivity.
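
The statement of item 2, that one wave motion is replaced by several with different velocities, can be visualized with a deliberately crude cartoon (my own toy illustration with made-up velocity values, not a model of any real wire):

  import numpy as np

  x = np.linspace(0.0, 20.0, 400)   # position along a one-dimensional "wire"
  k = 2.0                           # common wave number
  v_charge, v_spin = 1.0, 0.3       # hypothetical, unequal propagation velocities
  t = 10.0
  charge_wave = np.cos(k * (x - v_charge * t))   # phase (charge) wave
  spin_wave = np.cos(k * (x - v_spin * t))       # spin-rotation wave
  # The crests, coinciding at t = 0, have separated by (v_charge - v_spin)*t:
  print((v_charge - v_spin) * t)    # 7.0 units: one excitation has become two
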

From the articles that I have seen one might conclude that deconstruction is in practice possible only for effectively 1-dimensional systems. I do not see any obvious mathematical reason why the deconstruction could not occur also in higher-dimensional systems.

It is however true that 1-dimensional systems are very special physically. Braid statistics replaces ordinary statistics, bringing in a lot of new effects. Furthermore, 2-D integrable gauge theories allow one to model interactions as permutations of quantum numbers and lead to elegant models describing the deconstructed degrees of freedom as quantum fields in 2-D Minkowski space, with interactions reducing to 2-particle interactions describable in terms of an R-matrix satisfying the Yang-Baxter equations. It is difficult to say how much the association of deconstruction with 1-D systems is due to the fact that they are mathematically easier to handle than higher-D ones and that the machinery already exists.

The rise of superstring models was certainly to a high degree due to this technical easiness. As I tried to tell about 3-surfaces replacing strings as fundamental dynamical objects, the almost reflex-like debunking of this idea was to say that the super-conformal invariance of superstring models is lost and the theory is not calculable and does not even exist - period. It indeed took a long time to realize that super-conformal symmetry allows a huge generalization when space-time is 4-D and the imbedding space has Minkowski space as its Cartesian factor. Twistorial considerations fixed the imbedding space uniquely to M4× CP2. The lesson is clear: a theoretician should be patient and realize that theory building is much more than going to the math library and digging out the needed mathematics. Maybe colleagues are mature enough to learn this lesson some day.

1.3 TGD inspired theory of consciousness

The believer in quantum consciousness of course wonders what could be the quantum counterparts of de- and reconstruction as mechanisms of perception. It would seem that the analysis and synthesis of the sensory input deconstructs the mental image associated with it to features - perhaps simpler fundamental mental images - and reconstructs from these the percept as a mental image. What does this correspond to at the level of physics?

Before one can really answer, one must understand what the quantum physical correlates of a mental image are. How do mental images die and how are they born? What are features as mental images? What does their binding to sensory percepts mean physically?

Here I can answer only on my own behalf, and to do it I must introduce the basic notions and ideas of TGD inspired theory of consciousness. I will not go into details here because I have done this so many times, and just suggest reading some basic material about TGD inspired theory of consciousness. Suffice it to list just the basic ideas and notions.

  1. Zero energy ontology (ZEO), the closely related causal diamonds (CDs), and hierarchy of Planck constants assignable to quantum criticality are basic notions. Number theoretic vision is also central. In particular, adelic physics fusing real physics and various p-adic physics as correlates for cognition is also a basic building brick.
  2. TGD inspired theory of consciousness can be seen as a generalization of quantum measurement theory constructed to solve the basic problems of ordinary quantum measurement theory: the observer becomes "self" - a conscious entity described by physics and part of the physical system rather than an outsider. Consciousness does not cause state function reduction: consciousness is state function reduction. Consciousness is therefore not in the world but between two worlds. This resolves the basic paradox of quantum measurement theory since there are two causalities: the causality of consciousness and the causality of field equations.

    Negentropy Maximization Principle (NMP) defines the basic variational principle. The strong form of NMP states that the negentropy gain in state function reduction is maximal. The weak form of NMP leaves self free will in that self can choose also a non-maximal negentropy gain. This makes possible a universe with ethics and morals, with good defined as something which increases the negentropy resources of the Universe.

    Self hierarchy is the basic notion of TGD inspired theory of consciousness. Self experiences sub-selves as mental images. Self corresponds to a sequence of state function reductions to the same boundary of the causal diamond (CD). In standard quantum measurement theory this sequence would not change the state, but in TGD framework the state at the opposite boundary of the CD, and even that boundary itself, changes. This gives rise to the experienced flow of time, having the increase of the temporal distance between the tips of the CD as a geometric correlate. Self dies as the first reduction to the opposite boundary takes place and re-incarnates at the opposite boundary as its time reversal. Negentropy Maximization Principle forces this to occur sooner or later. The continual birth and death of mental images supports this view, if one accepts the idea about hierarchy. One can also consider a concrete identification for what the change of the arrow of time means for a mental image (see this).

  3. Magnetic bodies carrying dark matter identified as heff=n× h phases of ordinary matter define quantum correlates for selves. The magnetic body has a hierarchical onion-like structure, and it communicates with the biological body using dark photons propagating along magnetic flux tubes. EEG and its fractal generalizations make possible both communication from the biological body to the magnetic body and control of the biological body by the magnetic body. The dark matter hierarchy can be reduced to quantum criticality, and this in turn has deep roots in adelic physics. The magnetic body means an extension of the usual organism-environment pair to a triple involving the magnetic body as an intentional agent using the biological body for its purposes.
What could reconstruction mean in TGD inspired theory of consciousness?
  1. The restriction of deconstruction to the degrees of freedom of elementary particles is unnecessarily restrictive. One can consider also larger units such as molecules, cells, ... and the corresponding magnetic bodies and their representations using tensor products.
  2. Besides bound state formation, also negentropic entanglement (NE) allows states which are almost stable with respect to NMP. One can imagine two kinds of NE, which can be metastable with respect to NMP. In the first case the density matrix is a projector with n identical eigenvalues. This state can be an outcome of a state function reduction since it is an eigenstate of the universal observable defined by the density matrix.

    The density matrix has matrix elements in an algebraic extension of rationals characterizing the system in the evolutionary hierarchy. It can also happen that the eigenvalues of the density matrix (probabilities) do not belong to this extension. One can argue that since diagonalization is not possible inside the extension, also state function reduction is impossible without a phase transition extending the extension, identifiable as a kind of evolutionary step. This is assumed - at least tentatively.

    Both kinds of NEs would have a natural place in the world order. The first kind of NE would correspond to a kind of enlightened consciousness since any orthonormal state basis would define an eigenstate basis of the density matrix. Schrödinger cat would be exactly half alive and half dead, or exactly half of X and half of Y, where X and Y are any orthonormal superpositions of alive and dead. For the second kind of NE there would be a unique state basis. For instance, the cat could be 1/√2 alive and 1-1/√2 dead. The words dead and alive have meaning. This would correspond to a state of rational mind discriminating between things. If a phase transition bringing √2 into daylight takes place, state function reduction makes the cat fully alive or fully dead (see the numerical sketch after this list).

  3. In the condensed matter example the velocity of the quantal wave motion serves as a criterion allowing one to tell whether the degrees of freedom bind or not. Velocity/wave vector is obviously too limited a criterion for binding or its absence. In neuroscience the coherence of EEG is seen as a signature of binding: maybe oscillation with the same EEG frequency could serve as the signature of the fusion of mental images to a larger one. In TGD inspired theory of consciousness EEG frequencies correspond to differences of generalized Josephson frequencies, that is, sums of the Josephson frequency for the resting potential and of the difference of cyclotron frequencies for ions at different sides of the cell membrane (see this, this, and this).
  4. At the level of magnetic flux tubes binding would correspond to a reconnection of the magnetic flux tubes of a synchronously firing region to form a larger structure for which the magnetic field strength is the same for the composites, and therefore also the cyclotron frequencies are identical. Reconstruction would have a concrete geometric correlate at the level of magnetic flux tubes as reconnection. Different parts of the brain containing quantum states serving as mental images defining features would be connected by flux tubes of the magnetic body, and binding of mental images would take place.
  5. In TGD inspired quantum biology dark matter identified as large heff=n× h phases gives rise to a deconstruction if one accepts the hypothesis heff=hgr=GMm/v0, where M represents the mass of dark matter and m the particle mass (see this and this). Here hgr is assigned to a flux tube connecting the masses M and m, and v0 is a velocity parameter characterizing the system. This hypothesis implies that the dark cyclotron energy Ec=hgrfc, where fc is the cyclotron frequency, is independent of particle mass: since fc is proportional to 1/m and hgr to m, the mass cancels and a universal cyclotron energy spectrum is the outcome. The dark cyclotron photons can transform to ordinary photons identified as biophotons having an energy spectrum in the visible and UV range, where also the energy spectrum of molecules is. The magnetic body could use dark photons to control bio-chemistry.

    What makes this so remarkable is that particles with magnetic dipole moment possessing different masses correspond to different values of heff and reside at different magnetic flux tubes. This is mass spectroscopy - or deconstruction of matter by separating charged particles with different masses to their own dark worlds! Dark living matter would not be a random soup of particles: each charged particle (also neutral particles with magnetic dipole moment) would sit neatly on its own shelf labelled by hgr! In TGD inspired theory of consciousness magnetic flux tubes can be associated with magnetic bodies serving as correlates of selves, so that deconstruction for mental images would reduce to this process, with each charged particle representing one particular combination and perhaps also a quale (see this).
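
The two kinds of NE of item 2 can be compared numerically. The sketch below uses the ordinary Shannon entropy as a stand-in (the TGD based negentropy is number theoretic and differs from the Shannon entropy, so this only illustrates the two eigenvalue structures):

  import numpy as np

  def shannon_entropy(probs):
      probs = np.asarray(probs)
      return float(-np.sum(probs * np.log2(probs)))

  # First kind of NE: the density matrix is proportional to a projector;
  # any orthonormal basis is an eigenbasis and the cat is exactly half alive.
  print(shannon_entropy([0.5, 0.5]))       # 1.0 bit

  # Second kind of NE: a unique eigenbasis with irrational eigenvalues.
  p = 1.0 / np.sqrt(2.0)
  print(shannon_entropy([p, 1.0 - p]))     # about 0.87 bits
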

What about re-construction in this framework?

  1. In reconstruction flux tube connections between two subsystems representing sub-selves (experienced by self as mental images) would be formed, so that the subsystems fuse to a single system characterized by the same cyclotron frequency. The flux tube connection would be formed by the reconnection of two U-shaped flux tubes to form a single pair of flux tubes connecting the systems.

    Resonant exchange of dark cyclotron photons and also dark super-conductivity would accompany this process. This process would represent a correlate for directed attention and would take place already at the bio-molecular level. I have proposed that bio-molecules with aromatic rings, in which circulating electron pair currents generate magnetic bodies, are especially important and in some sense represent a fundamental level of the self hierarchy at the molecular level. In the brain different brain regions could connect to a single coherently firing region in this manner.

  2. The magnetic bodies associated with brain regions representing features could be connected in this manner to larger sub-selves. Negentropic quantum entanglement - a purely TGD based notion - could define a further correlate for the binding. This entanglement could take place in discrete degrees of freedom related to the hierarchy heff=n× h of Planck constants having no correlate in standard physics. The discrete degrees of freedom would correspond to the n sheets of the singular coverings representing space-time surfaces. The sheets would coincide at the ends of the causal diamonds (CDs): one possible interpretation (holography allows many of them) could be that entire closed 3-surfaces, formed by the unions of space-like 3-surfaces at the boundaries of CD and light-like 3-surfaces connecting them, serve as basic objects.
  3. Reconstruction by negentropic quantum entanglement and flux tube connections inducing resonance could also lead to non-standard composites. Synesthesia could be understood in this manner, and even the sensory experience about a circle with four corners could be understood. The binding of left and right brain visual experiences to a single one could take place through negentropic entanglement and effectively generate the experience of a third dimension. The dimensions would not however simply add: one would have a 3-D experience instead of a 4-D one. The dream of a mathematician is to perceive directly higher dimensional objects. Could sensory perception of higher than 3-D objects be possible by a reconstruction fusing several visual percepts - maybe even from different brains - together? Could higher levels of the self hierarchy carry out this kind of reconstruction? Could Mother Gaia fuse our experiences to a single experience about what it is to be a humankind, species, or biosphere?

2. Could condensed matter physics and consciousness theory have something to share?

Magnetic bodies are present in all scales, and one can ask whether consciousness theory and condensed matter physics might have something in common. Could the proposed picture of matter as consisting of selves with sub-selves with... defining analogs of quasiparticles and collective excitations make sense even at the level of condensed matter? Could deconstruction and reconstruction of mental images identifiable as sub-selves take place already at this level and have an interpretation in terms of primitive information processing building standardized primitive mental images?

Deconstruction need not be restricted to the electron, and velocity could be replaced by oscillation frequency for various fields: at the quantum level there is actually no real distinction since in quantum theory velocity defines a wave vector. Also more complex objects - atoms, molecules, etc. - could be deconstructed, and the process could occur at the level of magnetic bodies and involve in an essential manner reconnection and other "motor actions" of flux tubes. The notions of quasi-particle and collective excitation would generalize dramatically, and the general vision about the basic mechanism might help to understand this zoo of exotics.

Future condensed matter theorists might also consider the possibility of reconstruction in a new manner giving rise to analogs of synesthesia. Could features from different objects be recombined to form exotic quasi-objects having parts all around? Could dark matter in the TGD sense be involved in an essential manner? Could cyclotron resonance or its absence serve as a correlate for the binding? Note that the disjoint regions of space would be in a well-defined sense near each other in the reconstructed state. Topology would be different: effective p-adic topology could provide a natural description for the situation: in p-adic topology systems at infinite distance in the real sense can be infinitesimally close to each other p-adically.

See the chapter Criticality and dark matter or the article Deconstruction and reconstruction in quantum physics and conscious experience.



A new control mechanism of TGD inspired quantum biology

The idea that TGD Universe is quantum critical is the key idea of quantum TGD and fixes the theory more or less uniquely, since the only coupling constant parameter of the theory - Kähler coupling strength - is analogous to critical temperature. More than one basic parameter is in principle possible - maximal quantum criticality fixes the values of all of them - but it seems that only Kähler coupling strength is needed. TGD Universe is a quantum critical fractal: like a ball at the top of a hill at the top of a hill at... Quantum criticality allows one to avoid the fine tuning problems plaguing as a rule various unified theories.

Quantum criticality

The meaning of quantum criticality at the level of dynamics has become only gradually clearer. The development of several apparently independent ideas generated about a decade ago has led to the realization that quantum criticality is behind all of them. Behind quantum criticality are in turn the number theoretic vision and the strong forms of general coordinate invariance and holography.

  1. The hierarchy of Planck constants defining the hierarchy of dark phases of ordinary matter corresponds to a hierarchy of quantum criticalities assignable to a fractal hierarchy of sub-algebras of the super-symplectic algebra, for which the conformal weights are n-multiples of those for the entire algebra; n corresponds to the value of the effective Planck constant heff/h=n. These algebras are isomorphic to the full algebra and act as gauge conformal algebras, so that a broken super-conformal invariance is in question.
  2. Quantum criticality in turn reduces to the number theoretic vision about strong form of holography. String world sheets carrying fermions and partonic 2-surfaces are the basic objects as far as the pure quantum description is concerned. Also the space-time picture is needed in order to test the theory, since quantum measurements always involve also classical physics, which in TGD is an exact part of quantum theory.

    Space-time surfaces are continuations of collections of string world sheets and partonic 2-surfaces to preferred extremals of Kähler action, for which the Noether charges in the sub-algebra of the super-symplectic algebra vanish. This condition is the counterpart for the reduction of 2-D criticality to conformal invariance. This eliminates a huge number of degrees of freedom and makes the strong form of holography possible.

  3. The hierarchy of algebraic extensions of rationals defines the values of the parameters characterizing the 2-surfaces, and one obtains a number theoretical realization of an evolutionary hierarchy. One can also algebraically continue the space-time surfaces to various number fields - the reals and the algebraic extensions of p-adic number fields. Physics becomes adelic. p-Adic sectors serve as correlates for cognition and imagination. One can indeed have string world sheets and partonic 2-surfaces which can be algebraically continued to preferred extremals in the p-adic sectors by utilizing p-adic pseudo constants giving huge flexibility. If this is not possible in the real sector, a figment of imagination is in question! It can also happen that only a part of the real space-time surface can be generated: this might relate to the fact that imaginations can be seen as partially realized motor actions and sensory perceptions.

Quantum criticality and TGD inspired quantum biology

In TGD inspired quantum biology quantum criticality is in a crucial role. First some background.

  1. Quantum measurement theory as a theory of consciousness is formulated in zero energy ontology (ZEO) and defines an important aspect of quantum criticality. Strong form of NMP states that the negentropy gain in the state function reduction at either boundary of causal diamond (CD) is maximal. Weak form of NMP allows also quantum jumps for which negentropic entanglement is not generated: this makes possible ethics (good and evil) and morally responsible free will: good means basically increase of negentropy resources.
  2. Self corresponds to a sequence of state function reductions to the same boundary of CD, and heff does not change during that period. The increase of heff (and thus evolution!) tends to occur spontaneously and can be assigned to the state function reduction to the opposite boundary of CD in zero energy ontology (ZEO). The reduction to the opposite boundary means the death of self, and living matter is fighting to avoid this event. To me the only manner to make sense of the basic myth of Christianity is that the death of self generates negentropy.
  3. Metabolism provides negentropy resources for self and hopefully prevents NMP from forcing the fatal reduction to the opposite boundary of CD. Also homeostasis does the same. In this process self makes possible the evolution of sub-selves (mental images dying and re-incarnating), state function reduction by state function reduction, so that the negentropic resources of the Universe increase.

A new mechanism of quantum criticality

Consider now the mechanisms of quantum criticality. The TGD based model (see this) for the recent paradoxical looking finding (see this) that topological insulators can behave like conductors in an external magnetic field led to the discovery of a highly interesting mechanism of criticality, which could play a key role in living matter.

  1. The key observation is that a magnetic field is present. In TGD framework the obvious guess is that its flux tubes carry dark electrons giving rise to anomalous currents running in about a million times longer time scales and with a velocity which is about a million times higher than expected. Also supra-currents can be considered.

    The currents would be generated if the cyclotron energies of the electrons are such that they correspond to energies near the surface of the Fermi sphere: recall that the Fermi energy for electrons is determined by the density of conduction electrons and is about 1 eV. Interestingly, this energy is at the lower end of the bio-photon energy spectrum. In a field of 10 Tesla the cyclotron energy quantum ℏeB/me of the electron is about 1 meV, so that the integer characterizing the cyclotron orbit must be of order 103 if a conduction electron is to be transferred to the cyclotron orbit (see the numerical estimate after this list).

  2. The assumption is that the external magnetic field is realized as flux tubes of fixed radius, which correspond to space-time quanta in TGD framework. As the intensity of the magnetic field is varied, one observes the so-called de Haas-van Alphen effect used to deduce the shape of the Fermi sphere: magnetization and some other observables vary periodically as functions of 1/B.

    This can be understood in the following manner. As B increases, the cyclotron orbits contract. For certain increments of 1/B the (n+1):th orbit is contracted to the n:th orbit, so that the sets of the orbits are identical for values of 1/B appearing periodically. This causes the periodic oscillation of, say, magnetization.

  3. For some critical values of the magnetic field strength a new orbit emerges at the boundary of the flux tube. If the energy of this orbit is in the vicinity of the Fermi surface, an electron can be transferred to the new orbit. This situation is clearly quantum critical.

    If the quantum criticality hypothesis holds true, an heff/h=n dark electron phase can be generated for the critical values of the magnetic field. This would give rise to the anomalous conductivity, perhaps involving a spin current due to the spontaneous magnetization of the dark electrons at the flux tube. Even superconductivity, based on the formation of parallel flux tube pairs with either opposite or parallel directions of the magnetic flux and with the members of the Cooper pair at the parallel flux tubes, can be considered; I have proposed this as a mechanism of bio-superconductivity and also of high Tc superconductivity.
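
The order of magnitude estimate of item 1 can be reproduced with textbook constants (a sketch using only standard formulas and the Fermi energy of about 1 eV quoted above; nothing TGD specific is assumed):

  import numpy as np

  hbar = 1.054571817e-34   # J s
  e = 1.602176634e-19      # C
  m_e = 9.1093837015e-31   # kg

  B = 10.0                           # Tesla
  E_c = hbar * e * B / m_e           # cyclotron energy quantum
  E_F = 1.0 * e                      # Fermi energy of about 1 eV
  print(E_c / e * 1e3)               # about 1.16 meV
  print(E_F / E_c)                   # about 8.6e2: the orbit index is of order 10^3
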

A new mechanism of quantum criticality and bio-control

The quantum criticality of the process in which a new electron orbit emerges near the Fermi surface suggests a new mechanism of quantum bio-control by the generation of supra currents or its reversal.

  1. In TGD inspired quantum biology the magnetic body uses the biological body as a motor instrument and sensory receptor. EEG and its fractal variants - with dark photons having frequencies in the EEG range but energies E=hefff in the range of bio-photon energies - make the necessary signalling possible.
  2. Flux tubes can become braided, and this makes possible quantum computation like processes. Also so-called 2-braids - defined by knotted 2-surfaces imbedded in the 4-D space-time surface - are possible for the string world sheets defined by flux tubes idealized as infinitely thin. As a matter of fact, also genuine string world sheets accompany the flux tubes. 2-braids and knots are a purely TGD based phenomenon and not possible in superstring theory or M-theory.
  3. It is natural to speak about motor actions of the magnetic body. It is assumed that the flux tubes of the magnetic body connect biomolecules to form a kind of Indra's web explaining the gel like character of living matter. heff reducing phase transitions contract the flux tubes connecting biomolecules, so that the biomolecules can find each other by this process and bio-catalysis becomes possible. This explains the mysterious looking ability of bio-molecules to find each other in the dense molecular soup. In fact the dark matter part is far from being a soup! The hierarchy of Planck constants and the heff=hgr hypothesis imply that dark variants of various particles with magnetic moment are neatly at their own flux tubes like books on a shelf.

    Reconnection of the U-shaped flux tubes emanating from two subsystems generates a flux tube pair between them and gives rise to supra-currents flowing between them. Also cyclotron radiation propagating along the flux tubes and inducing resonant transitions is present. This would be the fundamental mechanism of attention.

  4. I have proposed that the variation of the thickness of the flux tubes could serve as a control mechanism, since it induces a variation of the cyclotron frequencies allowing the system to get into resonance or out of it (see the sketch after this list). For instance, two molecules could get into flux tube contact when the cyclotron frequencies are identical, and this can be achieved if they are able to vary their flux tube thickness. The molecules of the immune system are masters in identifying alien molecules, and the underlying mechanism could be based on the cyclotron frequency spectrum and molecular attention. This would be also the mechanism behind water memory and homeopathy (see this), which is still regarded as a taboo by mainstreamers.
  5. Finally comes the promised new mechanism of bio-control! The variation of the magnetic field induced by that of the flux tube thickness allows one also to control whether there is quantum criticality for the generation of dark supra currents of electrons. The Fermi energy of the conduction electrons at the top of the Fermi sphere is the key quantity and is dictated by the density of these electrons. This allows one to estimate the order of magnitude of the integer N characterizing the cyclotron energy for the ordinary Planck constant; the maximal value of heff/h=n cannot be larger than N.
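
The control mechanism of item 4 can be made quantitative under the assumption that the magnetic flux through the tube is conserved, so that B scales as the inverse square of the tube radius. The sketch below uses an assumed flux value and a Ca2+ ion purely for illustration:

  import numpy as np

  e = 1.602176634e-19      # C
  u = 1.66053906660e-27    # kg, atomic mass unit

  def cyclotron_frequency(q, m, B):
      # f_c = q B / (2 pi m)
      return q * B / (2.0 * np.pi * m)

  Phi = 1.0e-15            # assumed, fixed magnetic flux through the tube (Wb)
  m_ion = 40.0 * u         # a Ca2+ ion of about 40 u
  for R in (1.0e-8, 2.0e-8):            # doubling the flux tube radius
      B = Phi / (np.pi * R**2)          # conserved flux: B scales as 1/R^2
      print(R, B, cyclotron_frequency(2.0 * e, m_ion, B))
  # Doubling the radius divides B, and hence the cyclotron frequency, by four:
  # varying the thickness tunes the system into or out of resonance.
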

See the chapter Criticality and Dark Matter or the article A new control mechanism of TGD inspired quantum biology.



Does the physics of SmB6 make the fundamental dynamics of TGD directly visible?

The group of Suchitra Sebastian has discovered a very unconventional condensed matter system, which seems to be simultaneously both an insulator and a conductor of electricity in the presence of a magnetic field. The Science article is entitled "Unconventional Fermi surface in an insulating state". There is also a popular article "Paradoxical Crystal Baffles Physicists" in Quanta Magazine summarizing the findings. I learned about the finding first from the blog posting of Lubos (I want to make absolutely clear that I do not share the racist attitudes of Lubos towards Greeks: I find the discussions between Lubos and like-minded blog visitors about the situation in Greece disgusting).

Observations

The crystal studied at super-low temperatures was samarium hexaboride - briefly SmB6. The high resistance implies that electrons cannot move more than one atom's width in any direction. Sebastian et al however observed electrons traversing distances of millions of atoms - a distance of order 10-4 m, the size of a large neuron. So high a mobility is expected only in conductors. SmB6 is neither a metal nor an insulator - or is both of them! The finding is described as a "big shock" by Sebastian and as a "magnificent paradox" by condensed matter theorist Jan Zaanen. Theoreticians have started to make guesses about what might be involved, but according to Zaanen not even a remotely credible hypothesis has appeared yet.

On the basis of its electronic structure SmB6 should be a conductor of electricity, and it indeed is at room temperature: the average number of conduction electrons per SmB6 is one half. At low temperatures the situation however changes: electrons behave collectively. In superconductors resistance drops to zero as a consequence. In SmB6 just the opposite happens. Each Sm nucleus has on average 5.5 electrons bound to it at tight orbits. Below -223 degrees Celsius the conduction electrons of SmB6 are thought to "hybridize" around the samarium nuclei, so that the system becomes an insulator. Various signatures demonstrate that SmB6 indeed behaves like an insulator.

During the last five years it has been learned that SmB6 is not only an insulator but also a so-called topological insulator: the interior of SmB6 is an insulator but the surface acts as a conductor. In their experiments Sebastian et al hoped to find additional evidence for the topological insulator property and attempted to measure quantum oscillations in the electrical resistance of their crystal sample. The variation of the quantum oscillations as the sample is rotated can be used to map out the Fermi surface of the crystal. No quantum oscillations were seen. The next step was to add a magnetic field and just see whether something interesting happens and could save the project. Suddenly the expected signal was there! It was possible to detect quantum oscillations deep in the interior of the sample and map the Fermi surface! The electrons in the interior travelled a million times faster than the electrical resistance would suggest. The Fermi surface was like that in copper, silver or gold. A further surprise was that the growth of the amplitude of the quantum oscillations as temperature was decreased was very different from the prediction of the universal Lifshitz-Kosevich formula for conventional metals.

Could TGD help to understand the strange behavior of SmB6?

There are several indications that the paradoxical effect might reveal the underlying dynamics of quantum TGD. The mechanism of conduction must represent new physics, and the magnetic field must play a key role by making conductivity possible through somehow providing the "current wires". How? The TGD based answer is completely obvious: magnetic flux tubes.

One should also understand the topological insulator property at a deeper level, that is the conduction along the boundaries of the topological insulator. One should understand why the current runs along 2-D surfaces. In fact, many exotic condensed matter systems are 2-dimensional in good approximation. In the models of integer and fractional quantum Hall effect electrons form a 2-D system with braid statistics, possible only in 2-D systems. High temperature superconductivity is also an effectively 2-D phenomenon.

  1. Many-sheeted space-time is a second fundamental prediction of TGD. The dynamics of a single sheet of the many-sheeted space-time should be very simple by the strong form of holography implying effective 2-dimensionality. The standard model description of this dynamics masks the simplicity, since the sheets of the many-sheeted space-time are replaced with a single region of slightly curved Minkowski space, with the gauge potentials given by sums of the induced gauge potentials for the sheets and the deviation of the metric from the Minkowski metric by the sum of the corresponding deviations for the space-time sheets. Could the dynamics of exotic condensed matter systems give a glimpse of the dynamics of a single sheet? Could topological insulators and anyonic systems provide examples of this kind of systems?
  2. A second basic prediction of TGD is the strong form of holography: string world sheets and partonic 2-surfaces serve as a kind of "space-time genes", and the dynamics of fermions is 2-D at the fundamental level. It must however be made clear that at the QFT limit the spinor fields of the imbedding space replace these fundamental spinor fields localized at 2-surfaces. One might argue that the fundamental spinor fields do not make themselves directly visible in condensed matter physics. Nothing however prevents one from asking whether in some circumstances the fundamental level could make itself visible.

    In particular, for large heff dark matter systems (whose existence can be deduced from the quantum criticality of quantum TGD) the partonic 2-surfaces with CP2 size could be scaled up to nanoscopic and even longer size scales. I have proposed this kind of surfaces as carriers of electrons with non-standard value of heff in QHE and FQHE.

    The long range fluctuations associated with a large heff=n× h phase would be quantum fluctuations rather than thermal ones. In the case of ordinary conductivity thermal energy makes it possible for electrons to jump between atoms, and conductivity becomes very small at low temperatures. In the case of large scale quantum coherence just the opposite happens, as observed. One therefore expects that the Lifshitz-Kosevich formula for the temperature dependence of the amplitude does not hold true.

    The generalization of the Lifshitz-Kosevich formula to the quantum critical case deduced from quantum holographic correspondence by Hartnoll and Hofman might hold true qualitatively also for quantum criticality in the TGD sense, but one must be very cautious.

    The first guess is that by the underlying super-conformal invariance scaling laws typical for critical systems hold true, so that the dependence on temperature is via a power of the dimensionless parameter x=T/μ, where μ is the chemical potential for the electron system. As a matter of fact, a power of x appears, and the exponent reduces to unity for the Lifshitz-Kosevich formula. Since the magnetic field is important, one also expects that the ratio of the cyclotron energy scale Ec ∝ ℏeff eB/me to the Fermi energy appears in the formula. One can even make an order of magnitude guess heff/h ≈ 10^6 from the facts that the scale of conduction and the conduction velocity were a million times higher than expected (see the numerical sketch after this list).

    Strings are 1-D systems, and strong form of holography implies that fermionic strings connecting partonic 2-surfaces and accompanied by magnetic flux tubes are fundamental. At light-like 3-surfaces fermion lines can give rise to braids. In the TGD framework AdS/CFT correspondence generalizes since the conformal symmetries are extended. This is possible only in 4-D space-time and for the imbedding space H=M4× CP2 making it possible to generalize the twistor approach.

  3. Topological insulator property means from the perspective of modelling that the action reduces to a non-abelian Chern-Simons term. The quantum dynamics of TGD at space-time level is dictated by Kähler action. Space-time surfaces are preferred extremals of Kähler action, and for them Kähler action reduces to Chern-Simons terms associated with the ends of the space-time surface at the opposite boundaries of the causal diamond and possibly to the 3-D light-like orbits of partonic 2-surfaces. Now the Chern-Simons term is Abelian but the induced gauge fields are non-Abelian. One might say that single sheeted physics resembles that of a topological insulator.
  4. The effect appears only in a magnetic field. I have been talking a lot about magnetic flux tubes carrying dark matter identified as large heff phases: topological quantization distinguishes TGD from Maxwell's theory: any system can be said to possess a "magnetic body", whose flux tubes can serve as current wires. I have predicted the possibility of high temperature super-conductivity based on pairs of parallel magnetic flux tubes with the members of Cooper pairs at the neighboring flux tubes forming spin singlets or triplets depending on whether the fluxes have the same or opposite directions.

    Also spin and electric currents assignable to the analogs of spontaneously magnetized states at a single flux tube are possible. The obvious guess is that the conductivity in question is along the flux tubes of the external magnetic field. Could this kind of conductivity explain the strange behavior of SmB6? The critical temperature would be that below which the parallel flux tubes are stable. The interaction energy of spin with the magnetic field serves as a possible criterion for the stability if the presence of dark electrons stabilizes the flux tubes.
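
The temperature dependence referred to above can be made concrete with a minimal numerical sketch of the standard Lifshitz-Kosevich thermal damping factor R_T = X/sinh(X), X = 2π^2 kB T/Ec; the option of scaling the cyclotron energy by n = heff/h is my own illustrative assumption motivated by the text, not part of the textbook formula.

    import math

    hbar = 1.054571817e-34    # J s
    kB = 1.380649e-23         # J/K
    e = 1.602176634e-19       # C
    me = 9.1093837015e-31     # kg

    def lk_damping(T, B, n=1):
        # Lifshitz-Kosevich thermal factor R_T = X/sinh(X), X = 2*pi^2*kB*T/E_c,
        # with cyclotron energy E_c = n*hbar*e*B/me; n = heff/h is a
        # TGD-motivated assumption, n = 1 gives the standard formula.
        E_c = n * hbar * e * B / me
        X = 2 * math.pi**2 * kB * T / E_c
        return X / math.sinh(X)

    # The amplitude grows steeply as T decreases; a large n flattens the decay.
    for T in (10.0, 1.0, 0.1):
        print(T, lk_damping(T, B=1.0), lk_damping(T, B=1.0, n=10))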

The following represents an extremely childish attempt of a non-specialist to understand how the conductivity might come about. The electrons at flux tubes near the top of the Fermi surface are the current carriers. heff=n×h and magnetic flux tubes as current wires bring in the new elements. Also in the standard situation one considers cylindrically symmetric solutions of the Schrödinger equation in an external magnetic field and introduces a maximal radius for the orbits, so that formally the two situations seem to be rather near to each other. Physically the large heff and the associated many-sheeted covering of the space-time surface providing the current wire make the situation different, since the collisions of electrons could be absent in good approximation, so that the velocity of the charge carriers could be much higher than expected, as the experiments indeed demonstrate.

Quantum criticality is the crucial aspect and corresponds to the situation in which the magnetic field attains a value for which a new orbit emerges/disappears at the surface of the flux tube: in this situation a dark electron phase with a non-standard value of heff can be generated. This mechanism is expected to apply also in bio-superconductivity and to provide a general control tool for the magnetic body.

  1. Let us assume that the flux tubes cover the whole transversal area of the crystal and there is no overlap. Assume also that the total number of conduction electrons is fixed and, depending on the value of heff, is shared differently between transversal and longitudinal degrees of freedom. A large value of heff squeezes the electrons from transversal to longitudinal flux tube degrees of freedom and gives rise to conductivity.
  2. Consider first the Schrödinger equation. In the radial direction one has a harmonic oscillator and the orbits are Landau orbits. The cross sectional area behaves like πR^2 = nT heff/(2mωc), giving nT ∝ 1/heff. Increase of the Planck constant scales up the radii of the orbits so that the number of states in a cylinder of given radius is reduced.

    Angular momentum degeneracy implies that the number of transversal states is NT = nT^2 ∝ 1/heff^2. In the longitudinal direction one has free motion in a box of length L with states labelled by an integer nL. The number of states is given by the maximum value NL of nL.

  3. If the total number of states N = NLNT is fixed and thus does not depend on heff, one has NL ∝ heff^2. Quanta from transversal degrees of freedom are squeezed into longitudinal degrees of freedom, which makes conductivity possible.
  4. The conducting electrons are at the surface of the 1-D "Fermi sphere", and the number of conduction electrons is Ncond ≅ dN/dε × δε ≅ (dN/dε)T = NT/(2εF) ∝ 1/heff^4. The dependence on heff does not favor too large values of heff. On the other hand, the scattering of electrons at flux tubes could be absent. The assumption L ∝ heff increases the range over which current can flow (see the sketch following this list).
  5. To get a non-vanishing net current one must assume that only the electrons at one end of the 1-D Fermi sphere are current carriers. The situation would resemble that in a semiconductor. The direction of the electric field would induce a symmetry breaking at the level of quantum states. The situation would be like that for a mass in Earth's gravitational field treated quantally, and the electrons would accelerate freely. The Schrödinger equation would give rise to Airy functions as its solutions.
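
The scaling bookkeeping of items 1-4 can be checked with a toy computation; the numbers below are purely illustrative placeholders, not physical values.

    # With the total number of states N = NT * NL held fixed,
    # NT ~ 1/n^2 (n = heff/h) forces NL ~ n^2: transversal quanta
    # are squeezed into longitudinal degrees of freedom.
    def state_split(N_total, n, NT0=10**6):
        NT = max(1, NT0 // n**2)   # transversal states scale as 1/n^2
        NL = N_total // NT         # longitudinal states scale as n^2
        return NT, NL

    for n in (1, 10, 100, 1000):
        print(n, state_split(10**8, n))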

What about quantum oscillations in TGD framework?

  1. Quantum oscillation refers to the de Haas-van Alphen effect - an oscillation of the induced magnetic moment as a function of 1/B with period τ = 2πe/(ℏS), where S is the momentum space area of the extremal orbit of the Fermi surface in the direction of the applied field. The effect is explained to be due to the Landau quantization of the electron energy. I failed to really understand the explanation of this source, and in my humble opinion the following arguments provide a clearer view about what happens.
  2. If the external magnetic field corresponds to flux tubes, the Fermi surface decomposes into cylinders parallel to the magnetic field, since the motion in transversal degrees of freedom is along circles. In the above thought experiment also a quantization in the longitudinal direction occurs if the flux tube has finite length, so that the Fermi surface in the longitudinal direction has finite length. One expects on the basis of Uncertainty Principle that the area of the cross section in momentum space is given by S ∝ heff^2/πR^2, where πR^2 is the cross sectional area of the flux tube. This follows also from the equation of motion of an electron in a magnetic field. As the external magnetic field B is increased, the radii of the orbits decrease inside the flux tube, and in momentum space the radii increase.

  3. Why do the induced magnetic moment (magnetization) and other observables oscillate?
    1. The simplest manner to understand this is to look at the situation at the space-time level. Classical orbits are harmonic oscillator orbits in the radial degree of freedom. Suppose that the area of the flux tube is fixed and B is increased. The orbits have radius rn^2 = (n+1/2) × hbar/eB and shrink. For certain field values the flux eBA = n × hbar corresponds to an integer multiple of the elementary flux quantum - a new orbit at the boundary of the flux tube emerges if the new orbit is near the boundary of the Fermi sphere providing the electrons. This is clearly a critical situation.
    2. In the de Haas-van Alphen effect the orbit n+1 for B has the same radius as the orbit n for 1/B+Δ(1/B): rn+1(1/B) = rn(1/B+Δ(1/B)). This gives an approximate differential equation with respect to n, and one obtains (1/B)(n) = (n+1/2) × Δ(1/B). Δ(1/B) is fixed by the flux quantization condition. When the largest orbit is at the surface of the flux tube, the orbits are the same for B(n) and B(n+1), and this gives rise to the de Haas-van Alphen effect.
    3. It is not necessary to assume a finite radius for the flux tube, and the exact value of the radius of the flux tube does not play an important role. The value of the flux tube radius can be estimated from the ratio of the Fermi energy of the electron to the cyclotron energy. The Fermi energy is about .1 eV, depending in the lowest approximation only on the density of electrons and only very weakly on temperature. For a magnetic field of 1 Tesla the cyclotron energy is .1 meV. The number of cylinders defined by the orbits is therefore about n = 10^3 (see the numerical check at the end of this section).
  4. What happens in TGD Universe in which the areas of flux tubes identifiable as space-time quanta are finite? Could quantum criticality of the transition in which a new orbit emerges at the boundary of flux tube lead to a large heff dark electron phase at flux tubes giving rise to conduction?
    1. The above argument makes sense also in TGD Universe for the ordinary value of Planck constant. What about non-standard values of Planck constant? For heff/h = n the value of the flux quantum is n-fold, so that the period of the oscillation in the de Haas-van Alphen effect becomes n times shorter. The values of the magnetic field for which the orbit is at the surface of the flux tube are however critical, since a new orbit emerges, assuming that the cyclotron energy is near the Fermi energy. This quantum criticality could give rise to a phase transition generating a non-standard value of Planck constant.

      What about the period Δ(1/B) for heff/h = n? Modified flux quantization for extremal orbits implies that the area of the flux quantum is scaled up by n. The flux changes by n units for the same increment Δ(1/B) as for the ordinary Planck constant, so that the de Haas-van Alphen effect does not detect the phase transition.

    2. If the size scale of the orbits is scaled up by n^(1/2) as the semiclassical formula suggests, the number of classical orbits is reduced by a factor 1/n if the radius of the flux tube is not changed in the transition h → heff to the dark phase. The n-sheetedness of the covering however compensates for this reduction.
    3. What about possible values of heff/h? The total value of the flux seems to give the upper bound heff/h = nmax, where nmax is the value of the magnetic flux in units of the elementary flux quantum for the ordinary value of Planck constant. For an electron in a magnetic field of B = 10 Tesla one has n ≤ 10^5. This value is of the same order as the rough estimate from the length scale for which anomalous conduction occurs.
    Clearly, the mechanism leading to anomalously high conductivity might be the transformation of the flux tubes to dark ones so that they carry dark electron currents. The observed effect would be a dark, quantum critical variant of the de Haas-van Alphen effect!

    Also bio-superconductivity is a quantum critical phenomenon, and this observation suggests a sharpening of the existing TGD based model of bio-super-conductivity. Super-conductivity would occur for critical magnetic fields for which the largest cyclotron orbit is at the surface of the flux tube, so that the system is quantum critical. Quantization of magnetic fluxes would quantify the quantum criticality. The variation of the magnetic field strength would serve as a control tool generating or eliminating supra currents. This conforms with the general vision about the role of dark magnetic fields in living matter.
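
The order-of-magnitude numbers quoted above can be checked with a short computation; the Fermi energy .1 eV and B = 1 Tesla are the assumed values from the text, not data for SmB6.

    import math

    hbar = 1.054571817e-34; e = 1.602176634e-19; me = 9.1093837015e-31
    eV = 1.602176634e-19

    E_F = 0.1 * eV                # Fermi energy assumed in the text
    B = 1.0                       # magnetic field in Tesla

    E_c = hbar * e * B / me       # cyclotron energy, about .12 meV
    k_F = math.sqrt(2 * me * E_F) / hbar
    S = math.pi * k_F**2          # extremal orbit area in momentum space
    print("cyclotron energy [meV]:", E_c / eV * 1e3)
    print("Landau cylinders n ~  :", E_F / E_c)                 # ~10^3
    print("dHvA period [1/T]     :", 2 * math.pi * e / (hbar * S))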

To sum up, a breakthrough of TGD is taking place. I have written about thirty articles during this year - more than one article per week. There is a huge garden there and the trees contain fruits hanging low! It is very easy to pick them: just shake the tree and let them drop into the basket! New experimental anomalies having a nice explanation in terms of TGD based concepts appear on a weekly basis, and the mathematical and physical understanding of TGD is progressing with great leaps. It is a pity that I must do it all alone. I would like to share. I can only hope that colleagues could take the difficult step: admit what has happened and make a fresh start.

See the chapter Criticality and Dark Matter or the article Does the physics of SmB6 make the fundamental dynamics of TGD directly visible?



Discretization and quantum group description as different aspects of finite measurement resolution

In Thinking Allowed Original there was a link to a very interesting article with the title "Designing Curved Blocks of Quantum Space-Time...Or how to build quantum geometry from curved tetrahedra in loop quantum gravity" telling about the work of Etera Livine working at LPENSL (I leave it to the reader to find out what this means;-).

The idea of the article

The popular article mentions a highly interesting mathematical result relevant for TGD. The idea is to build 3-geometry - not by putting together flat tetrahedra or more general polyhedra along their boundaries - but by using curved hyperbolic tetrahedra (or more generally polyhedra) defined in 3-D hyperbolic space - the negative constant curvature space with Lorentz group acting as isometries - the cosmic time=constant section of standard cosmology.

As a special case one obtains a tessellation of 3-D hyperbolic space H3. This is a somewhat trivial outcome, so one performs a "twisting". Some words about tessellations/lattices/crystals are in order first.

  1. In the 2-D case you would glue triangles (say) together to get a curved surface. For instance, at the surface of a sphere you would get a finite number of lattice like structures: the five Platonic solids tetrahedron, cube, octahedron, icosahedron, and dodecahedron, which are finite geometries assignable to finite fields corresponding to p=2, 3, and 5 and defining the lowest approximation of p-adic numbers for these primes.

  2. In 2-D hyperbolic plane H2 one obtains hyperbolic tilings used by Escher (see this).
  3. One can also consider the decomposition of hyperbolic 3-space H3 into a lattice like structure. This is essentially a generalization of ordinary crystallography from flat 3-space E3 to H3. There are indications for a quantization of cosmic redshifts completely analogous to the quantization of positions of lattice cells, and my proposal is that they reflect the existence of a hyperbolic crystal lattice in which astrophysical objects replace atoms. Macroscopic gravitational quantum coherence due to the huge value of the gravitational Planck constant could make them possible.

Back to the article and its message! The condition for the tetrahedron property, stating in the flat case that the sum of the 4 normal vectors vanishes, generalizes and is formulated in the group SU(2) rather than in E3 (Euclidean 3-space). The popular article states that the deformation of the sum to a product of SU(2) elements is equivalent to a condition defining the classical q-deformation of the gauge group. If this is true, a connection between "quantum quantum mechanics" and hyperbolic geometries might therefore exist and would correspond to a transition from flat E3 to hyperbolic H3.

Let loop gravity skeptic talk first

This looks amazing but it is better to remain skeptic since the work relates to loop quantum gravity and involves specific assumptions and different motivations.

  1. For instance, the hyperbolic geometry is motivated by attempts to construct a quantum geometry producing a non-vanishing and negative cosmological constant by introducing it through fundamental quantization rules rather than as a physical prediction, and by using only algebraic conditions, which allow a representation as a tetrahedron of hyperbolic space. This is alarming to me.
  2. In loop quantum gravity one tries to quantize discrete geometry. Braids are essential for quantum groups unless one wants to introduce them independently. In loop gravity one considers strings defining 1-D structures, and an ordering of the points representing particles at a string like entity might be imagined in this framework. I do not know enough loop gravity to decide whether this condition is realized in the framework motivating the article.
  3. In zero energy ontology hyperbolic geometry emerges in a totally different manner. One wants only a discretization of geometry to represent classically finite measurement resolution, and Lorentz invariance fixes it at the level of the moduli space of CDs. At space-time level the discretization would occur for the parameters characterizing string world sheets and partonic 2-surfaces defining the "space-time genes" in strong form of holography.
  4. One possible reason to worry is that H3 allows an infinite number of different lattice like structures (tessellations) with the analog of a lattice cell defining a hyperbolic manifold. Thus the decomposition would be highly non-unique, and this poses practical problems if one wants to construct 3-geometries using polyhedron like objects as building bricks. The authors mention twisting: probably this is what would allow one to obtain also other 3-geometries than 3-D hyperbolic space. Could this resolve the non-uniqueness problem?

    I understand (on the basis of this) that a hyperbolic tetrahedron can be regarded as a hyperbolic 3-manifold and gives rise to a tessellation of hyperbolic space. Note that in the flat case a tetrahedral crystal is not possible. In any case, there is an infinite number of this kind of decompositions defined by discrete subgroups G of the Lorentz group and completely analogous to the decompositions of flat 3-space into lattice cells: now G replaces the discrete group of translations leaving the lattice unaffected. An additional complication from the point of view of loop quantum gravity in the hyperbolic case is that the topology of the hyperbolic manifold defining the lattice cell varies rather than being that of a ball as in the flat case (all Platonic solids are topologically balls).

The notion of finite measurement resolution

The notion of finite measurement resolution emerged first in TGD through the realization that von Neumann algebras known as hyper-finite factors of type II1 (perhaps also of type III1) emerge naturally in the TGD framework. The spinors of the "world of classical worlds" (WCW), identifiable in terms of a fermionic Fock space, provide a canonical realization for them.

The inclusions of hyperfinite factors provide a natural description of finite measurement resolution, with the included factor defining the sub-algebra whose action generates states not distinguishable from the original ones. The inclusions are labelled by quantum phases coming as roots of unity and labelling also quantum groups. Hence the idea that quantum groups could allow one to describe the quantal aspects of finite measurement resolution, whereas discretization would define its classical aspects.

p-Adic sectors of TGD define a correlate for cognition in TGD Universe and cognitive resolution is forced by number theory. Indeed, one cannot define the notion of angle in p-adic context but one can define phases in terms of algebraic extensions of p-adic numbers defined by roots of unity: hence a finite cognitive resolution is unavoidable and might have a correlate also at the level of real physics.

The discrete algebraic extensions of rationals forming a cognitive and evolutionary hierarchy induce extensions of p-adic numbers appearing in corresponding adeles and for them quantum groups should be a necessary ingredient of description. The following arguments support this view and make it more concrete.

Quantum groups and discretization as two manners to describe finite measurement resolution in TGD framework

What about quantum groups in the TGD framework? I have also proposed that q-deformations could represent finite measurement resolution. There might be a connection between discretization and quantum groups as different aspects of finite measurement resolution. For instance, the quantum group SU(2)q allows only a finite number of representations (a maximum value for angular momentum): this conforms with a finite angular resolution implying a discretization in the angle variable. At the level of p-adic number fields the discretization of phases exp(iφ) as roots Un=exp(i2π/n) of unity is unavoidable for number theoretical reasons and makes possible discrete Fourier analysis for the algebraic extension.
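
The finite number of SU(2)q representations can be made concrete with quantum integers; the sketch below uses the standard formula [m]q = sin(mπ/n)/sin(π/n) for q = exp(iπ/n), the specific root of unity being chosen here only for illustration.

    import math

    def q_int(m, n):
        # quantum integer [m]_q for q = exp(i*pi/n)
        return math.sin(m * math.pi / n) / math.sin(math.pi / n)

    # [n]_q vanishes, so the representation ladder truncates: only finitely
    # many spins survive, matching the finite angular resolution above.
    n = 5
    for m in range(1, n + 1):
        print(m, round(q_int(m, n), 6))   # [5]_q = 0 for n = 5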

There exists actually a much stronger hint that discretization and quantum groups are related to each other. This hint leads to a concrete proposal for how discretization is described in terms of the quantum group concept.

  1. In TGD discretization for the space-time surface is not by a discrete set of points but by a complex of 2-D surfaces consisting of string world sheets and partonic 2-surfaces. By their 2-dimensionality these 2-surfaces make possible braid statistics. This leads to what I have called "quantum quantum physics", as the permutation group defining the statistics is replaced with the braid group defining its infinite covering. Already fermion statistics replaces this group with its double covering. If braids are present there is no need for "quantum quantum". If one forgets the humble braidy origins of the notion and begins to talk about quantum groups as an independent concept, the attribute "quantum quantum" becomes natural. Personally I am skeptical about this approach: it has not yielded anything hitherto.
  2. Braiding means that the R-matrix characterizing what happens in the permutation of nearby particles is not anymore multiplication by +1 or -1 but a more complex operation realized as a gauge group action (no real change, by gauge invariance). The gauge group could be the electroweak gauge group, for instance.

    What is so nice is that something very closely resembling the action of a quantum variant of a gauge group (say the electroweak gauge group) emerges. If the discretization is by the orbit of a discrete subgroup H of SL(2,C) defining the hyperbolic manifold SL(2,C)/H as the analog of a lattice cell, the action of the discrete subgroup H leaves the "lattice cell" invariant but could induce a gauge action on the state. The R-matrix defining the quantum group representation would define the action of braiding as a discrete group element in H. Yang-Baxter equations would give a constraint on the representation.

    This description looks especially natural in the p-adic sectors of TGD. Discretization of both ordinary and hyperbolic angles is unavoidable in the p-adic sectors, since only the phases which are roots of unity exist (p-adically angle is a non-existing notion): there is always a cutoff involved: only the phases Um=exp(i2π/m), m<r, exist, and r should be a factor of the integer n in heff/h=n defining the dimension of the algebraic extension of rational numbers used. In the same manner hyperbolic "phases" are defined by the roots e^(1/mp) of e (the very deep number theoretical fact is that e is an algebraic number (a p:th root) p-adically, since e^p is an ordinary p-adic number!). The test for this conjecture is easy: check whether the reduction of representations of groups yields direct sums of representations of the corresponding quantum groups.

  3. In the TGD framework H3 is identified as the light-cone proper time=constant surface, which is a 3-D hyperboloid of 4-D Minkowski space (necessary in zero energy ontology). Under some additional conditions a discrete subgroup G of SL(2,C) defines a tessellation of H3 representing finite measurement resolution. The tessellation consists of a discrete set of cosets gSL(2,C). The right action of SL(2,C) on the cosets would define the analog of a gauge action and appear in the definition of the R-matrix.

    The original belief was that the discretization would have a continuous representation and the powerful quantum analog of a Lie algebra would become available. It is however not clear whether this is really possible or whether it is needed, since the R-matrix would be defined by a map of the braid group to the subgroup of the Lorentz group or gauge group. The parameters defining the q-deformation are determined by the algebraic extension, and it is quite possible that there is more than one parameter.

  4. The relation to integrable quantum field theories in M2 is interesting. Particles are characterized by Lorentz boosts in SO(1,1) defining their 2-momenta besides discrete quantum numbers. The scattering reduces to a permutation of quantum numbers plus phase shifts. By the 2-particle irreducibility defining the integrability, the scattering matrix reduces to a 2-particle S-matrix depending on the boost parameters of the particles, and clearly generalizes the R-matrix as a physical permutation of particles having no momentum. Could this generalize to the 4-D context? Could one speak of the analog of this 2-particle S-matrix as having discrete Lorentz boosts hi in the sub-group H as arguments, representable as an element h(h1,h2) of H, which is trivial when h1h2^(-1) is trivial?
  5. The popular article says that one has q>1 in loop gravity. As found, in TGD at least two quantum deformation parameters are needed in the case of SL(2,C). The first corresponds to the n:th root of unity (Un= exp(i2π/n)) and the second one to the n×p:th root of e^p. One could do without quantum groups, but they would provide an elegant representation of discrete coset spaces. They could also be a powerful tool when one considers algebraic extensions of rationals and the extensions of p-adic numbers induced by them.

    A concrete prediction follows for the unit of quantized cosmic redshifts if astrophysical objects form tessellations of H3 in cosmic scales. The basic unit appearing in the exponent defining the Lorentz boost would depend on the algebraic extension involved and on the p-adic prime defining the effective p-adicity, and would be e^(1/np).

For details see the chapter Was von Neumann right after all? or the article Discretization and quantum group description as different aspects of finite measurement resolution.



Two kinds of negentropic entanglements

The most general view is that negentropic entanglement NE corresponds to algebraic entanglement with entanglement coefficients in some algebraic extension of rationals. The condition that the outcome of state function reduction is an eigenspace of the density matrix fixes the density matrix of the final state to be a projector with identical eigenvalues defining the probabilities of the various states.

But what if the eigenvalues and thus also the eigenvectors of the density matrix, which are algebraic numbers, do not belong to the algebraic extension involved? Can state function reduction occur at all, so that this kind of NE would be stable?

The following argument suggests that also more general algebraic entanglement could be reasonably stable against NMP, namely the entanglement for which the eigenvalues of the density matrix and eigenvectors are outside the algebraic extension associated with the parameters characterizing string world sheets and partonic 2-surfaces as space-time genes.

The restriction to a particular extension of rationals - a central piece of the number theoretical vision about quantum TGD - implies that the density matrix need not allow diagonalization within the extension. The eigenstate basis would require the algebraic extension defined by the characteristic polynomial of the density matrix: its roots define the needed extension, which could quite well be larger than the original one. This would make the state stable against state function reduction.

If this entanglement is algebraic, one can assign to it a negative number theoretic entropy. This negentropic entanglement is stable against NMP unless the algebraic extension associated with the parameters characterizing the string world sheets and partonic 2-surfaces defining the space-time genes is allowed to become larger in a state function reduction to the opposite boundary of CD generating a re-incarnated self and producing eigenstates involving algebraic numbers in a larger algebraic extension of rationals. Could this kind of extension be an eureka experience meaning a step forward in cognitive evolution?
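
The sign of the number theoretic entropy referred to above can be checked explicitly; the sketch below assumes rational entanglement probabilities and uses the definition (logarithm of the probability replaced by the logarithm of its p-adic norm) given in the NMP section later in this document.

    import math
    from fractions import Fraction

    def p_adic_norm(x, p):
        # |x|_p = p^(-v), where v is the p-adic valuation of the rational x
        num, den, v = x.numerator, x.denominator, 0
        while num % p == 0:
            num //= p; v += 1
        while den % p == 0:
            den //= p; v -= 1
        return Fraction(p)**(-v)

    def nt_entropy(probs, p):
        return -sum(float(P) * math.log(float(p_adic_norm(P, p))) for P in probs)

    probs = [Fraction(1, 12)] * 12        # projector with n = 12 identical eigenvalues
    for p in (2, 3, 5):
        print(p, nt_entropy(probs, p))    # negative for p = 2 and 3; zero for p = 5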

If this picture makes sense, one would have both the unitary NE with a density matrix which is a projector, and the algebraic NE for which the eigenvalues and eigenstates of the density matrix are outside the algebraic extension associated with the space-time genes. Note that the unitary entanglement is "meditative" in the sense that any state basis is possible, and therefore in this state of consciousness it is not possible to make distinctions. This strongly brings in mind the koans of Zen Buddhism and the enlightenment experience. The more general irreducible algebraic entanglement could represent abstractions as rules in which the state pairs in the superposition represent the various instances of the rule.

For details see the chapter Negentropy Maximization Principle or the article Impressions created by TSC2015 conference.



Quantitative model of high Tc super-conductivity and bio-super-conductivity

I have developed already earlier a rough model for high Tc super-conductivity. The members of Cooper pairs are assigned with parallel flux tubes carrying fluxes which have either the same or opposite directions. The essential element of the model is the hierarchy of Planck constants defining a hierarchy of dark matters.

  1. In the case of ordinary high Tc super-conductivity bound states of charge carriers at parallel short flux tubes become stable as spin-spin interaction energy becomes higher than thermal energy.

    The transition to super-conductivity is known to occur in two steps, as if two competing mechanisms were at work. A possible interpretation is that at the higher critical temperature Cooper pairs become stable but the flux tubes are stable only below a rather short length scale: perhaps because the spin-flux interaction energy for the current carriers is below thermal energy. At the lower critical temperature stability would be achieved and supra-currents can flow over long length scales.

  2. The phase transition to super-conductivity is analogous to a percolation process in which flux tube pairs fuse by a reconnection to form longer super-conducting pairs at the lower critical temperature. This requires that flux tubes carry anti-parallel fluxes: this is in accordance with the anti-ferro-magnetic character of high Tc super conductivity. The stability of flux tubes very probably correlates with the stability of Cooper pairs: coherence length could dictate the typical length of the flux tube.
  3. A non-standard value of heff for the current carrying magnetic flux tubes is necessary, since otherwise the interaction energy of the spin with the magnetic field associated with the flux tube is much below the thermal energy (see the numerical check below).
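
This claim can be checked numerically; B = 1 Tesla and room temperature are assumed here only for illustration.

    mu_B = 5.7883818060e-5   # Bohr magneton in eV/T
    k_B = 8.617333262e-5     # Boltzmann constant in eV/K

    B, T = 1.0, 300.0        # assumed illustrative values
    E_spin = mu_B * B        # spin-magnetic interaction energy
    E_th = k_B * T           # thermal energy
    print("mu_B*B [eV]:", E_spin)
    print("k_B*T  [eV]:", E_th)
    print("gap factor to bridge, ~ heff/h:", E_th / E_spin)   # several hundred
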
There are two energies involved.
  1. The spin-spin interaction energy should give rise to the formation of Cooper pairs with members at parallel flux tubes at the higher critical temperature. Both spin triplet and spin singlet pairs are possible, and also their mixture is possible.
  2. The interaction energy of spins with the magnetic fluxes, which can be parallel or antiparallel, contributes also to the gap energy of the Cooper pair and gives rise to a mixing of spin singlet and spin triplet. In the TGD based model of quantum biology antiparallel fluxes are of special importance, since U-shaped flux tubes serve as a kind of tentacles allowing magnetic bodies to form pairs of antiparallel flux tubes connecting them and carrying supra-currents. The possibility of parallel fluxes suggests that also ferro-magnetic systems could allow super-conductivity.

    One can wonder whether the interaction of spins with magnetic field of flux tube could give rise to a dark magnetization and generate analogs of spin currents known to be coherent in long length scales and used for this reason in spintronics (see this). One can also ask whether the spin current carrying flux tubes could become stable at the lower critical temperature and make super-conductivity possible via the formation of Cooper pairs. This option does not seem to be realistic.

In the article Quantitative model of high Tc super-conductivity and bio-super-conductivity the earlier flux tube model for high Tc super-conductivity and bio-super-conductivity is formulated in more precise manner. The model leads to highly non-trivial and testable predictions.
  1. Also in the case of ordinary high Tc super-conductivity a large value of heff=n× h is required.
  2. In the case of high Tc super-conductivity two kinds of Cooper pairs, which belong to spin triplet representation in good approximation, are predicted. The average spin of the states vanishes for antiparallel flux tubes. Also super-conductivity associated with parallel flux tubes is predicted and could mean that ferromagnetic systems could become super-conducting.
  3. One ends up with the prediction that there should be a third critical temperature not lower than T**= 2T*/3, where T* is the higher critical temperature at which Cooper pairs identifiable as mixtures of Sz=+/- 1 pairs emerge. At the lower temperature Sz=0 states, which are mixtures of spin triplet and spin singlet states, emerge. At temperature Tc the flux tubes carrying the two kinds of pairs become thermally stable by a percolation type process involving re-connection of U-shaped flux tubes to longer flux tube pairs, and supra-currents can run over long length scales.
  4. The model applies also to the TGD inspired model of living matter. Now however the critical temperature for the phase transition in which long flux tubes stabilize is roughly by a factor 1/50 lower than that at which stable Cooper pairs emerge, and corresponds to thermal energy at physiological temperatures, which corresponds also to the scale of the cell membrane potential. The higher energy corresponds to the scale of bio-photon energies (visible and UV range); see the arithmetic check below.
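
The arithmetic behind the factor 1/50 mentioned in the last item can be spelled out; the physiological temperature 310 K is my own assumed input.

    k_B = 8.617333262e-5          # Boltzmann constant in eV/K

    T_phys = 310.0                # assumed physiological temperature
    E_th = k_B * T_phys           # ~0.027 eV, the thermal energy scale
    print("thermal energy [eV]:", E_th)
    print("50 x thermal  [eV]:", 50 * E_th)   # ~1.3 eV, at the edge of the visible range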

For details see the chapter Super-Conductivity in Many-Sheeted Space-Time or the article Quantitative model of high Tc super-conductivity and bio-super-conductivity.



Updated Negentropy Maximization Principle

Quantum TGD involves a "holy trinity" of time developments. There is the geometric time development dictated by the preferred extremal of Kähler action, crucial for the realization of General Coordinate Invariance and analogous to a Bohr orbit. There is what I originally called the unitary "time development" U: Ψi→ UΨi→ Ψf, associated with each quantum jump. This would be the counterpart of the Schrödinger time evolution U(-t,t→ ∞). The quantum jump sequence itself defines what might be called subjective time development.

Concerning U, there is certainly no actual Schrödinger equation involved: the situation is in practice the same also in quantum field theories. It is now clear that in Zero Energy Ontology (ZEO) U can actually be identified as a sequence of basic steps such that a single step involves a unitary evolution inducing a delocalization in the moduli space of causal diamonds (CDs) followed by a localization in this moduli space selecting a single CD from a superposition of CDs. This sequence replaces the sequence of repeated state function reductions leaving the state invariant in ordinary QM. Now it leaves invariant the second boundary of CD (to be called the passive boundary) and also the parts of zero energy states at this boundary. There is now a very attractive vision about the construction of transition amplitudes for a given CD, and it remains to be seen whether it allows an extension including also transitions that change the CD moduli characterizing the non-fixed boundary of CD.

A dynamical principle governing the subjective time evolution should exist and explain state function reduction, with the characteristic one-one correlation between macroscopic measurement variables and quantum degrees of freedom, and the state preparation process. Negentropy Maximization Principle (NMP) is the candidate for this principle. In its recent form it brings in only a single little but overall important modification: state function reduction occurs also now to an eigen-space of a projector, but the projector can now have dimension larger than one. Self has the free will to choose, besides the maximal possible dimension for this sub-space, also a lower dimension, so that one can speak of a weak form of NMP: the negentropy gain can also be below the maximal possible one - we do not live in the best possible world. A second important ingredient is the notion of negentropic entanglement relying on the p-adic norm.

The evolution of ideas related to NMP has been slow and tortuous process characterized by misinterpretations, over-generalizations, and unnecessarily strong assumptions, and has been basically evolution of ideas related to the anatomy of quantum jump and of quantum TGD itself.

Quantum measurement theory is generalized to theory of consciousness in TGD framework by replacing the notion of observer as outsider of the physical world with the notion of self. Hence it is not surprising that several new key notions are involved.

  1. ZEO is in a central role and brings in a completely new element: the arrow of time changes in the counterpart of the standard quantum jump involving the change of the passive boundary of CD to active and vice versa. In living matter the changes of the arrow of time are in a central role: for instance, motor action as volitional action involves such a change at some level of the self hierarchy.
  2. The fusion of real physics and the various p-adic physics, identified as physics of cognition, to a single adelic physics is a second key element. The notion of the intersection of real and p-adic worlds (intersection of sensory and cognitive worlds) is central and corresponds in the recent view about TGD to string world sheets and partonic 2-surfaces whose parameters are in an algebraic extension of rationals. By strong form of holography it is possible to continue the string world sheets and partonic 2-surfaces to various real and p-adic surfaces, so that what can be said about quantum physics is coded by them. The physics in the algebraic extension can be continued to the real and various p-adic sectors by algebraic continuation, meaning continuation of the various parameters appearing in the amplitudes to reals and various p-adics.

    An entire hierarchy of physics labeled by the extensions of rationals inducing also those of p-adic numbers is predicted, and evolution corresponds to the increase of the complexity of these extensions. Fermions defining correlates of Boolean cognition can be said to reside at these 2-dimensional surfaces emerging from strong form of holography implied by strong form of general coordinate invariance (GCI).

    An important outcome of adelic physics is the notion of number theoretic entanglement entropy: in the defining formula for Shannon entropy the logarithm of probability is replaced with that of the p-adic norm of the probability, and one assumes that the p-adic prime is the one producing the minimum entropy. What is new is that the minimum entropy can be negative, so that one can speak of negentropic entanglement (NE). Consistency with standard measurement theory allows only NE for which the density matrix is an n-dimensional projector.

  3. Strong form of NMP states that state function reduction corresponds to maximal negentropy gain. NE is stable under strong NMP and it even favors its generation. Strong form of NMP would mean that we live in the best possible world, which does not seem to be the case. The weak form of NMP allows self to choose whether it performs state function reduction yielding the maximum possible negentropy gain. If n-dimensional projector corresponds to the maximal negentropy gain, also reductions to sub-spaces with n-k-dimensional projectors down to 1-dimensional projector are possible. Weak form has powerful implications: for instance, one can understand how primes near powers of prime are selected in evolution identified at basic level as increase of the complexity of algebraic extension of rationals defining the intersection of realities and p-adicities.
  4. NMP gives rise to evolution. NE defines information resources, which I have called Akashic records - a kind of Universal library. The simplest possibility is that under the repeated sequence of state function reductions at the fixed boundary of CD the NE at that boundary becomes conscious and gives rise to experiences with positive emotional coloring: experience of love, compassion, understanding, etc... One cannot exclude the possibility that NE generates a conscious experience only via the analog of interaction free measurement, but this option looks unnecessary in the recent formulation.
  5. Dark matter hierarchy labelled by the values of Planck constant heff=n× h is also in central role and interpreted as a hierarchy of criticalities in which sub-algebra of super-symplectic algebra having structure of conformal algebra allows sub-algebra acting as gauge conformal algebra and having conformal weights coming as n-ples of those for the entire algebra. The phase transition increasing heff reduces criticality and takes place spontaneously. This implies a spontaneous generation of macroscopic quantum phases interpreted in terms of dark matter. The hierarchies of conformal symmetry breakings with n(i) dividing n(i+1) define sequences of inclusions of HFFs and the conformal sub-algebra acting as gauge algebra could be interpreted in terms of measurement resolution.

    n-dimensional NE is assigned with heff=n× h and is interpreted in terms of the n-fold degeneracy of the conformal gauge equivalence classes of space-time surfaces connecting two fixed 3-surfaces at the opposite boundaries of CD: this reflects the non-determinism accompanying quantum criticality. NE would be between two dark matter systems with the same heff and could be assigned to the pairs formed by the n sheets. This identification is important but not well enough understood yet. The assumption that p-adic primes p divide n gives deep connections between the notion of preferred p-adic prime, negentropic entanglement, hierarchy of Planck constants, and hyper-finite factors of type II1.

  6. Quantum classical correspondence (QCC) is an important constraint in ordinary measurement theory. In TGD QCC is coded by the strong form of holography assigning to the quantum states at string world sheets and partonic 2-surfaces, represented in terms of the super-symplectic Yangian algebra, space-time surfaces as preferred extremals of Kähler action, which by quantum criticality have vanishing super-symplectic Noether charges in the sub-algebra characterized by integer n. Zero modes, which by definition do not contribute to the metric of the "world of classical worlds" (WCW), code for non-fluctuating classical degrees of freedom correlating with the quantal ones. One can speak about entanglement between quantum and classical degrees of freedom, since the quantum numbers of fermions make themselves visible in the boundary conditions for string world sheets and also in the structure of space-time surfaces.
NMP has a wide range of important implications.
  1. In particular, one must give up the standard view about second law and replace it with NMP taking into account the hierarchy of CDs assigned with ZEO and dark matter hierarchy labelled by the values of Planck constants, as well as the effects due to NE. The breaking of second law in standard sense is expected to take place and be crucial for the understanding of evolution.
  2. Self hierarchy having the hierarchy of CDs as imbedding space correlate leads naturally to a description of the contents of consciousness analogous to thermodynamics except that the entropy is replaced with negentropy.
  3. In the case of living matter NMP allows one to understand the origin of metabolism. NMP demands that self generates somehow negentropy: otherwise a state function reduction to the opposite boundary of CD takes place and means death and re-incarnation of self. Metabolism as gathering of nutrients, which by definition carry NE, is the manner to avoid this fate. This leads to a vision about the role of NE in the generation of sensory qualia and a connection with metabolism. Metabolites would carry NE and each metabolite would correspond to particular qualia (not only energy but also other quantum numbers would correspond to metabolites). That primary qualia would be associated with nutrient flow is not actually surprising!
  4. NE leads to a vision about cognition. Negentropically entangled state consisting of a superposition of pairs can be interpreted as a conscious abstraction or rule: negentropically entangled Schrödinger cat knows that it is better to keep the bottle closed.
  5. NMP implies continual generation of NE. One might refer to this ever expanding universal library as "Akashic records". NE could be experienced directly during the repeated state function reductions to the passive boundary of CD - that is, during the life cycle of the sub-self defining the mental image. Another, less feasible option is that interaction free measurement is required to assign a conscious experience to NE. As mentioned, the qualia characterizing the metabolite carrying the NE could characterize this conscious experience.
  6. A connection with fuzzy qubits and quantum groups with NE is highly suggestive. The implications are highly non-trivial also for quantum computation allowed by weak form of NMP since NE is by definition stable and lasts the lifetime of self in question.
For details see the chapter Negentropy Maximization Principle of "TGD Inspired Theory of Consciousness".



Individual nucleons inside nuclei do not behave according to predictions

Quantum TGD involves "holy trinity" of time developments. There is the geometric time development dictated by the preferred extremal of Kähler action crucial for the realization of General Coordinate Invariance and analogous to Bohr orbit. There is what I originally called unitary "time development" U: Ψi→ UΨi→ Ψf, associated with each quantum jump. This would be the counterpart of the Schrödinger time evolution U(-t,t→ ∞). Quantum jump sequence itself defines what might be called subjective time development.

Concerning U, there is certainly no actual Schrödinger equation involved: situation is in practice same also in quantum field theories. It is now clear that in Zero Energy Ontology (ZEO) U can be actually identified as a sequence of basic steps such that single step involves a unitary evolution inducing delocalization in the moduli space of causal diamonds CDs) followed by a localization in this moduli space selecting from a superposition of CDs single CD. This sequence replaces a sequence of repeated state function reductions leaving state invariant in ordinary QM. Now it leaves in variant second boundary of CD (to be called passive boundary) and also the parts of zero energy states at this boundary. There is now a very attractive vision about the construction of transition amplitudes for a given CD, and it remains to be see whether it allows an extension so that also transitions involving change of the CD moduli characterizing the non-fixed boundary of CD.

A dynamical principle governing subjective time evolution should exist and explain state function reduction with the characteristic one-one correlation between macroscopic measurement variables and quantum degrees of freedom and state preparation process. Negentropy Maximization Principle is the candidate for this principle. In its recent form it brings in only a single little but overall important modification: state function reductions occurs also now to an eigen-space of projector but the projector can now have dimension which is larger than one. Self has free will to choose beides the maximal possible dimension for this sub-space also lower dimension so that one can speak of weak form of NMP so that negentropy gain can be also below the maximal possible: we do not live in the best possible world. Second important ingredient is the notion of negentropic entanglement relying on p-adic norm.

The evolution of ideas related to NMP has been slow and tortuous process characterized by misinterpretations, over-generalizations, and unnecessarily strong assumptions, and has been basically evolution of ideas related to the anatomy of quantum jump and of quantum TGD itself.

Quantum measurement theory is generalized to theory of consciousness in TGD framework by replacing the notion of observer as outsider of the physical world with the notion of self. Hence it is not surprising that several new key notions are involved.

  1. ZEO is in central role and brings in a completely new element: the arrow of time changes in the counterpart of standard quantum jump involving the change of the passive boundary of CD to active and vice versa. In living matter the changes of the of time are inn central role: for instance, motor action as volitional action involves it at some level of self hierarchy.
  2. The fusion of real physics and various p-adic physics identified as physics of cognition to single adelic physics is second key element. The notion of intersection of real and p-adic worlds (intersection of sensory and cognitive worlds) is central and corresponds in recent view about TGD to string world sheets and partonic 2-surfaces whose parameters are in an algebraic extension of rationals. By strong form of of holography it is possible to continue the string world sheets and partonic 2-surfaces to various real and p-adic surfaces so that what can be said about quantum physics is coded by them. The physics in algebraic extension can be continued to real and various p-adic sectors by algebraic continuation meaning continuation of various parameters appearing in the amplitudes to reals and various p-adics.

    An entire hierarchy of physics labeled by the extensions of rationals inducing also those of p-adic numbers is predicted and evolution corresponds to the increase of the complexity of these extensions. Fermions defining correlates of Boolean cognition can be said so reside at these 2-dimensional surfaces emerging from strong form of holography implied by strong form of general coordinate invariance (GCI).

    An important outcome of adelic physics is the notion of number theoretic entanglement entropy: in the defining formula for Shannon entropy logarithm of probability is replaced with that of p-adic norm of probability and one assumes that the p-adic prime is that which produces minimum entropy. What is new that the minimum entropy is negative and one can speak of negentropic entanglement (NE). Consistency with standard measurement theory allows only NE for which density matrix is n-dimensional projector.

  3. Strong form of NMP states that state function reduction corresponds to maximal negentropy gain. NE is stable under strong NMP and it even favors its generation. Strong form of NMP would mean that we live in the best possible world, which does not seem to be the case. The weak form of NMP allows self to choose whether it performs state function reduction yielding the maximum possible negentropy gain. If n-dimensional projector corresponds to the maximal negentropy gain, also reductions to sub-spaces with n-k-dimensional projectors down to 1-dimensional projector are possible. Weak form has powerful implications: for instance, one can understand how primes near powers of prime are selected in evolution identified at basic level as increase of the complexity of algebraic extension of rationals defining the intersection of realities and p-adicities.
  4. NMP gives rise to evolution. NE defines information resources, which I have called Akashic records - kind of Universal library. The simplest possibility is that under the repeated sequence of state function reductions at fixed boundary of CD NE at that boundary becomes conscious and gives rise to experiences with positive emotional coloring: experience of love, compassion, understanding, etc... One cannot exclude the possibility that NE generates a conscious experience only via the analog of interaction free measurement but this option looks un-necessary in the recent formulation.
  5. Dark matter hierarchy labelled by the values of Planck constant heff=n× h is also in central role and interpreted as a hierarchy of criticalities in which sub-algebra of super-symplectic algebra having structure of conformal algebra allows sub-algebra acting as gauge conformal algebra and having conformal weights coming as n-ples of those for the entire algebra. The phase transition increasing heff reduces criticality and takes place spontaneously. This implies a spontaneous generation of macroscopic quantum phases interpreted in terms of dark matter. The hierarchies of conformal symmetry breakings with n(i) dividing n(i+1) define sequences of inclusions of HFFs and the conformal sub-algebra acting as gauge algebra could be interpreted in terms of measurement resolution.

    n-dimensional NE is assigned with heff=n× h and is interpreted in terms of the n-fold degeneracy of the conformal gauge equivalence classes of space-time surfaces connecting two fixed 3-surfaces at the opposite boundaries of CD: this reflects the non-determinism accompanying quantum criticality. NE would be between two dark matter system with same heff and could be assigned to the pairs formed by the n sheets. This identification is important but not well enough understood yet. The assumption that p-adic primes p divide n gives deep connections between the notion of preferred p-adic prime, negentropic entanglement, hierarchy of Planck constants, and hyper-finite factors of type II1.

  6. Quantum classical correspondence (QCC) is an important constraint in ordinary measurement theory. In TGD QCC is coded by the strong form of holography assigning to the quantum states assigned to the string world sheets and partonic 2-surfaces represented in terms of super-symplectic Yangian algebra space-time surfaces as preferred extremals of Kähler action, which by quantum criticality have vanishing super-symplectic Noether charges in the sub-algebra characterized by integer n. Zero modes, which by definition do not contribute to the metric of "world of classical worlds" (WCW) code for non-fluctuacting classical degrees of freedom correlating with the quantal ones. One can speak about entanglement between quantum and classical degrees of freedom since the quantum numbers of fermions make themselves visible in the boundary conditions for string world sheets and their also in the structure of space-time surfaces.
NMP has a wide range of important implications.
  1. In particular, one must give up the standard view about second law and replace it with NMP taking into account the hierarchy of CDs assigned with ZEO and dark matter hierarchy labelled by the values of Planck constants, as well as the effects due to NE. The breaking of second law in standard sense is expected to take place and be crucial for the understanding of evolution.
  2. Self hierarchy having the hierarchy of CDs as imbedding space correlate leads naturally to a description of the contents of consciousness analogous to thermodynamics except that the entropy is replaced with negentropy.
  3. In the case of living matter NMP allows to understand the origin of metabolism. NMP demands that self generates somehow negentropy: otherwise a state function reduction to tjhe opposite boundary of CD takes place and means death and re-incarnation of self. Metabolism as gathering of nutrients, which by definition carry NE is the manner to avoid this fate. This leads to a vision about the role of NE in the generation of sensory qualia and a connection with metabolism. Metabolites would carry NE and each metabolite would correspond to a particular qualia (not only energy but also other quantum numbers would correspond to metabolites). That primary qualia would be associated with nutrient flow is not actually surprising!
  4. NE leads to a vision about cognition. A negentropically entangled state consisting of a superposition of pairs can be interpreted as a conscious abstraction or rule: a negentropically entangled Schrödinger cat knows that it is better to keep the bottle closed.
  5. NMP implies continual generation of NE. One might refer to this ever expanding universal library as "Akashic records". NE could be experienced directly during the repeated state function reductions to the passive boundary of CD - that is, during the life cycle of the sub-self defining the mental image. Another, less feasible option is that an interaction free measurement is required to assign a conscious experience to NE. As mentioned, the qualia characterizing the metabolite carrying the NE could characterize this conscious experience.
  6. A connection of NE with fuzzy qubits and quantum groups is highly suggestive. The implications are highly non-trivial also for quantum computation allowed by the weak form of NMP since NE is by definition stable and lasts the lifetime of the self in question.
For details see the chapter Negentropy Maximization Principle of "TGD Inspired Theory of Consciousness".



Individual nucleons inside nuclei do not behave according to predictions

Individual nucleons do not behave in nuclei as the existing theory predicts (see the popular article). This is a conclusion reached by an international team of scientists, which has published its findings as an article in Phys. Rev. Letters.

I am not a nuclear physicist but have proposed what I call the nuclear string model. Despite this I try to understand what has been found and what the nuclear string model can say about the findings.

Background and results

There are many models of atomic nuclei and each of them explains some aspects of the nucleus. The nucleus can be modelled as a rigid body or as a kind of quantum liquid. In the prevailing mean field approach the presence of other nucleons is described in terms of a potential function and one calculates the states of individual nucleons in this potential using the Schrödinger equation. It is essential that nucleons are assumed to be independent.

The model taking the potential function to be that of a harmonic oscillator is surprisingly successful, but one must introduce corrections such as spin-orbit coupling in order to understand the energy spectrum. In this approach the notion of nuclear shell emerges. In atomic physics and chemistry the closed shells do not participate in the interaction, and the outermost shell, characterized by valence, to a high degree dictates the chemical properties of the atom. Valence is positive if the outer shell contains particles and negative if some of them are lacking. Something similar is to be expected also now. In this case full shells correspond to magic numbers for protons and neutrons separately (note that protons and neutrons seem to behave rather independently, something highly non-trivial!). The nuclei with valence +1 or -1 would correspond to almost magic nuclei.
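
As a quick check of how far the pure oscillator model gets - a minimal sketch of standard shell model counting, nothing TGD specific - oscillator level N holds (N+1)(N+2) nucleons ((N+1)(N+2)/2 spatial states times 2 spin directions), and the cumulative sums reproduce the observed magic numbers only up to 20:

    # Cumulative shell closures of the naive 3D harmonic oscillator model.
    def oscillator_magic_numbers(levels):
        total, magics = 0, []
        for N in range(levels):
            total += (N + 1) * (N + 2)   # degeneracy of oscillator level N
            magics.append(total)
        return magics

    print(oscillator_magic_numbers(7))    # [2, 8, 20, 40, 70, 112, 168]
    print([2, 8, 20, 28, 50, 82, 126])    # observed magic numbers

The disagreement from 28 onwards is what the spin-orbit correction repairs.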

One generally accepted correction to the harmonic oscillator model is inspired by the assumption that heavier nuclei can be described as a kind of blob of condensed matter obeying an equation of state allowing one to introduce notions like acoustic waves and surface waves. The nucleon at the unfilled shell would reside at the surface of this blob. The blob has vibrational excitations characterized by multipolarity (a spherical harmonic characterized by angular momentum quantum numbers) and by the radial part of the oscillation amplitude. These excitations give rise to analogs of surface waves in water. Valence nucleons interact with the oscillations, and this interaction affects their energy levels. The predictions of this model are calculable.

The team has studied almost doubly magic nuclei with valence equal to +1 or -1, calculated the effects of these oscillations on the energy levels of the valence nucleon, and found that the observed effects are significantly smaller than the predicted ones. This finding challenges the mean field approach, the idea that the nucleus can be idealized as a condensed matter like system, or both.

Nuclear string model

In the TGD framework the ordinary model of the nucleus is replaced with what I call the nuclear string model. The basic ideas of the nuclear string model are the following.

  1. Nuclei consist of string like objects: protons and neutrons connected by color magnetic flux tubes form string like objects, perhaps separately for protons and neutrons. The color magnetic flux tubes would be meson-like objects and could even carry net color. They are either neutral (quark and antiquark at the ends of the flux tube have opposite charges) or carry em charge.

    This predicts a large number of exotic states. The exotic states cannot be distinguished chemically from the isotopes of the nucleus. The energy scale of the excitations could be in the keV range: in this case their existence could explain the annual variation of nuclear decay rates, which seems to be caused by X rays from the Sun. A second possibility is that the energy scale is the MeV scale of nuclear energies.

    This would be new nuclear physics, perhaps relevant also to cold fusion. The energy scale would derive from the string tension of the color magnetic flux tube. The lengths of the color magnetic flux tubes corresponding to the keV scale would be rather long and would correspond to the color magnetic bodies of the nucleons. If this is the case, the color magnetic energy of the system would depend only weakly on the positions of the nucleons of the string inside the nuclear volume.

    The presence of long flux tubes might allow one to understand the anomalous finding that the charge radius of the proton is smaller than predicted. u and d quarks are known to be light and have masses in the range 5-20 MeV. The TGD based model for elementary particles (see this) suggests that quarks correspond to closed flux tubes consisting of two portions at parallel space-time sheets with ends connected by wormhole contacts and with monopole magnetic flux rotating in the tube. Uncertainty principle suggests that the length of the flux tube structure is of the order of the Compton length of the quark. The constituents of the proton would then be larger than the proton itself! The paradox disappears if the Compton length is assigned with the magnetic flux tube connecting the two wormhole contacts associated with the quark: the contacts themselves would be rather near to each other, at a distance much shorter than the length of the flux tube.

    Flux tubes with Compton lengths corresponding to 10 keV photon energy would however be 3 orders of magnitude longer (10 nm). This could be due to scaling by heff/h ≈ 10³. These flux tubes could also correspond to the flux tubes connecting neighboring nucleons of nuclear strings. The dark magnetic flux tubes of this length associated with neighboring nuclei could reconnect and bind nuclei to form lattice like structures. This process and thus dark nuclear physics could play a key role in the formation of condensed matter phases, as it is proposed to do also in living matter.

  2. These strings of nucleons could topologically condense to larger magnetic flux tubes but could still touch also the nuclear space-time sheet, as suggested by the success of the harmonic oscillator model. In biological length scales the assumption that the effective Planck constant heff=n× h characterizing the dark matter phase equals the gravitational Planck constant hgr= GMm/v0, where v0 is a parameter with dimensions of velocity, implies that cyclotron energies are universal: Ec= ℏgr ωc= (GMm/v0)(qB/m)= GMqB/v0 has no dependence on the particle mass m. It also implies that particles with different masses correspond to different values of the effective Planck constant, so that a living system would perform spectroscopy putting particles (elementary particles, atoms, ions, molecules, ...) neatly at different dark space-time sheets! If the nucleons inside nuclei are dark in this sense, protons and neutrons would be at different flux tubes since their masses are slightly different.
  3. The nucleus could consist of several - possibly knotted - closed flux tubes containing some number of nucleons each. An attractive hypothesis is that these flux tubes correspond to nuclear shells, so that full flux tubes would correspond to full shells and define separate units. In the semiclassical approximation this would mean that the nuclear string is localized at the surface of a sphere.
  4. The nuclear string defines a closed non-intersecting curve, known as a Hamilton cycle, going through the vertices of a polyhedron with n vertices (a small counting sketch follows this list). If the color magnetic flux tubes are long, it is convenient to consider the curve defined by line segments connecting the neighboring nucleons of the nuclear string.

    The notion of Hamilton cycle is well-defined for any graph, so it makes sense for any polyhedron. It is enough that the cycle is consistent with the underlying graph structure allowing one to say which vertices are nearest neighbours (they need not be neighbours in the metric sense but only in the homological sense, that is, as ends of the same edge).

    In the case of Platonic solids the rotational symmetries preserving the Platonic solid generate a finite number of Hamilton cycles of the same shape from a given one, and it is natural to define Hamilton cycles as equivalence classes of cycles with the same shape. For instance, for the icosahedron one has 17 Hamilton cycles, for 11 of which one has symmetry group Zn, n∈ {6,4,2}, together with the cycles obtained from them by rotations. In this case one can however say that the independent particle approximation is given up and one considers equilibrium configurations analogous to those of molecules. The nuclear string however orders the nucleons and brings in additional information. Hamilton cycles make sense also for the deformations of the icosahedron since it is only the homological nearness that matters. Note however that the allowed deformations of a metric Hamilton cycle must be such that the edges do not intersect: in other words, the deformation of the nuclear string is not self-intersecting.

  5. If the nucleons can perform only small oscillations around the vertices of a polyhedron, the independent particle assumption fails badly. One would however have a collective wave function for the orientations of the polyhedron. In this case Platonic solids or their homological generalizations define good candidates for full shells.
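
The notion of Hamilton cycle is easy to make concrete. Below is a minimal backtracking sketch - my illustration, with the octahedron as the example graph - which counts the undirected Hamilton cycles; grouping them further into equivalence classes under the rotation group would proceed as described in item 4:

    # Count undirected Hamilton cycles of a graph given as a list of adjacency sets.
    from itertools import combinations

    def hamilton_cycles(adj):
        n, count = len(adj), 0
        def extend(path, visited):
            nonlocal count
            v = path[-1]
            if len(path) == n:
                count += 0 in adj[v]   # cycle closes back to the start vertex 0
                return
            for w in adj[v]:
                if w not in visited:
                    extend(path + [w], visited | {w})
        extend([0], {0})
        return count // 2   # each undirected cycle is traversed in two directions

    # Octahedron: 6 vertices, every pair adjacent except the three antipodal pairs.
    antipodal = {(0, 5), (1, 4), (2, 3)}
    adj = [set() for _ in range(6)]
    for a, b in combinations(range(6), 2):
        if (a, b) not in antipodal:
            adj[a].add(b)
            adj[b].add(a)

    print(hamilton_cycles(adj))   # 16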

How does nuclear string model relate to the shell model?

In the mean field approximation particles move independently in a potential describing the effects of the other nucleons. The basis for N-nucleon wave functions can be constructed as products of those associated with individual nucleons. Under what conditions is the nuclear string model consistent with the independent particle approach?

  1. At the classical level the independent motion of the nucleons of the nuclear string (along elliptic orbits in the harmonic oscillator approximation) would give rise to a rather complex motion of the nuclear string leading to self intersections unless the flux tubes have a much longer length scale than the nucleus. In this case the nucleus would be like a seed from which flux tubes emerge like a plant, self intersections could be avoided, and the motion of nucleons could induce local braiding of the strands emanating from the nucleons. This is indeed what has been assumed. Note that the (say) U shaped flux tubes connecting neighboring nucleons could reconnect with similar tubes associated with other nuclei so that the motions of nucleons would give rise to genuine global braiding.
  2. Harmonic oscillator states would induce a wave function in the space of string configurations having interpretation as Hamilton cycles associated with a polyhedron with N vertices whose positions can vary, also in the radial direction, although the semiclassical shell model would force the particles to be at the same radius. TGD allows one to consider a collective localization at spherical shells: this would be a rather long range correlation but consistent with the spirit of the shell model. A more general approximation would be the localization to a union of the spherical shells associated with the maxima of the radial wave function.
  3. In the independent particle model the basis wave functions are products. This is not consistent with the assumption that nucleons arrange to form a string unless the nearest neighbour nucleons at the string can have arbitrary angular distance along the sphere: this would hold true exactly at the limit of vanishing string tension.

    The longer the angular distance, the higher the color magnetic energy of the string. This energy would give rise to correlations inducing the mixing of harmonic oscillator wave functions. This would be the minimal breaking of the independent particle approximation and would describe a possibly new kind of nuclear force between neighboring nucleons of the nuclear string: color magnetic force.

    If the color magnetic interaction corresponds to the MeV scale, the length scale of the flux tubes is electron's Compton length, which is considerably longer than the nuclear radius, and the independent particle approximation would not be badly broken. In this case the interpretation in terms of strong force might make sense. Even for flux tubes with length of the order of the Compton length of u and d quarks, the flux tubes are much longer than the distance between nucleons.

    If the energy scale of exotic nuclei is 1-10 keV, as the variation of the nuclear decay rates seemingly induced by the variations of the X ray flux from the Sun suggests, the color magnetic energy would be rather small and the independent particle approximation would be even better than in the previous case. This is expected to be the case if the color magnetic flux tubes correspond to the length scale assignable to the 1-10 keV scale and are thus so long that the positions of the nucleons inside the nucleus do not matter. The 10 keV scale would in fact correspond to a photon wavelength of about 1 Angstrom - the size of an atom - so that a new connection between nuclear and atomic physics is predicted (see the check below). Note that the classical and quantal pictures are consistent with each other.
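
As a consistency check of these scales (standard conversions, my arithmetic): a photon of energy E= 10 keV has wavelength

    λ = hc/E ≈ (1240 eV nm)/(10⁴ eV) ≈ 0.12 nm ≈ 1 Angstrom,

while the electron's Compton length is

    λe = hc/(mec²) ≈ (1240 MeV fm)/(0.511 MeV) ≈ 2.4× 10³ fm,

indeed much longer than the nuclear radius of a few fm, as used in the MeV-scale argument above.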

Semiclassical considerations

One can consider the situation also semi-classically.

  1. Nuclear shells correspond in the Bohr model based on harmonic oscillator potential to spheres with radii fixed by Bohr's quantization rules. Wave functions are indeed concentrated around the classical radius, but for principal quantum number n one obtains n+1 local maxima (see this; a numerical check follows this list). The wave function at a given shell would thus be localized at n+1 surfaces rather than a single surface, which is definitely a non-classical aspect. The probability density however concentrates mostly at the shell with the largest radius, so that for large values of n the semiclassical approximation becomes better.

    One can of course ask whether this picture contains a deeper seed of truth expressible in terms of space-time topology. This would conform with the TGD based idea that matter resides on geometric shells: this idea is suggested already by the model for the final state of a star predicting that mass is concentrated on a shell. In many-sheeted space-time one expects an onion-like structure made of these shells.

    The TGD based proposal is that in the solar system planets would be accompanied by dark matter shells of this kind with radii predicted approximately by Bohr rules. The TGD based explanation for Pioneer and Flyby anomalies (see this) predicts the same surface density of dark matter at these shells as deduced for the effective surface density of dark matter in the case of the galactic nucleus. Of course, nucleons inside nuclei cannot correspond to dark matter unless the value of heff/h=n is small. Otherwise the size of the nucleus would be too large.

  2. In the semiclassical approximation the radii of the spheres at which the vertices of the polyhedron are located would correspond to the radii of nuclear shells. An approximation in which one treats the angular degrees of freedom quantally using the independent particle model and the radial degree of freedom collectively looks reasonable: it would allow one to keep the rotational symmetries but would mean giving up the additional symmetries making it possible to solve the harmonic oscillator model exactly. With this assumption nuclear strings would reside at spheres.
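
The counting of maxima mentioned in item 1 is easy to verify numerically. A minimal sketch - my illustration, in oscillator units ℏ = m = ω = 1 and with k denoting the radial quantum number - counts the local maxima of the radial probability density r²|R(r)|² of the 3D harmonic oscillator:

    # The radial wave function is R(r) ~ r^l exp(-r^2/2) L_k^(l+1/2)(r^2);
    # the radial probability density has one hump per interval between radial nodes.
    import numpy as np
    from scipy.special import genlaguerre

    def radial_density(k, l, r):
        R = r**l * np.exp(-r**2 / 2) * genlaguerre(k, l + 0.5)(r**2)
        return (r * R)**2

    r = np.linspace(1e-4, 8.0, 4000)
    for k in range(4):
        rho = radial_density(k, l=0, r=r)
        n_max = int(((rho[1:-1] > rho[:-2]) & (rho[1:-1] > rho[2:])).sum())
        print(k, n_max)   # prints k together with k+1 local maxima

A state with k radial nodes thus concentrates on k+1 radii rather than on a single Bohr shell.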

Could magic numbers be understood in terms of Platonic solids?

The harmonic oscillator model predicts the numbers of nucleons for magic nuclei as sums of the numbers of nucleons for the full shells involved, but the predictions are not quite correct. One can however modify the model to get the observed magic numbers. Could these numbers be consistent with the idea that a full shell corresponds to a Platonic solid such that a closed nuclear string, which can connect only neighboring vertices, goes through its vertices without intersecting itself?

  1. Curves of this kind are known as Hamilton cycles, and icosahedral and tetrahedral Platonic cycles are in a key role in the TGD inspired vision about bio-harmony, predicting that DNA sequences have an interpretation as sequences of 3-chords of what I call bio-harmony realizing the genetic code (see this).

    One can also consider replacing the metric Platonic solid with a combinatorial object in which neighboring vertices are defined as ends of the same edge, which can be rather long. This option is consistent with the independent particle model in angular degrees of freedom.

  2. If the polyhedron is a Platonic solid (cube, octahedron, tetrahedron, icosahedron, dodecahedron), the number of nucleons at a given shell would equal the number of vertices of the Platonic solid. One can of course consider more complex scenarios. One could consider adding nucleons also to the centers of edges and faces and even superposing different Platonic solids associated with the same sphere. The same Platonic solid could also appear in scaled variants.
  3. One could consider building the nuclei by adding new spherical layers gradually and assuming that the nucleons are at the vertices (one could consider also putting them at the centers of the faces). The lowest magic numbers are 2, 8, 20, 28, 50, 82, 126, 184 and are reproduced if the shells have n=2, 6, 12, 8, 22, 32, 44, 58 nucleons (see the check below). In the standard approach one can say that each oscillator wave function corresponds to two spin directions, so that the proper number to consider would be m=n/2. The values of m would be m=1, 3, 6, 4, 11, 16, 22, 29. For nuclear strings n is the correct number if nuclear strings are not allowed to intersect themselves, so that a single point of the string cannot contain two nucleons. Also, protons and neutrons can be assumed to belong to different nuclear strings.
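
The increments between successive magic numbers quoted in item 3 are quickly verified (my snippet):

    magic = [2, 8, 20, 28, 50, 82, 126, 184]
    n = [magic[0]] + [b - a for a, b in zip(magic, magic[1:])]
    print(n)                      # [2, 6, 12, 8, 22, 32, 44, 58]
    print([k // 2 for k in n])    # m = n/2: [1, 3, 6, 4, 11, 16, 22, 29]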

Could one understand the numbers n in terms of Platonic solids?

  1. n=2 would correspond to a line segment with 2 vertices. n=6 would correspond to the octahedron. n=12 would correspond to the icosahedron. n=8 would correspond to the cube. Note that the tetrahedron, the only self-dual Platonic solid, predicting n=4, and the dodecahedron with n=20 are missing from the list.
  2. After this the situation does not look simple: the dodecahedron would predict n=20 instead of n=22=4+6+12. An interpretation in terms of a composite of tetrahedron and icosahedron could be considered: the tetrahedron would contain nucleons both at its vertices (4) and edges (6) and the icosahedron at its vertices (12). This looks however rather tricky, and the general model does of course not predict Platonic solids.

These findings would suggest that the independent particle model is not a good approximation for light nuclei, for which a model as a molecule like entity with rather rigid positions of nucleons can be considered if Platonic solids are taken as metric objects.

The experimental findings from the TGD point of view

On the basis of the experimental findings it is far from clear whether one can model nuclei as objects characterized by continuous nucleon densities and obeying some thermodynamical equation of state from which the dynamics describing the oscillations of nucleon densities can be deduced.

  1. Suppose that nuclear shells make sense in the TGD framework also as geometric objects, that is as (say) spherical space-time sheets containing the nuclear string, for which the nucleons at the vertices behave independently in angular degrees of freedom. In this kind of model the approximation as a condensed matter blob is not the first thing that comes to mind. It would be like modelling the solar system by replacing the planets with a planet density and studying oscillations of this density.
  2. If the shell contains only a single particle, the collective wave function for the radius of the sphere associated with the shell coincides with the single particle wave function. In this case one cannot say that the particle is at the surface of the nucleus.
  3. There is no direct interaction with the oscillations of the full shell in the lowest order since the shells correspond to different space-time sheets. The interaction is only in terms of potential functions assignable to the large space-time sheet.

See chapter Nuclear string model or the article Individual nucleons inside nuclei do not behave according to predictions.



Quantization of conductance in neutral matter as evidence for many-sheeted space-time?

We are living in really interesting times. Experimental sciences are producing at an accelerating pace new discoveries challenging the existing theories, and it is difficult to avoid the impression that a revolution is going on in physics and also in biology and neuroscience. It is a pity that colleagues do not seem to even realize what is going on. Ulla's Facebook page Quantum Biology, coherence and decoherence contained this morning a link to an article published in Nature.

The article tells about quantized conductance in neutral matter. In the quantum Hall effect conductance is quantized in multiples of e²/h. Now the quantization is however in multiples of 1/h. Looks strange! This is due to the fact that no voltage is present now: the particles are neutral, the electric field is replaced with the gradient of a chemical potential, and the electric current with a particle current. Hence the elementary charge e is replaced with the unit for particle number, which is just 1 rather than e. Hence the quantization in multiples of 1/h, in complete analogy with the Quantum Hall Effect (QHE).

What comes to my innocent mind is that the effect is mathematically like QHE and that there could also be a fractional variant of it, as in the case of QHE. In QHE the magnetic field and the cyclotron states at flux quanta of this field are in a key role. But in the situation considered they are not present, if we live in the standard model world.

What is the situation in TGD?

  1. In many-sheeted space-time all classical electroweak fields are present as long range fields at a given sheet. This has been one of the key interpretational problems of TGD from the beginning. In particular, Kähler electric and magnetic fields are always associated with non-vacuum extremals although the ordinary electric field can vanish. Note that classical electro-weak fields affect the dynamics indirectly by forcing fermions to the string world sheets! They are clever power holders!
  2. This has inspired the hypothesis that the induced spinor fields describing fundamental fermions are localized at string world sheets at which only em fields are non-vanishing. This assumption guarantees that electromagnetic charge is a well-defined quantum number for the modes of the spinor field and thus also conserved. Classical Z0 fields could be present below the weak scale also at string world sheets. The weak scale is scaled up to a macroscopic scale for large values of heff=n× h, and this could explain the large parity breaking effects in living matter; alternatively, just the fact that the fermionic fields are not where the weak fields are could explain these parity breaking effects.
  3. At the GRT-gauge theory limit the sheets of many-sheeted space-time are replaced with a single one interpreted as a region of Minkowski space, slightly curved and carrying gauge fields: now space-time is not regarded as a surface anymore. Only the classical em field is effectively present above the weak scale since the other electroweak gauge potentials associated with the space-time sheets sum up to something which vanishes on the average at the GRT limit.
These observations lead one to ask whether the quantization of conductivity for neutral particles could be a direct signature of many-sheeted space-time. Could the experiments probe physics at a single sheet of many-sheeted space-time? Could the needed magnetic and electric fields correspond to classical Z0 fields, which can be present at string world sheets below the weak scale, now scaled up by heff/h?

If this approach is on the correct track, then the thermodynamical description in terms of a chemical potential cannot be fundamental (the gradient of the chemical potential replaces that of the electric potential in this description). Leaving the realm of the standard model, one could however wonder whether the thermodynamical description using chemical potentials (chemistry is by definition an effective theory!) is really fundamental in the quantum regime and whether it could reduce to something more fundamental which the standard model can describe only phenomenologically.

  1. I have ended up with two alternative models of the cell membrane in zero energy ontology, relying on a generalization of thermodynamics to a "square root of thermodynamics" with probability amplitudes interpreted as square roots of thermodynamical weights, which are exponentials of thermal energies. These models can also be combined. Both are characterized by a large value of heff=hgr, where hgr is the gravitational Planck constant discussed in the earlier postings about criticality and the hierarchy of Planck constants (see this and earlier postings during November).
  2. In the first model of the cell membrane the Josephson energy determined by the voltage over the cell membrane is generalized by adding to it the difference of the cyclotron energies at the flux tubes on the two sides of the membrane, so that the magnetic fields at the flux tubes appear in the formula (a schematic formula follows this list). This difference of cyclotron energies corresponds to a chemical potential and affects the frequency associated with the Josephson current; the corresponding energy is proportional to heff and therefore above thermal energy.
  3. For the second model classical Z0 fields explaining the large parity breaking effects in living matter are assumed to be present. The chemical potential corresponds to the difference of the Z0 potential over the cell membrane. Could this phase be the phase in which "chemical" conductivity is quantized?
  4. For the hybrid of the two models the theory of QHE would generalize by replacing em fields with combinations of em and Z0 fields. This framework could be used to model also the observed quantization of neutral conductivity as an analog of QHE.
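
Schematically, and as I read the first model (my rendering, with factors of 2π suppressed), the generalized Josephson frequency would be

    f = (ZeV + ΔEc)/heff,   ΔEc = Ec(B₁) − Ec(B₂),

where B₁ and B₂ are the magnetic fields at the flux tubes on the two sides of the membrane. For heff = hgr = GMm/v0 the cyclotron energy

    Ec = ℏgr ωc = (GMm/v0)(qB/m) = GMqB/v0

is independent of the particle mass m, which is the universality property allowing these energies to be above the thermal scale.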

The most obvious objection - that the quantum of conductivity for neutral particles is 1/h rather than g²/h, where g is the appropriate weak coupling strength - does not bite. Experimentalists measure particle currents rather than Z0 currents (j = jZ/gZ) and use the gradient of the chemical potential instead of the Z0 potential (∇μ = gZ EZ). jZ = σZ EZ then implies that the quantization of the measured conductance is in multiples of 1/h.
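
Written out - my schematic rendering of this counting - if the Z0 conductance is quantized as σZ = ν gZ²/h in analogy with QHE, then

    j = jZ/gZ,   ∇μ = gZ EZ,   jZ = σZ EZ
    ⇒  j = (σZ/gZ²) ∇μ = (ν/h) ∇μ,

so the weak coupling cancels and the measured particle conductance comes in multiples of 1/h.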

For details and references see the chapter Quantum Hall effect and hierarchy of Planck constants or the article Quantization of conductance in neutral matter as evidence for many-sheeted space-time?.



To the index page