What's new in HYPERFINITE FACTORS, P-ADIC LENGTH SCALE HYPOTHESIS, AND DARK MATTER HIERARCHY
Note: Newest contributions are at the top!
Year 2018 
BCS superconductivity at almost room temperature
Towards the end of year 2018 I learned about the discovery of BCS type (ordinary) superconductivity at a temperature warmer than that at the North Pole (see this). The compound in question was lanthanum hydride LaH_{10}. Mihail Eremets and his colleagues found that it became superconducting at a temperature of -23 C and a high pressure of 170 GPa, about 1.6 million times the atmospheric pressure (see this). The popular article proposed an intuitive explanation of BCS superconductivity, which was new to me and deserves to be summarized here. Cooper pairs would surf on sound waves. The position of the pair would correspond to a constant phase of the wave and its velocity of motion would be the phase velocity of the sound wave. The intensity of the sound wave would be either maximal or minimal at this position, corresponding to a vanishing force on the Cooper pair. One would have an equilibrium position changing adiabatically, which would conform with the absence of dissipation. This picture would conform with the general TGD based vision inspired by Sheldrake's findings and claims related to morphic resonance (see this), and by the conjectured general properties of preferred extremals of the variational principle implied by the twistor lift of TGD (see this). The experimental discovery is of course in flagrant conflict with the predictions of BCS theory. As the popular article tells, before the work of Eremets et al the maximum critical temperature was thought to be something like 40 K, corresponding to -233 C. The TGD based view is that Cooper pairs have members (electrons) at parallel flux tubes with opposite directions of magnetic flux and spin and have a nonstandard value of Planck constant h_{eff}= n× h_{0}= n× h/6 (see this and this), which is higher than the ordinary value, so that Cooper pairs can be stable at higher temperatures. The flux tubes would have contacts with the atoms of the lattice so that they would experience the sound oscillations and the electrons could surf at the flux tubes.
The mechanism binding electrons to a Cooper pair should be a variant of that in the BCS model. The exchange of phonons generates an attractive interaction between electrons leading to the formation of the Cooper pair. The intuitive picture is that one electron of the Cooper pair can be thought of as lying on a mattress and creating a dip towards which the other electron tends to move. The interaction of the flux tubes with the lattice oscillations, inducing magnetic oscillations, should generate this kind of interaction between electrons at flux tubes and induce the formation of a Cooper pair. The isotope effect is the crucial test: the gap energy and therefore the critical temperature are proportional to the oscillation frequency ω_{D} of the lattice (Debye frequency), which is proportional to 1/M^{1/2} for the mass M of the molecule in question and decreases with the mass of the molecule. One has lanthanum hydride, and can use an isotope of hydrogen to reduce the Debye frequency. The gap energy was found to change in the expected manner. Can the TGD inspired model explain the isotope effect and the anomalously high value of the Debye energy? The naive order of magnitude estimate for the gap energy is of the form E_{gap}= x× hbar_{eff}ω_{D}, with x a numerical factor. The larger the value of h_{eff}= n× h_{0}= n× h/6, the larger the gap energy. Unless the high pressure increases ω_{D} dramatically, the critical temperature 250 K would require n/6∼ T_{cr}/T_{max}(BCS)∼ 250/40∼ 6. For this value the cyclotron energy E_{c}= h_{eff}f_{c} is much below thermal energy for magnetic fields even in the Tesla range, so that the binding energy must be due to the interaction with phonons. The high pressure is needed to keep the lattice rigid enough at high temperatures so that it indeed oscillates rather than "flows". I do not see how this could prevent the flux tube mechanism from working. Neither do I know whether high pressure could somehow increase the value of the Debye frequency to give the large value of the critical temperature.
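The estimates above are simple arithmetic and can be sketched as follows. This is a back-of-the-envelope illustration only: the scaling laws E_gap ∝ hbar_eff ω_D and ω_D ∝ 1/M^{1/2} are those used in the text, and the numerical values are rough.

```python
# Back-of-the-envelope arithmetic for the estimates in the text.
# Assumptions (illustrative): E_gap = x * hbar_eff * omega_D with
# hbar_eff = (n/6) * hbar, and omega_D proportional to 1/sqrt(M).

def required_n_over_6(T_cr, T_max_bcs):
    """Naive ratio n/6 ~ T_cr / T_max(BCS) if omega_D stays unchanged."""
    return T_cr / T_max_bcs

def isotope_shift(omega_D, M_old, M_new):
    """Debye frequency scaling omega_D ~ 1/sqrt(M): heavier isotope, lower omega_D."""
    return omega_D * (M_old / M_new) ** 0.5

# LaH10: T_cr ~ 250 K versus the old BCS ceiling of ~ 40 K
print(required_n_over_6(250.0, 40.0))   # -> 6.25, i.e. n/6 ~ 6
# Replacing hydrogen (M=1) by deuterium (M=2) lowers omega_D by 1/sqrt(2)
print(isotope_shift(1.0, 1.0, 2.0))     # ~ 0.707
```

The same two functions reproduce both the required scaling of h_eff and the direction of the observed isotope effect.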
Unfortunately, the high pressure (170 GPa) makes this kind of high Tc superconductor impractical. See the chapter Quantum Criticality and dark matter or the article New findings related to high Tc superconductivity.
Intelligent blackholes?
Thanks to Nikolina Benedikovic for kindly providing an interesting link and for arousing my curiosity. In the link one learns that Leonard Susskind has admitted that superstrings do not provide a theory of everything. This is actually not a mindblowing surprise, since very few would nowadays claim that the news about the death of superstring theory is premature. Congratulations in any case to Susskind: for a celebrated superstring guru it requires courage to change one's mind publicly. I will not discuss in the following the tragic fate of superstrings. Life must continue despite the death of superstring theory, and there are much more interesting ideas to consider. Susskind is promoting an idea about growing blackholes increasing their volume as they swallow matter around them (see this). The idea is that the volume of the blackhole measures the complexity of the blackhole, and from this it is not a long way to the idea that information, maybe conscious information (I must admit that I cannot imagine any other kind of information), is in question. Some quantum information theorists find this idea attractive. Quantum information theoretic ideas find a natural place also in TGD. Magnetic flux tubes would naturally serve as spacetime correlates for entanglement (the p-adic variants of entanglement entropy can be negative and would serve as measures of conscious information) and this leads to the idea about tensor networks formed by the flux tubes (see this). The so called strong form of holography states that 2-D objects, string world sheets and partonic 2-surfaces as sub-manifolds of spacetime surfaces, carry the information about the spacetime surface and quantum states. M^{8}-M^{4}×CP_{2} correspondence would realize quantum information theoretic ideas at an even deeper level and would mean that a discrete finite set of data would code for the given spacetime surface as a preferred extremal.
In the TGD Universe long cosmic strings thickened to flux tubes would be key players in the formation of galaxies and would contain galaxies as tangles along them. These tangles would contain subtangles having an interpretation as stars, and even planets could be such tangles. I just wrote an article describing a model of quasars (see this) based on this idea. In this model quasars need not be blackholes in the GRT sense but have a structure including a magnetic moment (a blackhole has no hair), an empty disk around it created by the magnetic propeller effect caused by the radial Lorentz force, a luminous ring and accretion disk, and a so called Elvis structure involving an outward flow of matter. One could call them quasi-blackholes; I will explain why later.

What does one really mean with gravitational Planck constant?
There are important questions related to the notion of gravitational Planck constant, to the identification of the gravitational constant, and to the general structure of the magnetic body. What is the gravitational Planck constant really? What does the formula for the gravitational constant in terms of the CP_{2} length defining the Planck length in TGD really mean, and is it realistic? What does spacetime surface as a covering space really mean? The central idea is that spacetime corresponds to an n-fold covering for h_{eff}=n× h_{0}. It is not however quite clear what this statement means.
There is also a puzzle related to the identification of the gravitational Planck constant. In the TGD framework the only theoretically reasonable identification of the Planck length is as the CP_{2} length R(CP_{2}), which is roughly 10^{3.5} times longer than the Planck length. Otherwise one must introduce the usual Planck length as a separate fundamental length. The proposal was that the gravitational constant would be defined as G =R^{2}(CP_{2})/ℏ_{gr}, ℏ_{gr}≈ 10^{7}ℏ. G indeed varies within unexpectedly wide limits, and the fountain effect of superfluidity suggests that the variation can be surprisingly large. There are however problems.
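The quoted value ℏ_{gr}≈ 10^{7}ℏ follows directly from R ≈ 10^{3.5} l_P together with G = R^{2}/ℏ_{gr}. A minimal consistency check in Planck units (illustrative arithmetic only):

```python
# Consistency check of hbar_gr ~ 10^7 * hbar, in Planck units.
# Assumption (from the text): R(CP_2) = 10**3.5 * l_P, while G keeps its
# measured value G = l_P**2 / hbar, so G = R**2 / hbar_gr fixes hbar_gr.

hbar = 1.0
l_P = 1.0                 # Planck length in Planck units
R = 10**3.5 * l_P         # CP_2 length, ~10^3.5 Planck lengths
G = l_P**2 / hbar         # standard relation l_P = sqrt(hbar * G)

hbar_gr = R**2 / G        # demanding G = R^2 / hbar_gr
print(hbar_gr / hbar)     # -> 1e7
```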
If n_{2} corresponds to the order of a finite subgroup G of SU(3), or to the number of elements in a coset space G/H of G (H a normal subgroup of G), one would have a very limited number of values of n_{2}, and it might be possible to understand the fountain effect of superfluidity from the symmetries of CP_{2}, which would take a role similar to the symmetries associated with Platonic solids. In fact, the smaller value of the gravitational constant in the fountain effect would suggest that n_{2} in this case is larger than for G_{N}, so that n_{2} for G_{N} would not be maximal. See the chapter TGD View about Quasars or the article with the same title.

TGD View about Quasars
The work of Rudolph Schild and his colleagues Darryl Leiter and Stanley Robertson (among others) suggests that quasars are not supermassive blackholes but something else: MECOs, magnetic eternally collapsing objects having no horizon and possessing a magnetic moment. Schild et al argue that the same applies to galactic blackhole candidates and active galactic nuclei, perhaps even to ordinary blackholes, as Abhas Mitra, the developer of the notion of MECO, proposes. In the sequel a TGD inspired view about quasars is proposed, relying on the general model for how galaxies are generated as the energy of thickened cosmic strings decays to ordinary matter. Quasars would not be blackhole like objects but would serve as an analog of the decay of the inflaton field producing the galactic matter. The energy of the string like object would replace galactic dark matter and automatically predict a flat velocity spectrum. TGD is assumed to have the standard model and GRT as its QFT limit in long length scales. Could MECOs provide this limit? It seems that the answer is negative: MECOs represent still collapsing objects. The energy of the inflaton field is replaced with the sum of the magnetic energy of the cosmic string and a negative volume energy, which both decrease as the thickness of the flux tube increases. The liberated energy transforms to ordinary particles and their dark variants in the TGD sense. Time reversal of a blackhole would be a more appropriate interpretation. One can of course ask whether the blackhole candidates in galactic nuclei are time reversals of quasars in the TGD sense. The writing of the article led also to considerable understanding of two key aspects of TGD. The understanding of the twistor lift and the p-adic evolution of the cosmological constant improved considerably. Also the understanding of the gravitational Planck constant and the notion of spacetime as a covering space became much more detailed, in turn allowing a much more refined view about the anatomy of the magnetic body.
See the chapter TGD View about Quasars or the article with the same title.

Could dark protons and electrons be involved with dielectric breakdown in gases and conduction in electrolytes?
I have long had the intuitive feeling that electrolytes are not really understood in standard chemistry and physics, and I have expressed this feeling in the TGD model of "cold fusion" (see this). This kind of feeling of course induces an immediate horror reaction turning the stomach around. Not a single scientist in the world seems to be challenging the age-old chemical wisdom. Who am I to do this? Perhaps I really am the miserable crackpot that colleagues have for four decades told me to be. Do I realize only at the high age of 68 that my wise colleagues have been right all the time? A question of my friend related to dielectric breakdown in gases led me to consider this problem more precisely. I will first consider dielectric breakdown and then ionic conduction in electrolytes from the TGD point of view to see whether the hypothesis stating that dark matter consists of phases of ordinary matter with nonstandard Planck constant h_{eff}=nh_{0} (see this) could provide concrete insights to these phenomena. Ionization in dielectric breakdown: One can start from a model for the dielectric breakdown of gas (see this). The basic idea is that the negatively charged cathode emits electrons by tunnelling in the electric field, and these accelerate in the electric field and ionize atoms provided they travel a distance longer than the free path l= 1/nσ before collision. Here n is the number density of atoms and σ the collision cross section, in geometric approximation the cross sectional area of the gas atom. This implies a lower bound on the number density n of gas atoms. On the other hand, too low a density makes also ionizations rare. The positive ions in turn are absorbed by the cathode and more electrons are liberated. In gas, dielectric breakdown results if the field strength is above a critical value E_{cr}. For air one has E_{cr}=3 kV/mm.
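The free-path argument can be put into numbers. The following sketch uses rough textbook values for air (number density at STP, geometric molecular cross section); it is an order-of-magnitude illustration only, not precise data:

```python
# Order-of-magnitude sketch of the free-path argument l = 1/(n*sigma).
# All numbers are rough illustrative values.

e = 1.602e-19       # elementary charge, C (for reference)
n_air = 2.5e25      # number density of air molecules at STP, 1/m^3
sigma = 1.0e-19     # geometric molecular cross section, m^2 (rough)
E_cr = 3.0e6        # breakdown field of air, V/m (3 kV/mm)

mean_free_path = 1.0 / (n_air * sigma)   # l = 1/(n*sigma)
energy_gain_eV = E_cr * mean_free_path   # eV gained by an electron over one free path

print(mean_free_path)   # -> 4e-07 m, a few tenths of a micrometer
print(energy_gain_eV)   # ~ 1 eV; ionization needs the longer paths in the tail
```

The mean energy gain per free path is of order 1 eV, well below typical ionization energies, which is why breakdown relies on the exponential tail of long free paths (the avalanche mechanism).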
In electrolytes one must explain why ions can act as charge carriers in relatively weak electric fields. Concerning the production of electrons at the electrode the situation remains the same. In an electrolyte, however, the free path is much shorter than in gas since the density n is orders of magnitude higher. Therefore the ionization mechanism in electrolytes must be different, at least in the standard physics framework. One can of course ask whether a large value of h_{eff} might help both in the generation of dark electrons at the cathode and also increase the free path of the electrons so that they gain a higher energy in the electric field of the electrolyte, typically much lower than in dielectric breakdown. The mechanism for the dissolution of ions in water involves neither electrodes nor electric field. The ionization of NaCl in water serves as a good example.

The analogs of CKM mixing and neutrino oscillations for particle and its dark variants
The so-called 21-cm anomaly, meaning that there is unexpected absorption of this line, could be due to the transfer of energy from gas to dark matter leading to a cooling of the gas. This requires em interaction of the ordinary matter with dark matter, but the allowed value of electric charge must be much smaller than elementary particle charges. In the TGD Universe the interaction would be mediated by an ordinary photon transforming to a dark photon having effective value of h_{eff}/h_{0}=n larger than the standard value, implying that the em charge of the dark matter particle is effectively reduced. Interaction vertices would involve only particles with the same value of h_{eff}/h_{0}=n. In this article a simple model for the mixing of the ordinary photon and its dark variants is proposed. Due to the transformations between different values of h_{eff}/h_{0}=n during propagation, mass squared eigenstates are mixtures of photons with various values of n. An analog of the CKM matrix describing the mixing is proposed. Also the model for neutrino oscillations is generalized so that it applies not only to photons but to all elementary particles. The condition that the "ordinary" photon is essentially massless during propagation forces one to assume that during propagation the photon is a mixture of ordinary and dark photons, which would both be massive in the absence of mixing. A reduction to the ordinary photon would take place in the interaction vertices and therefore also in the absorption. The mixing provides a new contribution to particle mass besides that coming from p-adic thermodynamics and from the Kähler magnetic fields assignable to the string like object associated with the particle. See the chapter Quantum criticality and dark matter or the article The analogs of CKM mixing and neutrino oscillations for particle and its dark variants.
Is dark DNA dark also in the TGD sense?
I encountered last year a highly interesting article about "dark DNA" hitherto found in the genomes of gerbils and birds, for instance in the genome of the sand rat living in deserts. I have written about this in another book but thought that it might be a good idea to add it also here. The gene called Pdx1 related to the production of insulin seems to be missing, as also 87 other genes surrounding it! What makes this so strange is that the animal cannot survive without these genes! Products that the instructions from the missing genes would create are however detected! According to ordinary genetics, these genes cannot be missing but should be hidden, hence the attribute "dark" in analogy with dark matter. The dark genes contain a lot of G and C bases, and this kind of gene is not easy to detect: this might explain why the genes remain undetected. A further interesting observation is that one part of the sand rat genome has many more mutations than found in other rodent genomes and is also GC rich. Could the mutated genes do the job of the original genes? Missing DNA is found in birds too. For instance, the gene for leptin, a hormone regulating energy balance, seems to be missing. The finding is extremely interesting from the TGD viewpoint, where dark DNA has a very concrete meaning. Dark matter at magnetic flux tubes is what makes matter living in the TGD Universe. Dark variants of particles have a nonstandard value h_{eff}=n× h_{0} (h= 6h_{0} is the most plausible option) of Planck constant making possible macroscopic quantum coherence among other things. Dark matter would serve as a template for ordinary matter in living systems, and biochemistry could be a kind of shadow of the dynamics of dark matter. What I call dark DNA would correspond to dark analogs of atomic nuclei realized as dark proton sequences, with an entangled proton triplet representing a DNA codon.
The model predicts correctly the numbers of DNA codons coding for a given amino acid in the case of the vertebrate genetic code, and therefore I am forced to take it very seriously (see this and this). The chemical DNA strands would be attached to parallel dark DNA strands, and the chemical representation would not always be perfect: this could explain variations of DNA. This picture inspires also the proposal that evolution is not a passive process occurring via random mutations with survivors selected by the evolutionary pressures. Rather, the living system would have an R&D lab as one particular department. Various variants of DNA would be tested by transcribing dark DNA to ordinary mRNA, in turn translated to amino acids, to see whether the outcome survives. This experimentation might be possible in a much shorter time scale than that based on random mutations. Also the immune system, which is rapidly changing, could involve this kind of R&D lab. Also dark mRNA and amino acids could be present, but dark DNA is the fundamental information carrying unit and it would be natural to transcribe it to ordinary mRNA. Of course, also dark mRNA could be produced and translated to amino acids, and even dark amino acids could be transformed to ordinary ones. This would however require additional machinery. What is remarkable is that the missing DNA is indeed associated with DNA sequences with an exceptionally high mutation rate. Maybe the R&D lab is there! If so, dark DNA would be dark also in the TGD sense! Why GC richness should relate to this is an interesting question. See the chapter Quantum criticality and dark matter.
Is it possible to determine experimentally whether gravitation is a quantal interaction?
Marletto and Vedral have proposed (thanks to Ulla for the link) an interesting method for measuring whether gravitation is a quantal interaction (see this). I tried to understand what the proposal suggests and how it translates to TGD language.

Did LIGO observe nonstandard value of G and are galactic blackholes really supermassive?
I have talked (see this) about the possibility that the Planck length l_{P} is actually the CP_{2} length R, which is scaled up by a factor of order 10^{3.5} from the standard Planck length. The basic formula for Newton's constant G would be a generalization of the standard formula to give G= R^{2}/ℏ_{eff}. There would be only one fundamental scale in TGD, as the original idea indeed was. ℏ_{eff} at "standard" flux tubes mediating gravitational interaction (gravitons) would be by a factor of about n∼ 10^{6}-10^{7} larger than h. Also other values of h_{eff} are possible. The mysterious small variations of G known for a long time could be understood as variations of some factors of n. The fountain effect in superfluidity could correspond to a value of h_{eff}/h_{0}=n at gravitational flux tubes increased from the standard value by some integer factor. The value of G would be reduced and would allow particles to get to greater heights already classically. In the Podkletnov effect some factor of n would increase and g would be reduced by a few per cent. A larger value of h_{eff} would induce also a larger delocalization height. Also smaller values are possible, and in fact, in condensed matter scales it is quite possible that n is rather small. Gravitation would be stronger but very difficult to detect in these scales. Neutrons in the gravitational field of Earth might provide a possible test. The general rule would be that the smaller the scale of dark matter dynamics, the larger the value of G, and the maximum value would be G_{max}= R^{2}/h_{0}, h=6h_{0}. Are the blackholes detected by LIGO really so massive? LIGO (see this) has hitherto observed 3 fusions of blackholes giving rise to gravitational waves. For the TGD view about the findings of LIGO see this and this. The colliding blackholes were deduced to have unexpectedly large masses: something like 10-40 solar masses, which is regarded as rather strange.
Could it be that the masses were actually of the order of a solar mass and G was actually larger by this factor and h_{eff} smaller by this factor?! The mass of the colliding blackholes could be of the order of a solar mass, and G would be larger than its normal value, say by a factor in the range [10,50]. If so, the LIGO observations would represent the first evidence for the TGD view about quantum gravitation, which is very different from the superstring based view. The fourth fusion was for neutron stars rather than blackholes, and the stars had masses of the order of a solar mass. This idea works if the physics of the gravitating system depends only on G(M+m). That classical dynamics depends on G(M+m) only follows from the Equivalence Principle. But is this true also for gravitational radiation?
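The degeneracy invoked here, namely that classical two-body dynamics sees only the product G(M+m), can be illustrated with the Kepler frequency. A toy check in arbitrary units, not a model of the LIGO events:

```python
import math

# Classical two-body dynamics depends on G*(M+m) only, so the scaling
# G -> k*G, (M+m) -> (M+m)/k is a symmetry.  Toy illustration.

def kepler_omega(G, M_tot, a):
    """Orbital angular frequency of a two-body system with separation a."""
    return math.sqrt(G * M_tot / a**3)

G0, M0, a = 1.0, 30.0, 5.0   # nominal values, arbitrary units
k = 30.0                     # e.g. 30 solar masses <-> 1 solar mass with G -> 30*G

omega_standard = kepler_omega(G0, M0, a)
omega_scaled = kepler_omega(k * G0, M0 / k, a)
print(omega_standard == omega_scaled)   # -> True: classically indistinguishable
```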
What about supermassive galactic blackholes in the centers of galaxies: are they really supermassive, or is G superlarge? The mass of the Milky Way supermassive blackhole is in the range 10^{5}-10^{9} solar masses. The geometric mean is 10^{7} solar masses, of the same order as the standard value n ∼ 10^{7} of R^{2}/(G_{N}ℏ). Could one think that this blackhole actually has a mass in the range 1-100 solar masses, assignable to an intersection of the galactic cosmic string with itself? How galactic blackholes are formed is not well understood. Now this problem would disappear: galactic blackholes would be there from the beginning! The general conclusion is that only gravitational radiation allows one to distinguish between different masses (M+m) for given G(M+m) in a system consisting of two masses, so that classically the opposite scalings of G and M constitute a symmetry. See the chapter About the Nottale's formula for h_{gr} and the possibility that Planck length l_{P} and CP_{2} length R are identical giving G= R^{2}/ℏ_{eff} or the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant?.
Deviation from the prediction of standard quantum theory for radiative energy transfer in the far region
I encountered in FB a highly interesting finding discussed in two popular articles (see this and this). The original article (see this) is behind a paywall, but one can find the crucial figure 5 online (see this). It seems that experimental physics is in the middle of the revolution of the century, and theoretical physicists straying in the superstring landscape do not have the slightest idea about what is happening. The size scale of the objects studied, for instance membranes at a temperature of the order of room temperature T=300 K, is about 1/2 micrometer: the cell length scale range is in question. They produce radiation, and another similar object is heated if there is a temperature difference between the objects. The heat flow is proportional to the temperature difference, and a radiative conductance called G_{rad} characterizes the situation. Planck's black body radiation law, which initiated the development of quantum theory more than a century ago, predicts G_{rad} at large enough distances.
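For comparison, the far-field value of G_rad implied by Planck's law can be estimated by linearizing the Stefan-Boltzmann exchange between two ideal blackbody surfaces. The numbers below are rough illustrative values for a half-micrometer object at room temperature, not the actual membrane geometry of the experiment:

```python
# Far-field benchmark implied by Planck's law: for two ideal blackbody
# surfaces of area A at nearly equal temperature T, the net power is
# P = sigma * A * (T1**4 - T2**4), so the radiative conductance is
# G_rad = dP/dT = 4 * sigma * A * T**3.  Illustrative sketch only.

sigma_SB = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
T = 300.0              # room temperature, K
side = 0.5e-6          # ~0.5 micrometer object, m
A = side**2            # crude area estimate, m^2

G_rad = 4 * sigma_SB * A * T**3
print(G_rad)           # of order 1e-12 W/K: the far-field ceiling to compare against
```

Reported near-field deviations are measured against this kind of far-field ceiling.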
My guess is that this unavoidably means the beginning of the second quantum revolution brought by the hierarchy of Planck constants. These experimental findings cannot be put under the rug anymore. See the chapter Quantum criticality and dark matter or the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant?.

Galois groups and genes
The question about possible variations of G_{eff} (see this) led again to the old observation that subgroups of the Galois group could be analogous to conserved genes in that they could be conserved in number theoretic evolution. In small variations, such as a variation of a Galois subgroup, the analogs of genes would change G only a little bit. For instance, the dimension of a Galois subgroup would change slightly. There are also big variations of G in which a new subgroup can emerge. The analogy between subgroups of Galois groups and genes goes also in the other direction. I proposed a long time ago that genes (or maybe even DNA codons) could be labelled by h_{eff}/h=n. This would mean that genes (or even codons) are labelled by a Galois group of a Galois extension (see this) of rationals with dimension n defining the number of sheets of the spacetime surface as a covering space. This could give a concrete dynamical and geometric meaning for the notion of gene, and it might be possible some day to understand why a given gene correlates with a particular function. This is of course one of the big problems of biology. One should have some kind of procedure giving rise to hierarchies of Galois groups assignable to genes. One would also like to assign to letter, codon and gene an extension of rationals and its Galois group. The natural starting point would be a sequence of so called intermediate Galois extensions E^{H} leading from rationals or some extension K of rationals to the final extension E. A Galois extension has the property that if a polynomial with coefficients in K has a single root in E, also the other roots are in E, meaning that the polynomial with coefficients in K factorizes into a product of linear polynomials. For Galois extensions the defining polynomials are irreducible so that they do not reduce to a product of polynomials. Any subgroup H⊂ Gal(E/K) leaves the intermediate extension E^{H} invariant elementwise as a subfield of E (see this).
Any subgroup H⊂ Gal(E/K) defines an intermediate extension E^{H}, and subgroups H_{1}⊂ H_{2}⊂... define a hierarchy of extensions E^{H_1}⊃E^{H_2}⊃E^{H_3}... with decreasing dimension. The subgroups H are normal: in other words, Gal(E) leaves them invariant and Gal(E)/H is a group. The order of H is the dimension of E as an extension of E^{H}. This is a highly nontrivial piece of information. The dimension of E factorizes into a product ∏_{i} |H_{i}| of dimensions for a sequence of groups H_{i}. Could a sequence of DNA letters/codons somehow define a sequence of extensions? Could one assign to a given letter/codon a definite group H_{i} so that a sequence of letters/codons would correspond to a product of some kind for these groups, or should one be satisfied only with the assignment of a standard kind of extension to a letter/codon? Irreducible polynomials define Galois extensions, and one should understand what happens to an irreducible polynomial of an extension E^{H} in a further extension to E. The degree of E^{H} increases by a factor, which is the dimension of E/E^{H} and also the dimension of H. Is there a standard manner to construct irreducible extensions of this kind?
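The dimension bookkeeping for a chain of subgroups can be sketched with group orders alone, with no actual field arithmetic. The chain of orders 2 ⊂ 6 ⊂ 24 below is a hypothetical example chosen only to illustrate the multiplicativity of degrees in a tower of fixed fields:

```python
from math import prod

# For a chain of subgroups H1 < H2 < ... < Gal(E/K), the fixed fields
# E^{H_i} form a tower of extensions whose relative degrees multiply
# to the full degree [E:K] = |Gal(E/K)|.  Toy bookkeeping only.

def tower_degrees(subgroup_orders, galois_order):
    """Relative degrees along the tower of fixed fields for a subgroup chain."""
    chain = [1] + list(subgroup_orders) + [galois_order]   # {e} ... Gal
    # [E : E^H] = |H|, so successive relative degrees are ratios of orders
    return [b // a for a, b in zip(chain, chain[1:])]

# Hypothetical example: |Gal| = 24 with subgroups of orders 2 and 6
degrees = tower_degrees([2, 6], 24)
print(degrees)          # -> [2, 3, 4]
print(prod(degrees))    # -> 24, the full degree factorized along the chain
```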
See the chapter About the Nottale's formula for h_{gr} and the possibility that Planck length l_{P} and CP_{2} length R are identical giving G= R^{2}/ℏ_{eff} or the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant?. 
Is the hierarchy of Planck constants behind the reported variation of Newton's constant?
It has been known for a long time that the measurements of G give differing results, with differences between measurements larger than the measurement accuracy (see this and this). This suggests that there might be some new physics involved. In the TGD framework the hierarchy of Planck constants h_{eff}=nh_{0}, h=6h_{0}, together with the condition that the theory contains the CP_{2} size scale R as the only fundamental length scale, suggests the possibility that Newton's constant is given by G= R^{2}/ℏ_{eff}, where R replaces the Planck length (l_{P}= (ℏ G)^{1/2}→ l_{P}=R) and ℏ_{eff}/ℏ is in the range 10^{6}-10^{7}. The spectrum of Newton's constant is consistent with Newton's equations if the scaling of ℏ_{eff} inducing the scaling of G is accompanied by an opposite scaling of M^{4} coordinates in M^{4}× CP_{2}: the dark matter hierarchy would correspond to a discrete hierarchy of scales given by a breaking of scale invariance. In the special case h_{eff}=h_{gr}=GMm/v_{0} one obtains quantum critical dynamics with the gravitational fine structure constant (v_{0}/c)/4π as coupling constant, and it has no dependence on the value of G or the masses M and m. In this article I consider a possible interpretation for the finding of a Chinese research group measuring two different values of G differing by 47 ppm in terms of varying h_{eff}. Also a model for the fountain effect of superfluidity as delocalization of the wave function, with an increase of the maximal height of the vertical orbit due to the change of the gravitational acceleration g at the surface of Earth induced by a change of h_{eff} due to superfluidity, is discussed. Also the Podkletnov effect is considered. TGD inspired theory of consciousness allows one to speculate about levitation experiences possibly induced by the modification of G_{eff} at the flux tubes for some part of the magnetic body accompanying the biological body in TGD based quantum biology.
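Since G = R^{2}/ℏ_{eff} with ℏ_{eff}=nℏ_{0} implies ΔG/G = -Δn/n, the 47 ppm difference translates into a small relative shift of the integer n. A one-line estimate, assuming n ∼ 10^{7} as an illustration:

```python
# If G = R**2 / (n * hbar_0), a fractional change in G maps to an equal
# and opposite fractional change in n.  Sketch of the 47 ppm figure,
# assuming n ~ 10^7 (illustrative value from the text).

n = 1.0e7
dG_over_G = 47e-6        # 47 ppm difference between the two measurements

dn = n * dG_over_G       # |delta n| needed, since dG/G = -dn/n
print(dn)                # -> 470.0: a tiny integer shift in n suffices
```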
See the chapter About the Nottale's formula for h_{gr} and the possibility that Planck length l_{P} and CP_{2} length R are identical giving G= R^{2}/ℏ_{eff} or the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant?. 
How could Planck length be actually equal to much larger CP_{2} radius?!
The following argument states that the Planck length l_{P} equals the CP_{2} radius R: l_{P}=R, and that Newton's constant can be identified as G= R^{2}/ℏ_{eff}. This idea, looking nonsensical at first glance, was inspired by an FB discussion with Stephen Paul King. First some background.
To get some perspective, consider first the phase transition replacing hbar, and more generally hbar_{eff,i}, with hbar_{eff,f}=hbar_{gr}.
See the chapter About the Nottale's formula for h_{gr} and the possibility that Planck length l_{P} and CP_{2} length R are identical giving G= R^{2}/ℏ_{eff} or the article About the physical interpretation of the velocity parameter in the formula for the gravitational Planck constant. 
Unexpected support for nuclear string model
Nuclear string model (see this) replaces the shell model in the TGD framework. Completely unexpected support for the nuclear string model emerged from research published by the CLAS Collaboration in Nature (see this). The popular article "Protons May Have Outsize Influence on Properties of Neutron Stars" refers to possible implications for the understanding of neutron stars, but my view is that the implications might dramatically modify the prevailing view about nuclei themselves. The abstract of the popular article reads (see this): "A study conducted by an international consortium called the CLAS Collaboration, made up of 182 members from 42 institutions in 9 countries, has confirmed that increasing the number of neutrons as compared to protons in the atom's nucleus also increases the average momentum of its protons. The result, reported in the journal Nature, has implications for the dynamics of neutron stars." The finding is that protons tend to pair with neutrons. If the number of neutrons increases, the probability for the pairing increases too. The binding energy of the pair is liberated as kinetic energy of the pair, rather than becoming kinetic energy of the proton as the popular text inaccurately states. Pairing does not fit with the shell model, in which proton and neutron shells correlate very weakly. The weakness of proton-neutron correlations in the nuclear shell model looks somewhat paradoxical in this sense since, as textbooks tell us, it is just the attractive strong interaction between neutron and proton which gives rise to the nuclear binding. In the TGD based view about the nucleus, protons and neutrons are connected by short color flux tubes, so that one obtains what I call a nuclear string (see this). These color flux tubes would bind the nucleons rather than the nuclear force in the conventional sense. What can one say about correlations between nucleons in the nuclear string model?
If the nuclear string has low string tension, one expects that nucleons far away from each other are weakly correlated but neighboring nucleons correlate strongly due to the presence of the color flux tube connecting them. Minimization of the repulsive Coulomb energy would favor protons with neutrons as nearest neighbors, so that pairing would be favored. For instance, one could have nnn... near the ends of the nuclear string and pnpn... in the middle region with strong correlations and higher kinetic energy. Even more neutrons could lie between protons if the nucleus is neutron rich. This could also relate to the neutron halo and to the fact that the number of neutrons tends to be larger than that of protons. An optimist could see the experimental finding as support for the nuclear string model. Color flux tubes can certainly have charge 0, but also charges +1 and -1 are possible, since the flux tube has a quark and an antiquark at its ends, giving uubar, ddbar, udbar, dubar with charges 0, 0, +1, -1. A proton plus a color flux tube with charge -1 would effectively behave as a neutron. Could this kind of pseudo neutrons exist in the nucleus? Or even more radically: could all neutrons in the nucleus be this kind of pseudo neutrons? The radical view conforms with the model of dark nuclei as dark proton sequences, formed for instance in the Pollack effect (see this), in which some color bonds can also become negatively charged to reduce Coulomb repulsion. Dark nuclei have scaled-down binding energy and scaled-up size. They can decay to ordinary nuclei liberating almost all of the ordinary nuclear binding energy: this could explain "cold fusion" (see this). See the chapter Nuclear string model. 
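As a quick consistency check on the charge counting above, a minimal Python sketch using the standard electric charges of the light quarks (the sketch is illustrative only; the function name is mine):

```python
from fractions import Fraction

# Standard electric charges of the light quarks
Q = {'u': Fraction(2, 3), 'd': Fraction(-1, 3)}

def bond_charge(quark, antiquark):
    """Charge of a color flux tube with a quark and an antiquark at its ends.

    The antiquark contributes the opposite of the corresponding quark charge.
    """
    return Q[quark] - Q[antiquark]

# uubar and ddbar are neutral; udbar has charge +1, dubar has charge -1
for q, aq in [('u', 'u'), ('d', 'd'), ('u', 'd'), ('d', 'u')]:
    print(f"{q}{aq}bar: {bond_charge(q, aq)}")
```

So a proton accompanied by a dubar bond (charge -1) has total charge 0, which is the arithmetic behind the "pseudo neutron" suggestion.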
About the physical interpretation of the velocity parameter in the formula for the gravitational Planck constant
Nottale's formula for the gravitational Planck constant hbar_{gr}= GMm/v_{0} involves a parameter v_{0} with dimensions of velocity. I have worked out the quantum interpretation of the formula, but the physical origin of v_{0}, or equivalently of the dimensionless parameter β_{0}=v_{0}/c (to be used in the sequel), has remained open hitherto. In the following a possible interpretation based on the many-sheeted spacetime concept, many-sheeted cosmology, and zero energy ontology (ZEO) is discussed. A generalization of the Hubble formula β=L/L_{H} for the cosmic recession velocity, where L_{H}= c/H is the Hubble length and L is the radial distance to the object, is suggestive. This interpretation would suggest that some kind of expansion is present. The fact however is that stars, planetary systems, and planets do not seem to participate in cosmic expansion. In the TGD framework this is interpreted in terms of quantal jerkwise expansion taking place as relatively rapid expansions analogous to atomic transitions or quantum phase transitions. The TGD based variant of the Expanding Earth model assumes that during the Cambrian explosion the radius of Earth expanded by a factor 2. There are two measures for the size of the system. The M^{4} size L_{M4} is identifiable as the maximum of the radial M^{4} distance from the tip of the CD associated with the center of mass of the system along the light-like geodesic at the boundary of the CD. The system also has a size L_{ind} defined in terms of the induced metric of the spacetime surface, which is space-like at the boundary of the CD. One has L_{ind}<L_{M4}. The identification β_{0}= L_{M4}/L_{H}<1 does not allow the identification L_{H}=L_{M4}. L_{H} would however naturally correspond to the size of the magnetic body of the system, in turn identifiable as the size of the CD. 
One can deduce an estimate for β_{0} by approximating the spacetime surface near the light-cone boundary as a Robertson-Walker cosmology, and expressing the mass density ρ defined as ρ=M/V_{M4}, where V_{M4}=(4π/3) L_{M4}^{3} is the M^{4} volume of the system. ρ can be expressed as a fraction ε^{2} of the critical mass density ρ_{cr}= 3H^{2}/8π G. This leads to the formula β_{0}= [r_{S}/L_{M4}]^{1/2} × (1/ε), where r_{S} is the Schwarzschild radius. This formula is tested for the planetary system and for Earth. The dark matter assignable to Earth can be identified as the innermost part of the inner core, whose volume is 0.01 per cent of the volume of Earth. Also the consistency of the Bohr quantization for dark and ordinary matter is discussed and leads to a number theoretical condition on the ratio of the ordinary and dark masses. See the chapter Quantum Criticality and dark matter or the article About the physical interpretation of the velocity parameter in the formula for the gravitational Planck constant. 
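A minimal numeric sketch of the formula β_{0}= [r_{S}/L_{M4}]^{1/2} × (1/ε), taking the Sun with L_{M4} = 1 AU as an illustrative input; the value ε = 0.3 is a purely hypothetical choice, not one fixed by the text:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg
AU = 1.496e11       # astronomical unit, m

def beta_0(M, L_M4, eps):
    """beta_0 = sqrt(r_S / L_M4) / eps with Schwarzschild radius r_S = 2GM/c^2."""
    r_S = 2 * G * M / c**2
    return math.sqrt(r_S / L_M4) / eps

# Sun with L_M4 = 1 AU and the illustrative eps = 0.3
print(beta_0(M_sun, AU, 0.3))   # ~ 4.7e-4
```

With this (assumed) ε the result is of the order of Nottale's original β_{0} ≈ 5 × 10^{-4} for the inner planets, which is the kind of consistency the formula is meant to provide.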
An island at which body size shrinks
I encountered in Facebook an article claiming that the bodies of animals shrink on the island of Flores belonging to Indonesia. This news is not Dog's days news (Dog's days news is a direct translation from the Finnish synonym for fake news). Both animals and humans really are claimed to have shrunk in size. The bodies of hominins (predecessors of humans), humans, and even elephants have shrunk at Flores.

Badly behaving photons again
I wrote about two years ago about a strange halving of the unit of angular momentum for photons. The article had the title Badly behaving photons and spacetime as 4-surface. Now I encountered a popular article (see this) telling about this strange halving of the photon angular momentum unit two years after writing the above comments. I found nothing new, but my immediate reaction was that the finding could be seen as a direct proof for the h_{eff}=nh_{0} hierarchy, where h_{0} is the minimal value of Planck constant, which need not be the ordinary Planck constant h as I have often assumed in previous writings. Various arguments indeed support h=6h_{0}. This hypothesis would explain the strange findings about the hydrogen atom having what Mills calls hydrino states with larger binding energy than the normal hydrogen atom (see this): the increase of the binding energy would follow from the proportionality of the binding energy to 1/h_{eff}^{2}. For n_{0}=6→ n<6 the binding energy is scaled up as (6/n)^{2}. The values n=1,2,3 dividing 6 are preferred. A second argument supporting h=6h_{0} comes from the model for color vision (see this). What is the interpretation of the ordinary photon angular momentum for n=n_{0}= 6? Quantization of angular momentum as multiples of hbar_{0} reads as l= l_{0}hbar_{0}= (l_{0}/6)hbar, l_{0}=1,2,..., so that fractional angular momenta are possible. l_{0}=6 gives the ordinary quantization, for which the wave function has the same value at all 6 sheets of the covering. l_{0}=3 gives the claimed half-quantization. See the chapter Quantum Criticality and dark matter or the article Badly behaving photons and spacetime as 4-surface. 
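The two scalings above amount to elementary arithmetic; a small sketch assuming h = 6h_{0} (the function names are mine, used only for illustration):

```python
from fractions import Fraction

N0 = 6  # assumed number of sheets for the ordinary Planck constant: h = 6*h_0

def angular_momentum_in_hbar(l0):
    """l = l0 * hbar_0 = (l0/6) * hbar: fractional units become possible."""
    return Fraction(l0, N0)

def binding_energy_scaling(n):
    """Binding energy ~ 1/h_eff^2, so relative to n = 6 it scales as (6/n)^2."""
    return Fraction(N0, n) ** 2

print(angular_momentum_in_hbar(6))  # 1: ordinary quantization
print(angular_momentum_in_hbar(3))  # 1/2: the claimed half-quantization
print(binding_energy_scaling(3))    # 4: binding energy scaled up fourfold
```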
Two new findings related to high Tc superconductivity
I learned simultaneously about two findings related to high Tc superconductivity leading to a proposal of a general mechanism of bio-control in which a small signal can serve as a control knob inducing a phase transition producing macroscopically quantum coherent large h_{eff} phases in living matter.

1. High Tc superconductivity at room temperature and pressure

Indian physicists Kumar Thapa and Anshu Pandey have found evidence for superconductivity at ambient (room) temperature and pressure in nanostructures (see this). There are also earlier claims about room temperature superconductivity that I have discussed in my writings.

1.1 The effect

Here is part of the abstract of the article of Kumar Thapa and Anshu Pandey. "We report the observation of superconductivity at ambient temperature and pressure conditions in films and pellets of a nanostructured material that is composed of silver particles embedded into a gold matrix. Specifically, we observe that upon cooling below 236 K at ambient pressures, the resistance of sample films drops below 10^{-4} Ohm, being limited by instrument sensitivity. Further, below the transition temperature, samples become strongly diamagnetic, with volume susceptibilities as low as -0.056. We further describe methods to tune the transition to temperatures higher than room temperature." During the years I have developed a TGD based model of high Tc superconductivity and of bio-superconductivity (see this and this). Dark matter is identified as phases of ordinary matter with a nonstandard value h_{eff}/h=n of Planck constant (see this) (h=6h_{0} is the most plausible option). Charge carriers are h_{eff}/h_{0}=n dark macroscopically quantum coherent phases of ordinary charge carriers at magnetic flux tubes along which the supra current can flow. The only source of dissipation relates to the transfer of ordinary particles to the flux tubes, which involves also a phase transition changing the value of h_{eff}. 
This superconductivity is essential also for microtubules, which exhibit signatures for the generation of this kind of phase at critical frequencies of AC voltages serving as a metabolic energy feed providing for charged particles the energy that they need in the h_{eff}/h_{0}=n phase. Large h_{eff} phases with the same parameters as the ordinary phase typically have larger energies than the ordinary phase. For instance, atomic binding energies scale like 1/h_{eff}^{2}, and cyclotron energies and harmonic oscillator energies quite generally like h_{eff}. A free particle in a box is however quantum critical in the sense that the energy scale E= hbar_{eff}^{2}/2mL^{2} does not depend on h_{eff} if one has L∝ h_{eff}. At the spacetime level this is true quite generally for external (free) particles identified as minimal 4-surfaces. Quantum criticality means independence of various coupling parameters. What is interesting is that Ag and Au have a single valence electron. The obvious guess would be that the valence electrons become dark and form Cooper pairs in the transition to superconductivity. What is also interesting is that the basic claim of the layman researcher David Hudson is that ORMEs, or monoatomic elements as he calls them, include also gold. These claims are of course not taken seriously by academic researchers. In the language of quantum physics the claim is that ORMEs behave like macroscopic quantum systems. I decided to play with the thought that the claims are correct, and this hypothesis later served as one of the motivations for the hypothesis about dark matter as large h_{eff} phases: this hypothesis follows from adelic physics (see this), which is a number theoretical generalization of ordinary real number based physics. The TGD explanation of high Tc superconductivity and its biological applications strongly suggests that a feed of "metabolic" energy is a prerequisite of high Tc superconductivity quite generally. 
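The quantum criticality of the free particle in a box is easy to verify numerically: with hbar_eff = n·hbar_0 and L = n·L_0 the n-dependence cancels in E = hbar_eff^2/(2mL^2). A minimal sketch in arbitrary units (the parameter values are illustrative):

```python
def box_energy_scale(n, m=1.0, L0=1.0, hbar0=1.0):
    """E = hbar_eff^2 / (2 m L^2) with hbar_eff = n*hbar0 and L = n*L0.

    The factors of n cancel, so E is the same for every n.
    """
    hbar_eff = n * hbar0
    L = n * L0
    return hbar_eff**2 / (2 * m * L**2)

# The energy scale is identical for all n: criticality w.r.t. h_eff
print([box_energy_scale(n) for n in (1, 2, 6, 12)])  # all 0.5
```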
The natural question is whether experimenters might have found something suggesting that an external energy feed, usually seen as a prerequisite for self-organization, is involved with high Tc superconductivity. During the same day I got an FB link to another interesting finding related to high Tc superconductivity in cuprates, suggesting a positive answer to this question!

1.2 The strange observation of Brian Skinner about the effect

After writing the above comments I learned from a popular article (see this) about an objection (see this) challenging the claimed discovery (see this). The claimed finding received a lot of attention and physicist Brian Skinner at MIT decided to test the claims. At first the findings looked quite convincing to him. He however decided to look at the noise in the measured value of the volume susceptibility χ_{V}. χ_{V} relates the magnetic field B in the superconductor to the external magnetic field B_{ext} via the formula B= (1+χ_{V})B_{ext} (in units with μ_{0}=1 one has B_{ext}=H, where H is usually used). For diamagnetic materials χ_{V} is negative, since they tend to repel external magnetic fields. For superconductors one has χ_{V}=-1 in the ideal situation. The situation is not however ideal, and a stepwise change of χ_{V} from χ_{V}=0 to some negative value satisfying |χ_{V}| <1 serves as a signature of high Tc superconductivity. Both superconducting and ordinary phase would be present in the sample. Figure 3a of the article of the authors gives χ_{V} as a function of temperature for some values of B_{ext}, with the color of the curve indicating the value of B_{ext}. Note that χ_{V} depends on B_{ext}, whereas in a strictly linear situation it would not do so. There is indeed a transition at critical temperature T_{c}= 225 K reducing χ_{V}=0 to a negative value in the range χ_{V} ∈ [-0.06, -0.05] having no visible temperature dependence but decreasing somewhat with B_{ext}. 
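The susceptibility convention can be illustrated with a one-line sketch: χ_{V} = -1 corresponds to the ideal Meissner state, and χ_{V} ≈ -0.056 is of the order reported in the abstract (the function name is mine):

```python
def internal_field(B_ext, chi_V):
    """B = (1 + chi_V) * B_ext; chi_V < 0 for diamagnets, -1 for an ideal superconductor."""
    return (1 + chi_V) * B_ext

print(internal_field(1.0, -1.0))    # 0.0: complete flux expulsion
print(internal_field(1.0, -0.056))  # ~0.944: partial expulsion, both phases present
```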
The problem is that the fluctuations of χ_{V} for the green curve (B_{ext}=1 Tesla) and the blue curve (B_{ext}=0.1 Tesla) have the same shape, with the blue curve only shifted downward relative to the green one (the shift corresponds to somewhat larger diamagnetism for the lower value of B_{ext}). If I have understood correctly, the finding applies only to these two curves and to one sample corresponding to T_c= 256 K. The article reports superconductivity with T_c varying in the range [145,400] K. The pessimistic interpretation is that this part of the data is fabricated. A second possibility is that human error is involved. The third interpretation would be that the random looking variation with temperature is not a fluctuation but represents a genuine temperature dependence: this possibility looks infeasible but can be tested by repeating the measurements or simply looking at whether it is present in the other measurements.

1.3 TGD explanation of the effect found by Skinner

One should understand why the effect found by Skinner occurs only for certain pairs of magnetic field strengths B_{ext} and why the shape of the pseudo fluctuations is the same in these situations. Suppose that B_{ext} is realized as flux tubes of fixed radius. The magnetization is due to the penetration of the magnetic field into the ordinary fraction of the sample as flux tubes. Suppose that the superconducting flux tubes are assignable to 2-D surfaces, as in high Tc superconductivity. Could the fraction of superconducting flux tubes with nonstandard value of h_{eff} depend on magnetic field and temperature in a predictable manner? The pseudo fluctuation should have the same shape as a function of temperature for the two values of magnetic field involved but not for other pairs of magnetic field strengths.
2. Transition to high Tc superconductivity involves positive feedback

The discovery of positive feedback in the transition to high Tc superconductivity is described in the popular article "Physicists find clues to the origins of high-temperature superconductivity" (see this). Haoxian Li et al at the University of Colorado at Boulder and the Ecole Polytechnique Federale de Lausanne have published a paper on their experimental results, obtained by using ARPES (Angle Resolved Photoemission Spectroscopy), in Nature Communications (see this). The article reports the discovery of a positive feedback loop that greatly enhances the superconductivity of cuprate superconductors. The abstract of the article is here. "Strong diffusive or incoherent electronic correlations are the signature of the strange-metal normal state of the cuprate superconductors, with these correlations considered to be undressed or removed in the superconducting state. A critical question is if these correlations are responsible for the high-temperature superconductivity. Here, utilizing a development in the analysis of angle-resolved photoemission data, we show that the strange-metal correlations don't simply disappear in the superconducting state, but are instead converted into a strongly renormalized coherent state, with stronger normal state correlations leading to stronger superconducting state renormalization. This conversion begins well above Tc at the onset of superconducting fluctuations and it greatly increases the number of states that can pair. Therefore, there is positive feedback: the superconductive pairing creates the conversion that in turn strengthens the pairing. Although such positive feedback should enhance a conventional pairing mechanism, it could potentially also sustain an electronic pairing mechanism." The TGD explanation of the positive feedback could be the following. The formation of dark electrons requires "metabolic" energy. 
The combination of dark electrons into Cooper pairs however liberates energy. If the liberated energy is larger than the energy needed to transform an electron to its dark variant, it can transform more electrons to the dark state, so that one obtains a spontaneous transition to high Tc superconductivity. The condition for positive feedback could serve as a criterion in the search for materials allowing high Tc superconductivity. The mechanism could be fundamental in TGD inspired quantum biology. The spontaneous occurrence of the transition would make it possible to induce large scale phase transitions by using a very small signal acting therefore as a kind of control knob. For instance, it could apply to bio-superconductivity in the TGD sense, and also to the transition of protons to dark proton sequences giving rise to dark analogs of nuclei with a scaled-down nuclear binding energy at magnetic flux tubes, explaining the Pollack effect. This transition could also be essential in the TGD based model of "cold fusion", based also on the analog of the Pollack effect. It could also be involved with the TGD based model for the finding of a macroscopic quantum phase of microtubules induced by AC voltage at critical frequencies (see this). See the chapter Quantum criticality and dark matter or the article Two new findings related to high Tc superconductivity. 
Two different values for the metallicity of Sun and heating of solar corona: two puzzles with a common solution?
Solar corona could also be a seat of dark nucleosynthesis, and there are indications that this is the case (see this). The metallicity of a stellar object gives important information about its size, age, temperature, brightness, etc. The problem is that measurements give two widely different values for the metallicity of the Sun depending on how one measures it. One obtains 1.3 per cent from the absorption lines of the radiation from the Sun and 1.8 per cent from solar seismic data. Solar neutrinos also give the latter value. What could cause the discrepancy? Problems do not in general appear alone. There is also a second old problem: what is the origin of the heating of the solar corona? Where does the energy needed for the heating come from? The TGD proposal is based on a model which emerged initially as a model for "cold fusion" (not really) in terms of dark nucleosynthesis, which produces dark scaled-up variants of ordinary nuclei as dark proton sequences with much smaller binding energy. This can happen even in living matter: the Pollack effect, involving irradiation by IR light of water bounded by a gel phase, creates negatively charged regions from which part of the protons go somewhere. They could go to magnetic flux tubes and form dark nuclei. This could explain the reported transmutations in living matter not taken seriously by academic nuclear physicists. The TGD proposal is that the protons transform to dark proton sequences at magnetic flux tubes with nonstandard value of Planck constant h_{eff}/h_{0}=n, giving dark nuclei with scaled-up size. Dark nuclei can transform to ordinary nuclei by h_{eff}→ h (h= 6h_{0} is the most plausible option) and liberate almost all of the nuclear binding energy in the process. The outcome would be "cold fusion". This leads to a vision about pre-stellar evolution. First came dark nucleosynthesis, which heated the system and eventually led to a temperature at which ordinary nuclear fusion started. 
This process could occur also outside stellar cores, say in planetary interiors, and a considerable part of nuclei could be created outside stars. A good candidate for the site of dark nucleosynthesis would be the solar corona. Dark nucleosynthesis could heat the corona and create metals also there. They would absorb the radiation coming from the solar core and reduce the measured effective metallicity to 1.3 per cent. See the chapter Cold fusion again or the article Morphogenesis in TGD Universe. 
About Comorosan effect in the clustering of RNA II polymerase proteins
The time scales τ equal to 5, 10, and 20 seconds appear in the clustering of RNA II polymerase proteins and Mediator proteins (see this and the previous posting). What is intriguing is that the so-called Comorosan effect involves a time scale of 5 seconds and its multiples, claimed by Comorosan long ago to be universal time scales in biology. The origin of these time scales has remained more or less a mystery, although I have considered several TGD inspired explanations. The explanation considered here is based on the notion of gravitational Planck constant (see this). One can consider several starting point ideas, which need not be mutually exclusive.
To sum up, the model suggests the idealization of flux tubes as a kind of universal Josephson junction. The model is consistent with the bio-photon hypothesis. The constraints on h_{gr}= GM_{D}m/v_{0} are consistent with the earlier views and allow one to assign the Comorosan time scale of 5 seconds to proton and the nerve pulse time scale to electron as Josephson time scales. This inspires the question whether the dynamics of bio-catalysis and nerve pulse generation could be seen as scaled variants of each other at the quantum level. This would not be surprising if MB controls the dynamics. The earlier assumption that B_{end}=0.2 Gauss is the minimal value of B_{end} must be replaced with the assumption that it is the maximal value of B_{end}. See the chapter Quantum Criticality and dark matter or the article Clustering of RNA polymerase molecules and Comorosan effect. 
Why do RNA polymerase molecules cluster?
I received a link to a highly interesting popular article telling about the work of Ibrahim Cisse at MIT and colleagues (see this), this time about the clustering of proteins in the transcription of RNA. Similar clustering has been observed already earlier and interpreted as a phase separation (see this). Now this interpretation is not proven by the experiments, but the experimenters say that it is quite possible although they cannot prove it. I have already earlier discussed the coalescence of proteins into droplets as this kind of process in the TGD framework. The basic TGD based idea is that proteins, and biomolecules in general, are connected by flux tubes characterized by the value of Planck constant h_{eff}=n× h_{0} for the dark particles at the flux tube. The higher the value of n is, the larger the energy of a given state. For instance, the binding energies of atoms decrease like 1/n^{2}. Therefore the formation of the molecular cluster liberates energy usable as metabolic energy. Remark: h_{0} is the minimal value of h_{eff}. The best guess is that the ordinary Planck constant equals h=6h_{0} (see this and this).

TGD view about the findings

Gene control switches, such as RNA II polymerases in the DNA transcription to RNA, are found to form clusters called super-enhancers. Also so-called Mediator proteins form clusters. In both cases the number of members is in the range 200-400. The clusters are stable, but individual molecules spend a very brief time in them. Clusters have an average lifetime of 5.1±0.4 seconds. Why should the clustering take place? Why is a large number of these proteins present, although a single one would be enough in the standard picture? In the TGD framework one can imagine several explanations. One can imagine at least the following reasons.
See the chapter Quantum Criticality and dark matter or the article Clustering of RNA polymerase molecules and Comorosan effect. 
The discovery of "invisible visible matter" and more detailed view about dark prenuclear physics
That 30 per cent of visible matter has remained invisible is a not so well-known problem related to dark matter. This matter is now identified and assigned to the network of filaments in intergalactic space. The reader can consult the popular article "Researchers find last of universe's missing ordinary matter" (see this). The article "Observations of the missing baryons in the warm-hot intergalactic medium" by Nicastro et al (see this) describes the finding at a technical level. Note that warm-hot refers to the temperature range 10^{5}-10^{6} K. In the TGD framework one can interpret the filament network as a signature of the flux tube/cosmic string network to which one can assign dark matter and dark energy. The interpretation could be that the "invisible visible" matter emerges from the network of cosmic strings as part of the dark energy is transformed to ordinary matter. This is the TGD variant of the inflationary scenario, with inflaton vacuum energy replaced with cosmic strings/flux tubes carrying dark energy and matter. This inspires more detailed speculations about pre-stellar physics according to TGD. The questions are the following. What preceded the formation of stellar cores? What heated the matter to the needed temperatures? The TGD inspired proposal is that it was dark nuclear physics (see the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis?). Dark nuclei with h_{eff}=n× h_{0} were formed first, and these decayed to ordinary nuclei or to dark nuclei with a smaller value of h_{eff}=n× h_{0} and heated the matter so that ordinary nuclear fusion became possible. Remark: h_{0} is the minimal value of h_{eff}. The best guess is that the ordinary Planck constant equals h=6h_{0} (see this and this).

Did animal mitochondrial evolution have a long period of stagnation?
I encountered an interesting popular article telling about findings challenging Darwin's evolutionary theory. The original article of Stoeckle and Thaler is here. The conclusion of the article is that almost all animals, 9 out of 10 animal species on Earth today, including humans, would have emerged about 100,000-200,000 years ago. According to Wikipedia, all animals are assumed to have emerged about 650 million years ago from a common ancestor. The Cambrian explosion began around 542 million years ago. According to Wikipedia, Homo Sapiens would have emerged 300,000-800,000 years ago. On the basis of Darwin's theory, based on survival of the fittest and adaptation to a new environment, one would expect that species such as ants and humans with large populations distributed around the globe become genetically more diverse over time than species living in the same environment. The study of so-called neutral mutations, not relevant for survival and assumed to occur at some constant rate, however finds that this is not the case. The study of so-called mitochondrial DNA barcodes across 100,000 species showed that the variation of neutral mutations became very small about 100,000-200,000 years ago. One could say that the evolution differentiating between them began (or effectively began) after this time. As if the mitochondrial clocks for these species had been reset to zero at that time, as the article states it. This is taken as support for the conclusion that all animals emerged about the same time as humans. The proposal of (at least) the writer of the popular article is that life was almost wiped out by a great catastrophe and extraterrestrials could have helped to start the new beginning. This brings to mind the Noah's Ark scenario. But can one argue that humans and the other animals emerged at that time: were they only survivors of a catastrophe? 
One can also argue that the rate of mitochondrial mutations increased dramatically for some reason at that time. Could one think that a great evolutionary leap initiated the differentiation of mitochondrial genomes at that time, and that before it the differentiation was very slow for some reason? Why would this change have occurred simultaneously in almost all animals? Something should have happened to the mitochondria: what kind of external evolutionary pressure could have caused it?

The experiments of Masaru Emoto with emotional imprinting of water
Sini Kunnas sent a link to a video telling about experiments of Masaru Emoto (see this) with water, which is at criticality with respect to freezing and is then frozen. Emoto reports that words expressing emotions are transmitted to water: positive emotions tend to generate beautiful crystal structures and negative emotions ugly ones. Also music and even pictures are claimed to have similar effects. Emoto has also carried out similar experiments with rice in water. Rice subjected to positive words began to ferment, and water subjected to words expressing negative emotions began to rot. Remark: Fermentation is a metabolic process consuming sugar in the absence of oxygen. Metabolism is a basic signature of life, so that at least in this aspect the water+rice system would become alive. The words expressing positive emotions or even music would serve as a signal "waking up" the system. One could define a genuine skeptic as a person who challenges existing beliefs and a pseudo-skeptic (PS in the sequel) as a person challenging, usually denying, everything challenging the mainstream beliefs. The reception of the claims of Emoto is a representative example of the extremely hostile reactions of PSs, as aggressive watchdogs of materialistic science, towards anything that challenges their belief system. The psychology behind this attitude is the same as behind religious and political fanaticism. I must emphasize that I see myself as a thinker and regard myself as a skeptic in the old-fashioned sense of the word, challenging the prevailing world view rather than phenomena challenging the prevailing world view. I do not want to be classified as believer or non-believer. The fact is that if TGD inspired theory of consciousness and quantum biology describes reality, a revolution in the world view is unavoidable. 
Therefore it is natural to consider the working hypothesis that the effects are real and see what the TGD based explanation for them could be. The Wikipedia article about Masaru Emoto (see this) provides a good summary of the experiments of Emoto and a lot of links, so that I will give here only a brief sketch. According to the article, Emoto believed that water was a "blueprint for our reality" and that emotional "energies" and "vibrations" could change the physical structure of water. The water crystallization experiments of Emoto consisted of exposing water in glasses to different words, pictures or music, and then freezing it and examining the aesthetic properties of the resulting crystals with microscopic photography. Emoto made the claim that water exposed to positive speech and thoughts would result in visually "pleasing" crystals being formed when that water was frozen, and that negative intention would yield "ugly" crystal formations. In 2008, Emoto and collaborators published an article titled "Double-Blind Test of the Effects of Distant Intention on Water Crystal Formation" about his experiments with water in the Journal of Scientific Exploration, a peer reviewed scientific journal of the Society for Scientific Exploration (see this). The work was performed by Masaru Emoto and Takashige Kizu of Emoto's own IHM General Institute, along with Dean Radin and Nancy Lund of the Institute of Noetic Sciences, which is on Stephen Barrett's Quackwatch (see this) blacklist of questionable organizations. PSs are the modern Jesuits, and for Jesuits the end justifies the means. Emoto has also carried out experiments with rice samples in water. There are 3 samples. The first sample "hears" words with positive emotional meaning, the second sample words with negative emotional meaning, and the third sample serves as a control sample. 
Emoto reports (see this) that the rice subjected to words with positive emotional content began to ferment, whereas the water subjected to words expressing negative emotions began to rot. The control sample also began to rot, but not so fast. In the article The experiments of Masaru Emoto with emotional imprinting of water I consider the working hypothesis that the effects are real, and develop an explanation based on TGD inspired quantum biology. The basic ingredients of the model are the following: magnetic body (MB) carrying dark matter as h_{eff}/h=n phases of ordinary matter; communications between MB and biological body (BB) using dark photons able to transform to ordinary photons identifiable as bio-photons; the special properties of water, explained in the TGD framework by assuming a dark component of water implying that criticality for freezing involves also quantum criticality; and the realization of the genetic code and counterparts of the basic biomolecules as dark proton sequences and as 3-chords consisting of light or sound, providing a universal language allowing a universal manner to express emotions in terms of bio-harmony realized as music of light or sound. The entanglement of the water sample and the subject person (with MBs included), realized as flux tube connections, would give rise to a larger conscious entity expressing emotions via language realized in terms of basic biomolecules in a universal manner by utilizing the genetic code realized in terms of both dark proton sequences and the music of light and sound. See the chapter Dark Nuclear Physics and Condensed Matter or the article The experiments of Masaru Emoto with emotional imprinting of water.

How molecules in cells "find" one another and organize into structures?
The title of the popular article How molecules in cells 'find' one another and organize into structures expresses an old problem of biology. Now a group led by Amy S. Gladfelter has made experimental progress on this problem. The work has been published in Science (see this). It is reported that RNA molecules recognize each other and condense into the same droplet due to the specific 3D shapes that the molecules assume. Molecules with complementary base pairing can find each other, and only similar RNAs condense into the same droplet. This brings to mind DNA replication, transcription, and translation. Furthermore, the same proteins that form liquid droplets in healthy cells solidify in diseases like neurodegenerative disorders. Some kind of phase transition is involved in the process, but what brings the molecules together has remained a mystery. The TGD based solution of this mystery is one of the first applications of the notion of many-sheeted spacetime in biology, and relies on the notion of magnetic flux tubes connecting molecules to form networks. Consider first the TGD based model of condensed and living matter. As a matter of fact, the core of this model applies in all scales. What is new is that there are not only particles but also bonds connecting them. In TGD the bonds are flux tubes, which can carry dark particles with a nonstandard value h_{eff}/h=n of Planck constant. In the currently fashionable ER-EPR approach they would be wormholes connecting distant spacetime regions. In that case the problem is instability: wormholes tend to pinch and split. In TGD the monopole magnetic flux takes care of the stability topologically. Flux tube networks occur in all scales, but biological length scales are especially important.
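The complementary-pairing mechanism described above can be illustrated with a toy model. This is purely illustrative (nothing TGD-specific): strands whose base sequences are Watson-Crick complements "find" each other, while everything else stays unpaired, mimicking how only matching RNAs end up in the same droplet.

```python
# Toy illustration of molecular recognition via complementary base pairing.
# Strand names and sequences below are made up for the example.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def complement(seq: str) -> str:
    """Watson-Crick complement of an RNA strand, read in reverse (antiparallel)."""
    return "".join(COMPLEMENT[b] for b in reversed(seq))

def find_partners(strands):
    """Greedily pair each strand with its complement, as co-condensation would."""
    pairs = []
    pool = list(strands)
    while pool:
        s = pool.pop(0)
        c = complement(s)
        if c in pool:
            pool.remove(c)
            pairs.append((s, c))       # complementary strands share a droplet
        else:
            pairs.append((s, None))    # no partner: stays in its own droplet
    return pairs

print(find_partners(["AUGC", "GCAU", "GGGG"]))
# [('AUGC', 'GCAU'), ('GGGG', None)]
```

The point of the toy is only that a purely local matching rule suffices to sort molecules into groups; the open question discussed in the text is what physical agent brings the candidates close enough to test the match.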
See the chapter Quantum Criticality and Dark Matter.

Maxwell's lever rule and expansion of water in freezing: two poorly understood phenomena
The view of condensed matter as a network with molecules as nodes and flux tubes as bonds is one of the basic predictions of TGD, and obviously means a radical modification of the existing picture. In the sequel two old anomalies of standard physics are explained in this conceptual framework. The first anomaly was known already at the time of Maxwell. In the critical region of the gas-liquid phase transition the van der Waals equation of state fails. Empirically, the pressure in the critical region depends only on temperature and is independent of the molecular volume, whereas the van der Waals equation, with its cusp catastrophe type behavior, predicts such a dependence. The problem is quite general and plagues all analytical models based on statistical mechanics. Maxwell's area rule and the lever rule are the proposed modifications of van der Waals in the critical region. There are two phases, liquid and gas, at the same pressure, and the proportions of the phases vary so that the volume varies. The lever rule, used for metal alloys, allows one to explain the mixture but requires that there are two "elements" involved. What the second "element" is in the case of the liquid-gas system is poorly understood. TGD suggests the identification of the second "element" as magnetic flux tubes connecting the molecules. Their number per molecule varies, and above a critical number a phase transition to the liquid phase would take place. The second old problem relates to the numerous anomalies of water (see the web pages of Martin Chaplin). I have discussed these anomalies from the TGD viewpoint (see this). The most well-known anomalies relate to the behavior near the freezing point. Below 4 degrees Celsius water expands rather than contracts as the temperature is lowered. An expansion also takes place in freezing. 
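The lever rule invoked above is simple arithmetic: in the coexistence region the observed specific volume fixes the proportions of the two phases. A minimal sketch with made-up volumes (the numbers are illustrative, not measured values):

```python
# Maxwell's lever rule: in the two-phase coexistence region the specific
# volume v is a weighted average of the liquid (v_l) and gas (v_g) values,
# so the phase fractions follow from v alone at fixed T and p.

def lever_rule(v, v_l, v_g):
    """Return (liquid fraction, gas fraction) for specific volume v."""
    if not v_l <= v <= v_g:
        raise ValueError("v must lie between the coexisting phase volumes")
    x_gas = (v - v_l) / (v_g - v_l)
    return 1.0 - x_gas, x_gas

# Example: a volume halfway between the phase volumes -> 50/50 mixture.
print(lever_rule(v=0.5, v_l=0.25, v_g=0.75))  # (0.5, 0.5)
```

The rule itself says nothing about what the second "element" is physically; that is exactly the gap the flux tube proposal of the text is meant to fill.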
A general TGD based explanation for the anomalies of water would be the presence of dark phases with a nonstandard value of Planck constant h_{eff}/h=n (see this). Combined with the above proposal, this would mean that the flux tubes associated with hydrogen bonds can also have a nonstandard value of Planck constant, in which case the flux tube length scales like n. The reduction of n would shorten long flexible flux tubes to short and rigid ones. This would reduce the motility of the molecules and also force them nearer to each other. This would create empty volume and lead to an increase of the volume per molecule as the temperature is lowered. Quite generally, the energy of particles with a nonstandard value of Planck constant is higher than that of ordinary ones (see this). In freezing all dark flux tubes would transform to ordinary ones and the surplus energy would be liberated, so that the latent heat should be anomalously high for all molecules forming hydrogen bonds. Indeed, for both water and NH_{3}, which have hydrogen bonds, the latent heat is anomalously high. Hydrogen bonding is possible if the molecules have atoms with lone electron pairs (electrons not assignable to valence bonds). Lone electron pairs could form Cooper pairs at the flux tube pairs assignable to hydrogen bonds and carrying the dark proton. Therefore also high T_{c} superconductivity could be possible. See the chapter Quantum Criticality and Dark Matter of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy" or the article Maxwell's lever rule and expansion of water in freezing: two poorly understood phenomena.

Superfluids dissipate!
People at Aalto University (located in Finland, by the way) are doing excellent work: there is full reason to be proud! I learned of the most recent experimental discovery made at Aalto University from Karl Stonjek. The title of the popular article is Friction found where there should be none—in superfluids near absolute zero. In a rotating superfluid one has vortices, and they should not dissipate. The researchers at Aalto University however observed dissipation: the finding by J. Mäkinen et al is published in Phys Rev B. Dissipation means that the vortices lose energy to the environment. How could one explain this? What comes to mind for an inhabitant of the TGD Universe is the hierarchy of Planck constants h_{eff}=n×h labelling a hierarchy of dark matters as phases of ordinary matter. The reduction of the Planck constant h_{eff} liberates energy in a phase transition like manner, giving rise to dissipation. This kind of burst-like liberation of energy is mentioned in the popular article ("glitches" in neutron stars). I have already earlier proposed an explanation of the fountain effect of superfluidity, in which the superfluid flow seems to defy gravity. The explanation is in terms of a large value of h_{eff} implying delocalization of the superfluid particles in long length scales (see this). Remark: Quite generally, binding energies are reduced as a function of h_{eff}/h=n. One has 1/n^{2} proportionality for atomic binding energies, so that atomic energies, defined as rest energy minus binding energy, indeed increase with n. Interestingly, dimension 3 of space is unique in this respect. Harmonic oscillator energies and cyclotron energies are in turn proportional to n. The value of n for a molecular valence bond depends on the atoms involved, and the binding energies of valence bonds decrease as the valence of the atom with the larger valence increases. One can say that valence bonds involving an atom at the right end of a row of the periodic table carry metabolic energy. 
This is indeed the case, as one finds by looking at the chemistry of nutrient molecules. The burst of energy would correspond to a reduction of n at the flux tubes associated with the superfluid. Could the vortices decompose into smaller vortices with a smaller radius, maybe proportional to n? I proposed a similar mechanism of dissipation in ordinary fluids more than two decades ago. Could also ordinary fluids involve a hierarchy of Planck constants, and could they dissipate in the same manner? In biology the liberation of metabolic energy (say in motor action) would take place in this kind of "glitch". It would reduce the h_{eff} resources and thus the ability to generate negentropy: negentropy resources become smaller, one gets tired, and thinking becomes fuzzy. See the chapter Quantum criticality and dark matter.
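The scaling laws quoted in the remark above (a TGD hypothesis, not standard physics) can be summarized in a few lines. The 1/n^{2} law follows from the Bohr formula, where the binding energy is proportional to 1/ℏ^{2}, and the linear law from E = ℏ_{eff}ω; the numerical values are illustrative.

```python
# Sketch of the h_eff = n*h scaling laws quoted in the text (TGD hypothesis):
# hydrogen-like binding energies scale as 1/n^2 (Bohr formula, E ~ 1/hbar^2),
# while harmonic oscillator and cyclotron energies scale linearly in n.

E_BOHR = 13.6  # eV, hydrogen ground-state binding energy for n = 1 (h_eff = h)

def binding_energy(n: int) -> float:
    """Bohr binding energy with Planck constant scaled by n: E = E_1 / n^2."""
    return E_BOHR / n**2

def cyclotron_energy(e0: float, n: int) -> float:
    """Cyclotron energy E = hbar_eff * omega_c scales linearly with n."""
    return e0 * n

for n in (1, 2, 6):
    print(n, binding_energy(n), cyclotron_energy(1.0, n))
```

The sketch makes the qualitative point of the remark explicit: increasing n raises the total atomic energy (rest energy minus binding energy), so a reduction of n releases a burst of energy.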

Condensed matter simulation of 4D quantum Hall effect from TGD point of view
There is interesting experimental work related to the condensed matter simulation of physics in spacetimes with D=4 spatial dimensions, meaning that one would have a D=1+4=5-dimensional spacetime (see this and this). What is simulated is the 4D quantum Hall effect (QHE). In M-theory, D=1+4-dimensional branes would have 4 spatial dimensions and 4D QHE would also be possible, so that the simulation allows one to study this speculative higher-D physics, but of course does not prove that 4 spatial dimensions are there. In this article I try to understand the simulation, discuss the question whether 4 spatial dimensions and even 4+1 dimensions are possible in the TGD framework in some sense, and also consider the general idea of simulating higher-D physics using 4D physics. This possibility is suggested by the fact that it is possible to imagine higher-dimensional spaces and physics: maybe this ability requires the simulation of higher-D physics using 4D physics. See the chapter Quantum Hall effect and Hierarchy of Planck Constants or the article Condensed matter simulation of 4D quantum Hall effect from TGD point of view.

Exciton-polariton Bose-Einstein condensate at room temperature and h_{eff} hierarchy
Ulla gave in my blog a link to a very interesting work about Bose-Einstein condensation of quasiparticles known as exciton-polaritons. The popular article tells about a research article published in Nature by IBM scientists. Bose-Einstein condensation happens for exciton-polaritons at room temperature; this temperature is four orders of magnitude higher than the corresponding temperature for crystals. This puts bells ringing. Could h_{eff}/h=n be involved? One learns from Wikipedia that exciton-polaritons are quasiparticles formed when a photon kicks an electron to a higher energy state, creating an electron-hole pair, the exciton, which couples to the photon. These quasiparticles would form a Bose-Einstein condensate with a large number of particles in the ground state. The critical temperature corresponds to the divergence of the Boltzmann factor given by Bose-Einstein statistics.
Is this possible?
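For comparison, the standard (non-TGD) account of the high condensation temperature invokes the tiny effective mass of the polariton: the textbook ideal-gas BEC formula T_c = (2πℏ²/(m k_B)) (n/ζ(3/2))^{2/3} gives T_c ∝ 1/m at fixed density. A sketch with illustrative numbers (the density value below is arbitrary, chosen only to show the scaling):

```python
import math

# Ideal-gas Bose-Einstein condensation temperature,
#   T_c = (2*pi*hbar^2 / (m*k_B)) * (n / zeta(3/2))**(2/3).
# At fixed density T_c scales as 1/m, so a quasiparticle that is
# four orders of magnitude lighter condenses at a temperature
# four orders of magnitude higher.

HBAR = 1.054571817e-34   # J*s
K_B = 1.380649e-23       # J/K
M_E = 9.1093837015e-31   # kg, electron mass
ZETA_3_2 = 2.6123753486854883  # Riemann zeta(3/2)

def t_critical(density: float, mass: float) -> float:
    """BEC critical temperature (K) for number density (1/m^3) and mass (kg)."""
    return (2 * math.pi * HBAR**2 / (mass * K_B)) * (density / ZETA_3_2) ** (2 / 3)

# Illustrative: a polariton-like boson of ~1e-4 electron masses vs. an
# electron-mass boson at the same (arbitrary) density.
ratio = t_critical(1e18, 1e-4 * M_E) / t_critical(1e18, M_E)
print(ratio)  # ~1.0e4
```

This makes the scale of the claimed four-orders-of-magnitude jump concrete; whether a nonstandard h_{eff} plays an additional role, as asked above, is the TGD question.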
For background see the chapter Criticality and dark matter. 