ABSTRACTS

PART I: HYPERFINITE FACTORS AND HIERARCHY OF PLANCK CONSTANTS
Does TGD Predict the Spectrum of Planck Constants?

The quantization of Planck constant has been a basic theme of TGD since 2005. The basic idea was stimulated by the finding of Nottale that planetary orbits could be seen as Bohr orbits with an enormous value of Planck constant given by hbar_{gr} = GM_{1}M_{2}/v_{0}, where the velocity parameter v_{0} has the approximate value v_{0} ≈ 2^{-11} (in units with c=1) for the inner planets. This inspired the ideas that the quantization is due to a condensation of ordinary matter around dark matter concentrated near Bohr orbits and that dark matter is in a macroscopic quantum phase in astrophysical scales. The second crucial empirical input was the anomalies associated with living matter. The recent version of the chapter represents the evolution of the ideas about the quantization of Planck constant from the perspective given by seven years' work with the idea. A very concise summary of the situation is as follows.

Basic physical ideas

The basic phenomenological rules are simple and there is no need to modify them.
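As a rough sanity check of the scale involved, the formula hbar_{gr} = GM_{1}M_{2}/v_{0} can be evaluated numerically for the Sun-Earth pair. The constants and the identification v_{0} = 2^{-11}c are assumptions of this illustration, not part of the abstract:

```python
# Illustrative arithmetic only: the gravitational Planck constant
# hbar_gr = G*M1*M2/v_0 for the Sun-Earth system, taking the velocity
# parameter to be v_0 = 2**-11 * c (the Nottale-type value for inner planets).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # ordinary Planck constant, J s
M_sun = 1.989e30     # kg
M_earth = 5.972e24   # kg

v0 = c / 2**11                       # velocity parameter, ~146 km/s
hbar_gr = G * M_sun * M_earth / v0   # J s
ratio = hbar_gr / hbar               # enormous dimensionless number
print(f"hbar_gr/hbar ~ {ratio:.3e}")
```

The point of the exercise is only that the ratio hbar_gr/hbar comes out gigantic (of order 10^{73}), which is what "enormous value of Planck constant" means quantitatively.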
Space-time correlates for the hierarchy of Planck constants

The hierarchy of Planck constants was introduced to TGD originally as an additional postulate and formulated as the existence of a hierarchy of imbedding spaces defined as Cartesian products of singular coverings of M^{4} and CP_{2} with numbers of sheets given by integers n_{a} and n_{b}, and hbar = n hbar_{0}, n = n_{a}n_{b}. With the advent of zero energy ontology, it became clear that the notion of singular covering space of the imbedding space could be only a convenient auxiliary notion. Singular means that the sheets fuse together at the boundary of the multi-sheeted region. The effective covering space emerges naturally from the vacuum degeneracy of Kähler action, which means that all deformations of canonically imbedded M^{4} in M^{4}×CP_{2} have vanishing action up to fourth order in the small perturbation. This is clear from the fact that the induced Kähler form is quadratic in the gradients of CP_{2} coordinates and Kähler action is essentially Maxwell action for the induced Kähler form. The vacuum degeneracy implies that the correspondence between the canonical momentum currents ∂L_{K}/∂(∂_{α}h^{k}) defining the modified gamma matrices and the gradients ∂_{α}h^{k} is not one-to-one: the same canonical momentum current corresponds to several values of the gradients of imbedding space coordinates. At the partonic 2-surfaces at the light-like boundaries of CD carrying the elementary particle quantum numbers this implies that the two normal derivatives of h^{k} are many-valued functions of the canonical momentum currents in the normal directions. A multifurcation is in question, and multifurcations are indeed generic in highly nonlinear systems, Kähler action being an extreme example of a nonlinear system. What does multifurcation mean in quantum theory? The branches of a multifurcation are obviously analogous to single particle states.
In quantum theory second quantization means that one constructs not only single particle states but also the many particle states formed from them. At the space-time level single particle states would correspond to the N branches b_{i} of a multifurcation carrying fermion number. Two-particle states would correspond to a 2-fold covering consisting of 2 branches b_{i} and b_{j} of the multifurcation. An N-particle state would correspond to an N-sheeted covering with all branches present and carrying elementary particle quantum numbers. The branches coincide at the partonic 2-surface, but since their normal space data are different, they correspond to different tensor product factors of the state space. Also now the factorization N = n_{a}n_{b} occurs, but now n_{a} and n_{b} would relate to branchings in the direction of the space-like 3-surface and the light-like 3-surface rather than M^{4} and CP_{2} as in the original hypothesis. Multifurcations relate closely to the quantum criticality of Kähler action. Feigenbaum bifurcations represent a toy example of a system which via successive bifurcations approaches chaos. Now more general multifurcations, in which each branch of a given multifurcation can multifurcate further, are possible unless one poses additional conditions. This allows one to identify an additional aspect of the geometric arrow of time. Either the positive or negative energy part of the zero energy state is "prepared", meaning that a single n-sub-furcation of the N-furcation is selected. The most general state of this kind involves a superposition of various n-sub-furcations.
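The Feigenbaum toy example mentioned above can be made concrete with the logistic map, whose attractor doubles its number of branches at successive bifurcations on the route to chaos. This is a sketch of bifurcation cascades in general, not of Kähler action dynamics:

```python
# The logistic map x -> r*x*(1-x) undergoes successive period-doubling
# (Feigenbaum) bifurcations: the attractor has 1, 2, 4, ... branches as the
# parameter r grows, before the onset of chaos near r ~ 3.57.
def attractor_size(r, n_transient=2000, n_sample=256, tol=1e-6):
    """Count the distinct points on the attractor of the logistic map."""
    x = 0.5
    for _ in range(n_transient):      # discard the transient
        x = r * x * (1.0 - x)
    points = []
    for _ in range(n_sample):         # sample the settled orbit
        x = r * x * (1.0 - x)
        if not any(abs(x - p) < tol for p in points):
            points.append(x)
    return len(points)

for r in (2.8, 3.2, 3.5):             # fixed point, 2-cycle, 4-cycle
    print(r, attractor_size(r))
```

Each bifurcation doubles the number of branches, which is the analogy the text draws on: an N-furcation supplies N branch "slots" analogous to single particle states.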
Negentropy Maximization Principle and TGD Inspired Theory of Consciousness

In the TGD Universe the moments of consciousness are associated with quantum jumps between quantum histories. The proposal is that the dynamics of consciousness is governed by Negentropy Maximization Principle (NMP), which states that the information content of conscious experience is maximal. The formulation of NMP is the basic topic of this chapter. NMP codes for the dynamics of standard state function reduction and states that the state function reduction process following the U-process gives rise to a maximal reduction of entanglement entropy at each step. In the generic case this implies at each step a decomposition of the system into unique unentangled subsystems, and the process repeats itself for these subsystems. The process stops when the resulting subsystem cannot be decomposed into a pair of free systems, since energy conservation makes the reduction of entanglement kinematically impossible in the case of bound states. The natural assumption is that the self loses consciousness when it entangles via bound state entanglement. There is an important exception to this vision based on ordinary Shannon entropy. There exists an infinite hierarchy of number theoretic entropies making sense for rational or even algebraic entanglement probabilities. In this case the entanglement entropy can be negative, so that NMP favors the generation of negentropic entanglement (NE), which is not bound state entanglement in the standard sense, since the condition that state function reduction leads to an eigenstate of the density matrix requires the final state density matrix to be a projection operator. NE might serve as a correlate for emotions like love and the experience of understanding. The reduction of ordinary entanglement entropy to a random final state implies the second law at the level of the ensemble.
For the generation of NE the outcome of the reduction is not random: the prediction is that the second law is not a universal truth holding true in all scales. Since number theoretic entropies are natural in the intersection of the real and p-adic worlds, this suggests that life resides in this intersection. The existence of effectively bound states with no binding energy might have important implications for the understanding of the stability of basic biopolymers and the key aspects of metabolism. A natural assumption is that the self experiences an expansion of consciousness as it entangles in this manner. Quite generally, an infinite self hierarchy with the entire Universe at the top is predicted. There are two options to consider. The strong form of NMP would demand maximal negentropy gain: this would not allow morally responsible free will if ethics is defined in terms of evolution as increase of NE resources. The weak form of NMP would allow the self to choose also a lower-dimensional subspace of the final state subspace that the projector dictated by the strong form of NMP would define. The weak form turns out to have several highly desirable consequences: it favours dimensions of the final state space coming as powers of a prime, and in particular dimensions which are primes near powers of a prime: as a special case, the p-adic length scale hypothesis follows. The weak form of NMP also allows quantum computations which halt, unlike the strong form of NMP. Besides number theoretic negentropies there are also other new elements as compared to the earlier formulation of NMP.

PART II: P-ADIC LENGTH SCALE HIERARCHY AND DARK MATTER HIERARCHY

Nuclear String Hypothesis

The nuclear string model in the form discussed in this chapter allows one to understand the nuclear binding energies of both A>4 nuclei and A≤4 nuclei in terms of three fractal variants of QCD. The model also explains giant resonances and so-called pygmy resonances in terms of decoherence of Bose-Einstein condensates of exotic pion-like color bosons into sub-condensates.

1. Background

The nuclear string hypothesis is one of the most dramatic almost-predictions of TGD. The hypothesis in its original form assumes that nucleons inside the nucleus organize into closed nuclear strings, with neighboring nucleons of the string connected by exotic meson bonds consisting of a color magnetic flux tube with quark and antiquark at its ends. The lengths of the flux tubes correspond to the p-adic length scale of the electron, and therefore the mass scale of the exotic mesons is around 1 MeV, in accordance with the general scale of nuclear binding energies. The long lengths of the em flux tubes increase the distance between nucleons and reduce Coulomb repulsion. A fractally scaled-up variant of ordinary QCD with respect to the p-adic length scale would be in question, and the usual wisdom about ordinary pions and other mesons as the origin of nuclear force would be simply wrong in the TGD framework, as the large mass scale of the ordinary pion indeed suggests. The presence of exotic light mesons in nuclei has been proposed also by Chris Illert based on evidence for charge fractionization effects in nuclear decays.
2. A>4 nuclei as nuclear strings consisting of A≤4 nuclei

During the last weeks a more refined version of the nuclear string hypothesis has evolved.
3. Bose-Einstein condensation of color bonds as a mechanism of nuclear binding

The attempt to understand the variation of the nuclear binding energy and its maximum for Fe leads to a quantitative model of nuclei lighter than Fe as color bound Bose-Einstein condensates of ^{4}He nuclei, or rather, of pion-like colored states associated with the color flux tubes connecting ^{4}He nuclei.
Giant (dipole) resonances and so-called pygmy resonances, interpreted in terms of decoherence of the Bose-Einstein condensates associated with A≤4 nuclei and with the nuclear string formed from A≤4 nuclei, provide a unique test for the model. The key observation is that the splitting of the Bose-Einstein condensate into pieces costs a precisely defined energy due to the n^{2} dependence of the total binding energy.
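The "precisely defined energy" can be written out explicitly: if the condensate binding energy scales as E(n) = E_1 n^2 (E_1 an unspecified energy unit, my notation), then splitting an n-condensate into pieces of sizes k and n-k costs E(n) - E(k) - E(n-k) = 2 E_1 k (n-k):

```python
# Sketch of the splitting cost implied by the n**2 dependence of the
# total binding energy of a Bose-Einstein condensate of n color bonds.
def splitting_cost(n: int, k: int, E1: float = 1.0) -> float:
    """Energy cost of splitting an n-condensate into pieces of size k and n-k,
    assuming binding energy E(m) = E1 * m**2; equals 2*E1*k*(n-k)."""
    E = lambda m: E1 * m * m
    return E(n) - E(k) - E(n - k)

print([splitting_cost(8, k) for k in range(1, 8)])  # maximal for k = n//2
```

The cost is quadratic and depends only on the sizes of the two pieces, which is why the decoherence of the condensate produces a discrete, testable spectrum of resonance energies.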
A speculative picture proposing a connection between homeopathy, water memory, and the phantom DNA effect is discussed, and on the basis of this connection a vision is developed about how the tqc (topological quantum computation) hardware represented by the genome is actively developed by subjecting it to evolutionary pressures represented by a virtual world representation of the physical environment. The speculation inspired by this vision is that the genetic code as well as DNA, RNA and amino acid sequences should have representations in terms of nuclear strings. The model for dark baryons indeed leads to an identification of these analogs, and the basic numbers of the genetic code, including the numbers of amino acids coded by a given number of codons, are predicted correctly. Hence it seems that the genetic code is universal rather than being an accidental outcome of the biological evolution.
Super-Conductivity in Many-Sheeted Space-Time

In this chapter a model for high T_{c} superconductivity as a quantum critical phenomenon is developed. The model relies on the notions of quantum criticality, dynamical quantized Planck constant requiring a generalization of the 8-D imbedding space to a book-like structure, and many-sheeted space-time. In particular, the notion of magnetic flux tube as a carrier of supra current is a central concept. With a sufficient amount of twisting and weaving of these basic ideas one ends up with a concrete model for high T_{c} superconductors as quantum critical superconductors consistent with the qualitative facts that I am personally aware of. The following minimal model looks like the most realistic option found hitherto.
At the qualitative level the model explains various strange features of high T_{c} superconductors. One can understand the high value of T_{c} and the ambivalent character of high T_{c} superconductors, the existence of the pseudogap and scaling laws for observables above T_{c}, the role of stripes and doping and the existence of a critical doping, etc.

Quantum Hall effect and Hierarchy of Planck Constants

In this chapter I try to formulate more precisely the recent TGD based view about the fractional quantum Hall effect (FQHE). This view is much more realistic than the original rough scenario, which neglected the existing rather detailed understanding. The spectrum of ν and the mechanism producing it are the same as in the composite fermion approach. The new elements relate to the not so well understood aspects of FQHE, namely charge fractionization, the emergence of braid statistics, and the non-abelianity of braid statistics.

A Possible Explanation of Shnoll Effect

Shnoll and collaborators have discovered strange repeating patterns in the random fluctuations of physical observables such as the number n of nuclear decays in a given time interval. Periodically occurring peaks for the distribution of the number N(n) of measurements producing n events in a series of measurements as a function of n are observed instead of a single peak. The positions of the peaks are not random, and the patterns depend on position and time, varying periodically in time scales possibly assignable to the Earth-Sun and Earth-Moon gravitational interactions. These observations suggest a modification of the expected probability distributions, but it is very difficult to imagine any physical mechanism for this in the standard physics framework. Rather, a universal deformation of the predicted probability distributions would be in question, requiring something analogous to the transition from classical physics to quantum physics. The hint about the nature of the modification comes from TGD inspired quantum measurement theory, which proposes a description of the notion of finite measurement resolution in terms of inclusions of so-called hyperfinite factors of type II_{1} (HFFs) and the closely related quantum groups. Also p-adic physics, another key element of TGD, is expected to be involved. A modification of a given probability distribution P(n|λ_{i}) for a positive integer valued variable n characterized by rational-valued parameters λ_{i} is obtained by replacing n and the integers characterizing λ_{i} with so-called quantum integers depending on the quantum phase q_{m} = exp(i2π/m). The quantum integer n_{q} must be defined as the product of the quantum counterparts p_{q} of the primes p appearing in the prime decomposition of n. One has p_{q} = sin(2π p/m)/sin(2π/m) for p ≠ P and p_{q} = P for p = P. Here m must satisfy m ≥ 3, m ≠ p, and m ≠ 2p. The quantum counterparts of positive integers can be negative.
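The definition of the quantum integer just given translates directly into code. A minimal sketch, with the example values of m and P my own choices for illustration:

```python
# Quantum integers as defined in the abstract: n_q is the product of the
# quantum primes p_q = sin(2*pi*p/m)/sin(2*pi/m) over the prime decomposition
# of n, except that the distinguished p-adic prime P is mapped to itself.
from math import sin, pi

def prime_factors(n):
    """Prime factorization of n, with multiplicity."""
    fs, d = [], 2
    while d * d <= n:
        while n % d == 0:
            fs.append(d); n //= d
        d += 1
    if n > 1:
        fs.append(n)
    return fs

def quantum_integer(n, m, P):
    """n_q for quantum phase q_m = exp(i*2*pi/m) and p-adic prime P."""
    nq = 1.0
    for p in prime_factors(n):
        nq *= P if p == P else sin(2*pi*p/m) / sin(2*pi/m)
    return nq

print(quantum_integer(6, 7, 5))   # product of the quantum primes 2_q and 3_q
print(quantum_integer(3, 5, 7))   # negative: sin(6*pi/5) < 0
```

The second example shows the property stated in the text: the quantum counterpart of a positive integer can be negative, since sin(2πp/m) changes sign as 2πp/m crosses π.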
Therefore the quantum distribution is defined first as a p-adic valued distribution and then mapped by the so-called canonical identification I to a real distribution, by the map taking p-adic 1 to P and powers P^{n} to P^{n} and other quantum primes to themselves, and by requiring that the mean value of n is the same for the distribution and its quantum variant. The map I satisfies I(∑ P_{n}) = ∑ I(P_{n}). The resulting distribution has peaks located periodically with periods coming as powers of P. Also periodicities with peaks corresponding to n = n^{+}n^{-}, n^{+}_{q} > 0, with fixed n^{-}_{q} < 0 appear. The periodic dependence of the distributions would be most naturally assignable to the gravitational interaction of the Earth with the Sun and the Moon and therefore to the periodic variation of the Earth-Sun and Earth-Moon distances. The TGD inspired proposal is that the p-adic prime P and the integer m characterizing the quantum distribution are determined by a process analogous to a state function reduction, and that their most probable values depend on the deviation ΔR of the distance R through the formulas Δp/p ≈ k_{p}ΔR/R and Δm/m ≈ k_{m}ΔR/R. The p-adic primes assignable to elementary particles are very large, unlike the primes which could characterize the empirical distributions. The hierarchy of Planck constants allows the gravitational Planck constant assignable to the space-time sheets mediating gravitational interactions to have gigantic values, and this allows p-adicity with small values of the p-adic prime P.
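The abstract's description of I is compressed; as background, the standard canonical identification used in the p-adic TGD literature maps a p-adic expansion ∑ x_k P^k to the real number ∑ x_k P^{-k}. A minimal sketch restricted to non-negative integers (the restriction, and the exact variant of I, are my simplifications):

```python
# Standard canonical identification I: sum x_k P**k  ->  sum x_k P**(-k),
# applied to the base-P digit expansion of a non-negative integer.
def canonical_identification(n: int, P: int) -> float:
    """Map n = sum x_k P**k (x_k the base-P digits of n) to sum x_k P**(-k)."""
    value, k = 0.0, 0
    while n > 0:
        n, digit = divmod(n, P)    # peel off the k-th base-P digit
        value += digit * P**(-k)
        k += 1
    return value

print(canonical_identification(5, 2))  # 101 in base 2 -> 1 + 0/2 + 1/4 = 1.25
```

Numbers that are p-adically close (sharing many low-order base-P digits) map to real numbers that are close, which is what makes I a natural bridge from a p-adic valued distribution to a real one.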