Today I asked AI Dream Studio – http://beta.dreamstudio.ai/ – about it; here is its interpretation. Pretty nice. I actually think it captures some elements of the theory.
While the physics media, popular opinion and generally accepted lecturers say things like 'GR is wrong because singularities', the physical and theoretical facts suggest very strongly the opposite.
This is obviously straight from the hip, although I have been thinking about it for a while. ΛCDM (Lambda cold dark matter) or Lambda-CDM has a lot of problems, but MOND does too. See for example Hossenfelder's latest video.
So my admittedly personal view is that matter cannot exist on its own. One hydrogen molecule will start to sleep if it’s not near other atoms. How near? The thought is that dark matter densities can tell us.
Matter cannot exist on its own, isolated. It needs a certain density of quantum waves or energy to bathe in. Otherwise the entire mechanism of electromagnetism, quantum mechanics (and maybe the nuclear forces) simply dies: the interactive particles (quarks and electrons) that form matter relax into sleeping versions of themselves, likely with virtually all of their mass intact. When in the presence of normal matter, or when the density of sleeping matter rises to maybe something like that of an extremely diffuse gas cloud, the matter wakes up and starts to take part in electromagnetic interactions. Indeed, the nuclear forces of this sleepy matter need not sleep at all, since we can't see sleeping nucleons either way; perhaps just the EM interaction drops off. This article explores some predictions and consequences of the Sleepy Matter Model.
Dark matter is sleepy matter, and dark energy is the ‘quantum energy’ – the dark energy is released when matter ‘goes to sleep’.
Dark matter is just plain matter, but it has ‘spun down’ due to being lonely. This effect happens at about the maximum density of dark matter ever found, or about the density of the most diffuse clouds of gas ever found (which are about equal).
Dark Matter Problems
There are a number of problems with the ΛCDM model (Lambda cold dark matter).
I will refer to standard cold dark matter simply as Dark Matter.
Wikipedia is a good place to start for most of these problems. One can see the Bullet cluster below, which is both a problem and a victory for Dark Matter. That’s where we sit.
| Dark Matter Problem | Problem | Sleepy Matter solution |
| --- | --- | --- |
| Satellite galaxies | Models of DM predict lots of satellite galaxies. More are being found, but they tend to be equatorial to the galaxy, which is another problem. | Sleepy matter interacts with the density rise on the galactic plane and gets stuck there as the core of a satellite galaxy. Should be able to model this. |
| Baryonic Tully-Fisher relation | The mass of a galaxy is correlated to the fourth power of the rotation velocity of its outermost stars. Why would Dark Matter, which ignores regular matter, obey this rule? | Sleepy matter wakes up when the density gets high, turning into normal matter. This normal matter interacts, limiting density, etc. There is only so much sleepy matter to fit in. Should be able to model this. |
| Renzo's rule | "For any feature in the luminosity profile there is a corresponding feature in the rotation curve and vice versa." | This is simple with sleepy matter. Stars are born when sleepy matter wakes up and condenses. |
| Bullet Cluster | Galactic velocities are too high. | The braking friction of dark matter is lost, as the sleeping matter can't get near luminous matter to put the brakes on. |
| Core-cusp | No dark matter found (via gravitational searches) in the cores of galaxies. | The sleepy matter comes in, gets woken up, interacts, and thus does not sink into the centre. |
| Why Dark Matter | There is no reason for it; it's just another set of parameters. | Sleepy matter is an experimental prediction of both matter and fields arising from Einstein's ether, and thus is predicted. |
Dark matter victories
Any new solution to the dark matter problem would ideally preserve the successes of the dark matter paradigm.
| Dark Matter Victory | Explanation | Sleepy matter comment |
| --- | --- | --- |
| Galaxy Clusters | The virial theorem says galactic clusters have dark matter holding them together. Lots of it. | Not a problem, since this intergalactic sleepy matter behaves just like dark matter. |
| Einstein rings, gravitational lensing | The pretty pictures of Einstein rings, carefully measured, show much more mass around a galaxy than the mass we can see in it. | Not a problem, as the sleepy matter is at low densities when the entire halo is taken into account. |
| Early universe 2nd peak and all that | The explanations of the CMB multipoles work well with Dark Matter. | One might think that at early-universe times all the matter was awake, which would not be good for the model; but on the other hand, BBN troubles in the early universe combined with a much different interaction scheme for matter and dark energy would change things. In any case, a 20-parameter ΛCDM cosmological model backed by 1000 PhDs and tens of thousands of papers will fit anything. |
| Bullet Cluster | The dark matter from two colliding galaxies sailed right through the collision. The gravitational field shows 1) lots of dark matter and 2) it did not interact with the collision the way the regular matter did. | Clouds of sleeping matter can pass right through each other, as long as the critical density is not reached. So one gets both the correct density profile and lots of star formation, etc. |
Sleepy matter predictions
Here are some predictions for sleepy matter. Some of the tests can be done today with ‘only’ a literature search and some graphing tools. Others require labs that likely can’t be built on earth or in the near future.
| Prediction | Rationale | Possible test |
| --- | --- | --- |
| Sleepy matter can't be detected in current experiments. | The sleeping matter is ~all woken up by the time it gets to a lab on Earth. | More negative results looking for WIMPs, axions, etc. So far 30 years of bright people have looked for dark matter, mostly by going deep into the earth. |
| Sleepy matter might be detectable in a new kind of experiment. | Perhaps we can simply watch matter fall apart. | Maybe a (deep space?) lab with a large, cold, dark room can make a rarefied gas sleep. It could be detected by lighting up the gas at some emission line as it's pumped down in pressure; maybe the matter will start to sleep as the pressure drops. Make a graph of pressure as measured by some direct method versus pressure as measured by the emission of the atoms in the gas on excitation pulses at one per hour. |
| Clouds of dark matter have a maximum density. | Maybe ordinary matter gas clouds have a minimum observed density already? | Extensive literature search for gas densities measured around our galaxy, combined with a literature search for dark matter densities. Do the distributions overlap? I am thinking they don't overlap, to within the statistics of astronomy. |
| Sleepy matter on waking up might have some emission. | Perhaps on waking up/going to sleep the spin-up produces some sort of weak photon emission, maybe in infrared or radio, or even higher frequencies. | Unexplained sky maps showing emission of photons at places where the dark matter density is high. |
| Sleepy matter going to sleep raises the dark energy level. | Planck – supernovae Hubble tension. | The effect may be subtle, but overall it would seem that more matter is becoming sleepy than the other way around. This releases energy into space. The energy was bottled up as some part of matter, then it gets released. Perhaps most of the dark energy came from matter going to sleep, in which case we would need a huge mass/energy drop of something like 90% for sleepy matter. But maybe the sleepy matter energy exchange is only a small part of the dark energy story. |
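The density-overlap test above can be sketched in a few lines. The density values below are hypothetical placeholders standing in for a real literature search, not measured data.

```python
# Sketch: do measured gas-cloud densities and inferred dark matter
# densities overlap? All numbers below are hypothetical placeholders
# standing in for a real literature search.

# densities in GeV/cm^3 (mass-energy density units common in DM papers)
gas_cloud_densities = [15.0, 40.0, 120.0, 300.0]   # hypothetical diffuse clouds
dark_matter_densities = [0.3, 1.2, 4.0, 9.0]       # hypothetical halo fits

def ranges_overlap(a, b):
    """True if the min-max ranges of the two samples overlap."""
    return max(a) >= min(b) and max(b) >= min(a)

print(ranges_overlap(gas_cloud_densities, dark_matter_densities))  # False
```

If the real distributions come out like this, with a gap between the densest dark matter and the thinnest observed gas clouds, the sleepy matter picture survives; a large overlap would argue against it.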
Sleepy Matter thoughts
Is Sleepy Matter worthwhile? I like it, but it will take some more effort to put it into the ‘this contributes’ category. I have looked for papers on the density of DM vs the density of gas clouds, but I don’t think there are any. The gas cloud people and dark matter density people run in different circles.
I am going to try and dig up the references/papers I can find on dark matter and gas cloud density measurements.
Dark matter density tops out at about 10 GeV/cm^3 in the Milky Way, according to Figure 1 in M. Weber and W. de Boer.
Is dark matter any more dense anywhere else?
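To get a feel for that 10 GeV/cm^3 figure, here is a quick unit conversion into hydrogen atoms per cm^3 and SI density; the constants are standard, the 10 GeV/cm^3 input is the rough peak value quoted above.

```python
# Quick unit check: express 10 GeV/cm^3 (a rough peak dark matter
# density) as equivalent hydrogen atoms per cm^3 and as kg/m^3.
GEV_TO_KG = 1.7827e-27        # 1 GeV/c^2 in kilograms
PROTON_MASS_GEV = 0.9383      # proton rest mass in GeV/c^2

rho_dm_gev = 10.0             # GeV/cm^3, rough Milky Way peak value
atoms_per_cm3 = rho_dm_gev / PROTON_MASS_GEV
rho_dm_kg_m3 = rho_dm_gev * GEV_TO_KG * 1e6   # 1e6 cm^3 per m^3

print(f"{atoms_per_cm3:.1f} hydrogen-atom equivalents per cm^3")  # ~10.7
print(f"{rho_dm_kg_m3:.2e} kg/m^3")                               # ~1.78e-20
```

So the peak dark matter density corresponds to roughly ten hydrogen atoms per cubic centimetre, which is indeed in the neighbourhood of very diffuse interstellar gas.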
Note: I just thought of something. Say some sleepy matter condenses out, then gets moved away or condensed into new stars, etc. There would then be gas clouds less dense than the dark matter limit.
I presented at the APS 2021 meeting.
The recent experimental proposals by Bose et al. and Marletto et al. (BMV) outline a way to test for the quantum nature of gravity by measuring the gravitationally induced differential phase accumulation over the superposed paths of two ∼10^-14kg masses.
This work predicts the outcome of the BMV experiment in Bohmian trajectory gravity – where classical gravity is assumed to couple to the particle configuration in each Bohmian path, as opposed to semi-classical gravity where gravity couples to the expectation value of the wave function, or of quantized gravity, where the gravitational field is itself in a quantum superposition.
In the case of the BMV experiment, Bohmian trajectory gravity predicts that there will be quantum entanglement. This is surprising as the gravitational field is treated classically.
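For scale, a rough estimate of the differential phase in such an experiment can be sketched as follows. The mass matches the ~10^-14 kg figure above; the branch separation and free-fall time are illustrative guesses, not the exact values from the BMV proposals.

```python
# Order-of-magnitude gravitational phase picked up between the two
# BMV test masses: phi ~ G m^2 tau / (hbar d).
# The separation d and time tau below are illustrative guesses.
G = 6.674e-11        # m^3 kg^-1 s^-2
HBAR = 1.0546e-34    # J s

m = 1e-14            # kg, each test mass (as in the proposals)
d = 200e-6           # m, assumed nearest-branch separation
tau = 2.5            # s, assumed free-fall (interaction) time

phase = G * m**2 * tau / (HBAR * d)   # radians
print(f"differential phase ~ {phase:.2f} rad")
```

With these numbers the phase comes out of order one radian, which is why masses in this range are considered a plausible target for the experiment.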
Faster than light – not with spaceships, particles, or transverse wave signals, but with pressure waves – may be possible if spacetime is similar to a slightly viscous fluid. Pressure waves in general relativity may move faster than light.
There have been a few papers written over the years modelling Einstein’s ether as an elastic solid. I have been reading these papers:
So – lots of stuff about the ether as a solid.
A few problems with this approach: you can see one paper coming up with a Young's modulus that varies with frequency (McDonald), and others struggling with how to even support transverse waves in this elastic medium. A key measure of a substance is its Poisson's ratio, an elasticity measure. The semi-consensus is that this ratio is 1 for the ether, which is not like any normal material (but OK, spacetime is not a normal material!).
One thing about materials is that they in general support two kinds of waves ‘P-waves’ (pressure waves) and ‘S-waves’ (shear waves). Choosing Poisson’s ratio as 1 leads to P-waves having a speed of 0! Which is ‘required’ as everyone knows that p-waves can’t exist in general relativity. I agree that p-waves can’t be made in GR using normal matter moving around, but see this paper http://arxiv.org/abs/astro-ph/0309448 to get an idea of how one might generate monopole wave action.
There seems to be a lot of hand waving going on in these papers about thin plates, absolute length scales (Planck length chosen), and more just to get things to work out.
Since I'm an optimist at heart, I decided to look at this from another direction. What if Einstein's ether were more like a fluid? Fluids have a Poisson's ratio of about 1/2, and only support shear waves if the fluid has viscosity. So let's let our fluid have a Poisson's ratio just shy of 0.5, say one part in 10^14 away from 0.5, and see what happens.
Here is what happens: faster-than-light effects. The fluid of spacetime is extremely incompressible, with a Young's modulus that is tiny compared to its bulk modulus.
I’ll quote a section of the Tenev-Horstemeyer paper here:
Running the calculations for µ and M, we get µ = Y/3 and M ≈ 10^14 times Y, so the pressure waves in this fluid ether would travel roughly 10^7 times (the square root of 10^14) faster than c. (There is no experiment or theory describing the viscosity of Einstein's ether at this point; the 10^14 figure is for illustration only.)
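The moduli above follow from the standard isotropic-elasticity formulas; here is a sketch of the arithmetic. Y and the 10^-14 offset are illustrative, and the round 10^14 / 10^7 figures quoted above come out within a factor of a few.

```python
# Sketch: isotropic elastic moduli as Poisson's ratio nu approaches 1/2.
# Y is arbitrary; only ratios matter. delta is an illustrative choice.
import math

Y = 1.0                  # Young's modulus (arbitrary units)
delta = 1e-14            # how far nu sits below 1/2
nu = 0.5 - delta

mu = Y / (2 * (1 + nu))                          # shear modulus -> Y/3
M = Y * (1 - nu) / ((1 + nu) * (1 - 2 * nu))     # P-wave modulus

# wave speeds scale as sqrt(modulus / density), so the speed ratio is:
ratio = math.sqrt(M / mu)
print(f"mu/Y = {mu / Y:.4f}")        # ~0.3333
print(f"M/Y  = {M / Y:.2e}")         # ~1.7e13
print(f"v_p / v_s = {ratio:.2e}")    # ~7.1e6
```

So the shear (transverse) waves travel at c while the pressure waves, if they exist, would outrun them by roughly seven million to one for this choice of delta.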
This huge pressure wave speed would not be seen in experiments, as the paragraph points out – all known waves that propagate in real space are transverse. I think the paper makes the mistake of assuming that because all we have measured are transverse waves, those are the only kind that exist! Pressure waves in general relativity would seem hard to generate, since one would have to pulsate spacetime.
So how would we generate these monopole waves? If we simply shoot matter on and off a planet, we will generate ‘dragged along’ monopole waves, which would travel at light speed (or less) with the matter.
One way to make superluminal p-waves is of course with the physicist's favourite friend, the magic wand. Magic wands have been used in theoretical physics to create extra dimensions, multi-universes, etc. Here I only invoke it to make matter disappear, in a periodic pattern. For a concrete example, assume fundamental particles vary in mass (imagine some wormhole mechanism) at their Compton frequency. Then we would have these pressure waves at fantastic velocity around them, exchanging information with their surroundings, in a de Broglie or Madelung way. This would help quantum mechanics emerge from spacetime, something I have been searching for over several decades.
I don't favour this idea simply because I wish there were a way to communicate at velocities above c, or because it helps with a realistic model for quantum mechanics; I also think it's a simpler way to look at Einstein's ether than the 'closely packed' layers of manifolds that the solid models quoted above mostly assume.
It seems that this bulk modulus pressure wave velocity being orders of magnitude faster than c might mean that there is a preferred frame for p-wave speed in the Universe. Lorentz transformations and the constancy of the speed of light measurements would presumably stay the same as they are now, as this fluid would simply be a way to generate the Einstein field equations.
Could a bulk modulus and Poisson’s ratio allowing for super-luminal p-waves replace inflation? One of the big reasons for inflation is that the universe is too smooth – given the paltry speed of light, places far from each other should have different temperatures, etc. https://www.newscientist.com/term/cosmic-inflation/
There are many people who think inflation is a silly crutch.
Here is a new story in Scientific American about 'strange results' from NANOGrav. Could these be signs of longitudinal gravitational waves? The arXiv papers referenced point out that the observed signal has no quadrupole signature, which is part of the 'weird' results. https://www.scientificamerican.com/article/galaxy-size-gravitational-wave-detector-hints-at-exotic-physics/
Does Pizzella’s experiment violate causality?
The idea about electromagnetic interactions being composed of both instantaneous (bound) and retarded (radiation) parts is not new. It was repeatedly expressed theoretically [3, 4, 5], and electromagnetic superluminal effects were seen in experiments as well [6, 7, 8].
Every old-style, Newtonian theory in modern physics – which is all of them except General Relativity – does not fit well with GR itself. This is curious, as for instance the Dirac equation, QM, and QFT all work well with each other (hence the Standard Model). In an attempt to unify everything else with GR, the well-worn (almost proven impossible by now, one would think) trail is to quantize GR on some perturbed Minkowski space.
It doesn’t work. Or rather has not worked.
Since it's virtually impossible to prove that something can't be done in physics (see von Neumann's 'no hidden variables proof' as an example), we are left with hundreds of PhDs per year being granted trying to add another brick to a wall that is sinking in mud, hoping that the mud is only so deep, so that another few thousand postdocs' life efforts piled up will hit rock bottom.
It won’t. It’s pure folly.
An alternative is what I present on this site, namely that one can and indeed must build on General Relativity – that in a very real sense all future successful theories will be phenomena inside the Riemannian manifold controlled by the Einstein Equations that we live in.
Examples provided on this site show how one can make electric fields, quantum waves and particles from nothing more than GR. Of course, it’s a minority viewpoint, one I’m willing to stand on.
In this essay I argue for the case of simply trying, in the sense of a toy model, to build parts of the universe out of nothing more than 4D, standard Einstein General Relativity. It's already the norm for a postdoc to spend a decade looking at some 2D toy model of a field that is known not to be able to work, just because it's easier to do some calculations.
But apparently doing the same thing with a model (4D GR) that we know works extremely well is, well wrong, boring and silly.
I don’t think so.
Physics needs new trial balloons. To the fundamental physics establishment – you can’t actually pop a balloon unless you at least get it in front of you.
22nd International Conference on General Relativity and Gravitation
What the paper and poster argue is that in the BMV experiment, observing entanglement is not enough to show that gravity is quantum. I do this by showing that a classical gravitational field coupled to the Bohmian trajectories of the individual particles will show entanglement.
The conference looks like it's going to be interesting to attend.
The image at the top shows 4 runs of the BMV experiment, with all 4 Bohmian particle trajectory combinations shown. There is entanglement generated 1/4 of the time, when the experiment happens to look like the 2nd diagram from the left.
The poster is 90 x 200 cm, available in real 3D if you visit Valencia from July 7–12, 2019 :-)
The BMV experiment sets out to show that gravity is quantized. If gravity is quantized, we expect to be able to form a gravitational field into a superposition, so that fundamentally the gravitational field is not certain at one spacetime point. Trying to come up with a theory of gravity that can be in a quantum superposition, while still working for all present tests of Einstein’s General Relativity has proved impossible so far, despite thousands of very smart people working over 50 years on the problem.
Perhaps gravity cannot be quantized. With Bohmian trajectory gravity, gravity is not quantized and has a well-defined connection to the sub-atomic particles.
If gravity is not quantized, all sorts of assumptions about quantum mechanics suddenly fail, as an unquantized gravity allows one to cheat behind the back of quantum mechanics. This is a large part of the reason why many people think gravity must be quantized. I’m not in the gravity must be quantized group, mostly because I think it just won’t work.
Spacetime is non-local, as revealed by J.S. Bell and experimental results.
Consider the following facts.
- The experimental record shows that the Lorentz transformations and special and general relativity all work remarkably well, from galaxies and indeed the structure of the Universe on down to scales probed by CERN.
- Locality is demonstrated in virtually all experiments conducted to date. This holds across fields such as fluid dynamics, radiation fields, etc. We have local causality.
- Quantum experiments such as those done by Aspect and later show that not everything is local – we have non-local effects. The wavefunction collapse is instant, etc. This worried Einstein.
Given the above facts, the simplest spacetime I can come up with looks roughly like this:
So this spacetime, which is not new, ( Wheeler had similar ideas), seems to cover our knowledge about the logical structure of quantum mechanics and general relativity. Until someone comes up with something better, this is what I use.
Causality vs locality vs non-signalling
- Spacetime is locally causal. Einstein's equations show us how things need to touch (with light or gravity waves, etc.) in order to interact.
- Causality is therefore local in nature only.
- Non-signalling holds today, but there seems to be no reason for it. We have a connection; we just need to figure out how to use it.
- Spacetime is multiply connected, which means it is not globally localized. Events can interact outside of their past light cones.
This is how our universe operates: we feel everything locally causal. But experiment shows some non-local (in the global sense) connections.
I think that the biggest news in a while in quantum mechanics is the newly forming ability of experimenters to do quantum experiments with gravity. A fine example of an experiment already done is Phase Shift in an Atom Interferometer due to Spacetime Curvature across its Wave Function by Asenbaum et al. They conclude:
Therefore, the phase shift of this interferometer is not determined by the local acceleration along a single populated trajectory, demonstrating that the atomic wavefunction is a nonlocal probe of the spacetime manifold.
Thus they have experimentally shown that wave functions feel gravity pretty much where they ‘are’ in real space ( try not to think of configuration space at this point! ). No one really doubted this would happen. Still, it leads one to wonder what about the other side – the backreaction – to this. Do the atoms in the Asenbaum experiment source gravity in the same way they detect it? It would seem obvious that they should, but no one has done an experiment to verify this (see later in this article).
A proposal in the opposite spirit to the above results is given by Kafri, Taylor, and Milburn (KTM) in A classical channel model for gravitational decoherence. KTM posits a way for the gravity to be sourced as follows:
That is, the gravitational centre of mass coordinate, x_i, of each particle is continuously measured and a classical stochastic measurement record, J_k(t), carrying this information acts reciprocally as a classical control force on the other mass.
In other words, in the KTM model the source and detection channels for a particle are both as in semi-classical gravity: the expectation value of the particle's position serves as the mass location for both sourcing and detecting gravity.
You can sense that the Asenbaum experiment shows KTM does not work – the experiment shows that the atom, which is in a dual-humped wave function with a separation of centimetres, cannot be seeing only the average field; the wave function senses the curvature. The paper by Altamirano, Corona-Ugalde, Mann, and Zych, Gravity is not a Pairwise Local Classical Channel, confirms these feelings about KTM-like theories. They don't work.
Here we show that single-atom interference experiments achieving large spatial superpositions can rule out a framework where the Newtonian gravitational inter-action is fundamentally classical in the information-theoretic sense: it cannot convey entanglement. Specifically, in this framework gravity acts pairwise between massive particles as classical channels, which effectively induce approximately Newtonian forces between the masses.
So gravity is not truly semi-classical. No surprise to me, or to the quantum gravity workers (LQG, String Theory, etc). What many/most quantum gravity people like to think, however, is that KTM or similar (Diosi – Penrose), Rosenfeld like semi-classical gravity basically exhaust the spectrum of classical gravity theories.
The BMV Experimental Proposals
These proposed experiments are in some ways similar to the Asenbaum experiment described above, but instead of atoms, small particles like micro-diamonds are prepared in position-dependent superpositions, and instead of a huge mass of lead, two diamonds are dropped near each other, so that each can feel the gravitational effect of the other diamond, which is also in a position superposition. The promise of these experiments is tremendous – if successful they might show that gravity is quantized. Christodoulou and Rovelli state:
...detecting the [BMV] effect counts as evidence that the gravitational field can be in a superposition of two macroscopically distinct classical fields and since the gravitational field is the geometry of spacetime (measured by rods and clocks), the BMV effect counts as evidence that quantum superposition of different spacetime geometries is possible, can be achieved.
A problem I see in these BMV papers is that they all use the predictions of semi-classical theories (not KTM but semiclassical as a source only) as a classical test case, without much thought to the predictions of other ‘classical’ theories of gravity. The possibilities are many and the experimental consequences are not simple.
Bohmian Trajectories and General Relativity
There have been some papers over the years touting the usefulness of the Bohmian trajectory viewpoint as a better approximation to classical field – quantum system interaction. Usually, the case for using Bohmian trajectories is one of computational or conceptual efficiency, but as Ward Struyve puts it in Semi-classical approximations based on Bohmian mechanics:
Finally, although we regard the Bohmian semi-classical approximation for quantum gravity as an approximation to some deeper quantum theory for gravity, one could also entertain the possibility that it is a fundamental theory on its own. At least, there is presumably as yet no experimental evidence against it.
The BMV experiment with Bohmian trajectories
Interpreting the BMV experiment under the assumption that Bohmian trajectories are 'real' results in the following conclusions:
- Each run of the experiment has particles in one of 4 configurations – the four trajectory combinations.
- There is no superposition of gravitational fields – each run has a different gravitational field configuration.
- The resulting experimental statistics show entanglement – even though gravity is classical throughout.
The last point is the most surprising. We look at why an experimenter will see entanglement with Bohmian trajectories.
At the heart of the argument is the fact that while these Bohmian trajectories look very classical, they are actually quantum – more precisely, subquantum aspects of (Bohm/de Broglie) quantum theory. So we have a situation where classical gravity coupled to Bohmian trajectories gives behaviour very similar to quantum gravity for the BMV experiment (i.e. showing entanglement), with a 'superposition' of gravitational fields existing only in the boring classical sense of the different possible histories of the experiment. Since the experimenter has only histories to look at, showing that the gravitational field was in a genuine superposition requires more than merely observing some level of entanglement in the BMV experiment.
This is a paper version of the poster I presented at EmQM17 in London.
Some physicists surmise that gravity lies outside of quantum mechanics. Thus theories like the standard semiclassical theory of quantum-to-gravity coupling (that of Rosenfeld and Møller) are possible real models of interaction, rather than a mere approximation of a theory of quantum gravity. Unfortunately, semiclassical gravity creates inconsistencies such as superluminal communication. Alternatives by authors such as Diósi, Martin, Penrose, and Wang often use the term 'stochastic' to set themselves apart from the standard semiclassical theory. These theories couple to fluctuations caused by, for instance, continuous spontaneous localization, hence the term 'stochastic'. This paper looks at stochastic gravity in the framework of a class of emergent or ontological quantum theories, such as those by Bohm, Cetto, and de Broglie. It is found that much or all of the trouble in connecting gravity with a microscopic system falls away, as Einstein's general relativity is free to react directly with the microscopic beables. The resulting continuous gravitational wave radiation by atomic and nuclear systems does not, in contrast to Einstein's speculation, cause catastrophic problems. The small amount of energy exchanged by gravitational waves may have measurable experimental consequences. A very recent experiment by Vinante et al. performed on a small cantilever at mK temperatures shows a surprising non-thermal noise component, the magnitude of which is consistent with the stochastic gravity coupling explored here.
I have made a simple calculator to calculate the flux in watts per square metre of gravitational waves given a frequency and a strain. The idea is to show how easy it would be to hide cosmologically important amounts of energy in high frequency gravitational waves.
If we take a strain 15 orders of magnitude lower than LIGO's sensitivity, and a frequency at the Compton frequency, we get levels of energy flux and density that are very surprising. No one talks about this, though, since HFGWs are 'known' not to exist. I posit that we should not assume anything about gravitational waves at this point. It's an obvious place for experimentalists to work. Are there any experiments that can detect gravitational radiation at millions of watts per square metre and nuclear frequencies? This is something that experiments should decide.
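The core of the calculator is the standard flux formula for a monochromatic wave, F = (c^3/16πG)⟨ḣ²⟩. Here is a sketch; the Compton-frequency and tiny-strain inputs in the second example are illustrative choices, not measured values.

```python
# Sketch of the flux calculation: energy flux of a monochromatic
# gravitational wave of strain amplitude h at frequency f.
# F = (c^3 / 16 pi G) * <hdot^2>, with <hdot^2> = (2 pi f h)^2 / 2.
import math

C = 2.998e8      # m/s
G = 6.674e-11    # m^3 kg^-1 s^-2

def gw_flux(f_hz, strain):
    """Energy flux in W/m^2 for a monochromatic GW of amplitude `strain`."""
    prefactor = C**3 / (16 * math.pi * G)
    hdot_sq_avg = (2 * math.pi * f_hz * strain) ** 2 / 2
    return prefactor * hdot_sq_avg

# Sanity check: LIGO-band numbers give milliwatt-scale flux.
print(f"{gw_flux(1e3, 1e-22):.2e} W/m^2")   # ~1.6e-3

# A nucleon Compton frequency (~2.3e23 Hz) with a strain 15 orders
# of magnitude below LIGO sensitivity -- illustrative numbers only.
print(f"{gw_flux(2.3e23, 1e-36):.2e} W/m^2")
```

The second figure comes out in the billions of watts per square metre, which is the point of the calculator: an utterly undetectable strain at a high enough frequency carries an enormous energy flux.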
Just think about it – there is no way we can tell – there may be billions of watts of gravitational wave energy passing through your body right now. They may be there, waiting for us to find them.
The comments on dark energy and dark matter in the calculator are to be interpreted as follows:
How can ‘dark matter’ be gravitational wave energy?
Dark Matter is measured as an excess of mass/energy, as its presence is determined by gravitational effects on regular matter. In fact, experimentally, dark matter is too tied to matter – one can predict the amount of dark matter in a galaxy or galaxy cluster, etc. by simply writing down the total mass distribution of baryons! What we know of dark matter is that it's weakly coupled to matter and that it's much denser than the level of dark energy that is spread throughout the universe.
A possible scenario:
Dark Matter is gravitational waves associated with matter. Call it DarkGW. It looks like the presence of matter controls the amount of dark matter present, and DarkGW interacts very weakly with matter (perhaps not in a linear fashion?), perhaps even violating the rules of quantum mechanics – after all, there is no quantum theory of gravity yet.
In this scenario, dark energy is the 'leaking' of this DarkGW into intergalactic space. Thus there is a source for DE and it does not have to have a transcendental origin. It's 'just' regular radiation – radiation that does not redshift as the Universe ages, as the redshifted bits are replaced on a continual basis by the DarkGW.
This tells us why the amount of DarkGW is related to the amount of Dark Energy (why are they within a factor of two of each other?). As the DarkGW has leaked out, the Universe has expanded. Once the galaxies start to get cold and far apart (say in 200 billion years), the dark energy would start to redshift, and the Universe would approach a 'balance point' universe instead of the runaway expansion of modern ΛCDM.
Something is definitely wrong with dark energy:
Riess says that it could be caused by hypothetical “sterile neutrinos”, interactions with dark matter, or a strengthening over time of dark energy (which accelerates the universe’s expansion).
Sterile neutrinos are a last-ditch effort to keep dark energy as a parameter (Lambda) in Einstein's equations. It's clear to me that the best answer is that dark energy is getting stronger over time. Dark energy is on the right side of the Einstein equations, not the left. Lambda was a mistake. It's zero.
New Parallaxes of Galactic Cepheids from Spatially Scanning the Hubble Space Telescope: Implications for the Hubble Constant
The problem with dark matter in galaxies is that it’s just too organized. Dark matter seems to correlate too well with matter distributions.
What about a field associated with every nucleon that saturates at some level, called S here? (It's saturated near dense matter, like here on Earth or in the Milky Way plane, while when protons/neutrons get below a certain density, the field eventually drops as the density of matter drops.)
This solves the cuspy galaxy core problem, it also makes the BTFR (Baryonic Tully Fisher Relation) work, it also seems it would work on the bullet cluster, galactic clusters, etc.
BTFR – explained
The Baryonic Tully Fisher Relation is one of the most accurate relations in cosmology. It’s a huge thorn in the side of LCDM cosmology, since dark matter and regular matter are not supposed to be in lockstep with one another. See Stacy McGaugh
With the S field presented here, there is a saturable field associated with every nucleon. When nucleons are about a mm apart or less on average, the S field is at some standard strength; then as the density lowers, the S field maintains its density (or slowly loses it) until, at some limiting low matter density, the saturable field S starts to drop. By the time this happens there is ~100x more energy in the saturable field S than in the baryons.
This effect can explain the BTFR as dark matter is present in quantities as a function of baryonic mass in some fashion.
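To make that saturation behaviour concrete, here is a minimal numerical sketch of an S-type field. The saturation density, the dark-energy floor, the turn-on density and the square-root fall-off are all illustrative assumptions, not measured values.

```python
# Toy sketch of a saturable S field (all thresholds are illustrative
# assumptions, not measurements).
RHO_SAT = 5e5    # GeV/m^3: assumed saturation density (~0.5 GeV/cm^3)
RHO_DE = 0.5     # GeV/m^3: assumed dark-energy floor
RHO_ON = 5e3     # GeV/m^3: assumed baryon density below which S starts to fall

def s_field_density(rho_baryon):
    """Energy density of the hypothetical S field vs. local baryon density."""
    if rho_baryon >= RHO_ON:
        return RHO_SAT                    # saturated near ordinary matter
    if rho_baryon <= 0:
        return RHO_DE                     # empty space: back to the DE floor
    # in between, let the field fall off slowly with density
    return max(RHO_DE, RHO_SAT * (rho_baryon / RHO_ON) ** 0.5)

print(s_field_density(1e9), s_field_density(1e3), s_field_density(0.0))
```

The particular fall-off is arbitrary; the point is only the shape – flat (saturated) near matter, then a slow decline toward the dark-energy floor.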
The Bullet Cluster poses a problem for MOND-like theories – there seems to be excess dark matter causing lensing, so dark matter really exists, it seems. The S field solves this nicely. No one thinks that the lensing areas of the Bullet Cluster are completely free of matter; it’s just that the dark matter is not located where the bulk of the matter is.
Clusters of galaxies would not hold together without about 5 times the mass of the individual galaxies spread between them. See Galaxy clusters prove dark matter’s existence by Ethan Siegel as an intro.
The mechanism is clear – the intergalactic dust and gas provide the framework to energize a large S field which keeps the cluster gravitationally bound.
Properties of the S field
The S field is associated with matter and has a limiting high density. At low densities (i.e. galactic-halo densities) the field has a mass of up to about 10 (or 100?) times the mass of a nucleon, per nucleon.
Where does the energy come from? It’s dark energy – just clumped. So S is dark energy. There is always a flow of it running around, and it’s pulled from dark energy as needed to saturate around matter.
What is the form of the field?
What is the minimum energy density, in say GeV/m³? Dark energy has an energy density of about 0.5 GeV/m³.
The max is determined by the maximum density measured for dark matter, which is about 0.5 GeV/cm³ (note the centimeter scale used by astronomers when dealing with matter clouds) – see my earlier related post Is Dark Matter merely Inactive Matter? So the maximum is about 100³, or a million, times the density of dark energy.
Thus the S field saturates at a density of ~5×10⁵ GeV/m³ and is present all around us.
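The unit conversion behind that number is a one-liner; this just checks the 100³ factor between the cm³ and m³ figures quoted above.

```python
# Check: 0.5 GeV/cm^3 expressed in GeV/m^3, and its ratio to the
# ~0.5 GeV/m^3 dark-energy density quoted above.
sat_gev_per_cm3 = 0.5
cm3_per_m3 = 100 ** 3                  # = 1e6 cm^3 in a cubic metre
sat_gev_per_m3 = sat_gev_per_cm3 * cm3_per_m3
print(sat_gev_per_m3)                  # 5e5 GeV/m^3
print(sat_gev_per_m3 / 0.5)            # 1e6: a million times dark energy
```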
The S field density is then some function of the matter density. It turns out that it’s a cumulative effect from all matter enclosed inside ‘R’.
Stacy S. McGaugh and Federico Lelli
Consequently, the dark matter contribution is fully specified by that of the baryons. The observed scatter is small and largely dominated by observational uncertainties. This radial acceleration relation is tantamount to a natural law for rotating galaxies.
Start at Equation 22, the radial acceleration relation:

g_obs = g_bar / (1 − exp(−√(g_bar / g†))), with g† ≈ 1.2×10⁻¹⁰ m/s².

Then use g = G·M(<R)/R² for both the observed and baryonic accelerations and simplify to

M_obs(<R) = M_bar(<R) / (1 − exp(−√(G·M_bar(<R) / (g†·R²))))

Thus the quantity of DM inside a certain radius is wholly dependent on the amount of baryonic matter inside that radius. The S field is a cumulative effect of the density of regular baryons.

When I use this on the enclosed average density, ρ̄(R) = M(<R)/(4πR³/3), I get

ρ̄_obs(R) = ρ̄_bar(R) / (1 − exp(−√(4πG·R·ρ̄_bar(R) / (3g†))))

This equation states that the density of dark matter depends only on the enclosed average density of baryonic matter.
Calculating the acceleration at R, given a 35 kpc distance R and a density of baryons of 1 GeV per cm³, gives an acceleration of order 1.2×10⁻¹⁰ m/s², the Milgrom acceleration. So that’s the cut where dark matter starts to become apparent, as the denominator starts to fall well below 1.
Of course, this density equation has limits on both ends. The dark matter S field has a maximum density of about a GeV per cm³, and once the density goes down to about dark-energy levels one no longer calls it dark matter. Don’t forget that my density version of the equation requires the average density inside R for the galaxy/gas cloud/cluster.
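A quick back-of-the-envelope check of the 35 kpc / 1 GeV-per-cm³ figures above, using only Newtonian gravity from the enclosed average density (constants rounded): it lands within a factor of a few of a₀, which is about as close as a one-line estimate can be expected to get.

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
KPC = 3.0857e19     # m per kiloparsec
GEV_KG = 1.783e-27  # kg per GeV/c^2

R = 35 * KPC                 # radius used in the text
rho = 1.0 * GEV_KG / 1e-6    # 1 GeV/cm^3 converted to kg/m^3

# Acceleration at R from the enclosed average density:
# g = G * M(<R) / R^2 = (4*pi/3) * G * rho * R
g_bar = (4 * math.pi / 3) * G * rho * R
print(f"{g_bar:.2e} m/s^2")  # a few times 1e-10, i.e. of order a0
```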
In my mind this S field is HFGW, but the precise nature of the field matters less here than its mass/energy content.
It seems to me that battling it out as MOND vs LCDM is perhaps not the best way to approach the problem as there are obviously more models around that might work. One just has to throw out some part of standard model physics!
Looking at the above more, I’m not convinced that sticking exactly to the MOND formula for mass and density is the way to go with this S field idea. I’m hoping there is a function where the density of DM is given only by the local density of matter, but perhaps that will not work. Perhaps there is something happening where DM depends on the total SUM of all the matter interior to the radius R.
The Tully-Fisher relation (aka Baryonic TFR) is remarkable. As the diagram below shows, the relationship between Vf and the baryonic mass of galaxies is just too finely tuned to be caused by dark matter. Something is up. Vf is the stellar orbit velocity in the galactic halo. For more details see the paper by Lelli, McGaugh and Schombert .
MOND (MOdified Newtonian Dynamics) is one explanation for the Tully-Fisher relation. It posits that the force towards the centre of a galaxy at large distances is not simply that of Newton, but is modified with an acceleration scale a₀ = 1.2×10⁻¹⁰ m/s², the same in all galaxies, in addition to the usual force predicted by standard Newtonian or General Relativity’s gravity. LCDM dark matter is a clumsy explanation for the BTFR, as it needs fine-tuning for every galaxy (or every galaxy type) in order to make that straight line be so, well, straight.
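As a concrete illustration of how MOND flattens rotation curves and produces the BTFR, here is a sketch using the common ‘simple’ interpolating function; the 10¹¹-solar-mass point-mass galaxy is an arbitrary assumption.

```python
import math

G = 6.674e-11
A0 = 1.2e-10              # Milgrom acceleration, m/s^2
KPC = 3.0857e19           # m per kiloparsec
M = 1e11 * 1.989e30       # toy galaxy: 1e11 solar masses of baryons

def v_newton(r):
    """Newtonian circular speed around a point mass M."""
    return math.sqrt(G * M / r)

def v_mond(r):
    """Circular speed with the 'simple' MOND interpolating function,
    for which g = gN/2 + sqrt(gN^2/4 + gN * a0)."""
    gN = G * M / r ** 2
    g = 0.5 * gN + math.sqrt(0.25 * gN ** 2 + gN * A0)
    return math.sqrt(g * r)

for r_kpc in (5, 20, 80):
    r = r_kpc * KPC
    print(r_kpc, round(v_newton(r) / 1e3), round(v_mond(r) / 1e3))  # km/s

# BTFR: the flat rotation speed obeys v_f^4 = G * M * a0
v_f = (G * M * A0) ** 0.25
print(round(v_f / 1e3))   # ~200 km/s for this toy mass
```

Note that v_newton keeps falling while v_mond levels off at v_f, and v_f depends only on the baryonic mass – exactly the BTFR slope.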
There is another way to generate an inward acceleration.
We need a force on each nucleon that changes with how much matter is inside the radius where the particle is. It somehow ‘knows’ the gravitational potential at R, and has a force that depends on that!
What I have so far on this is something to do with dark energy being more concentrated in galaxy cores, so the particle feels this dark energy slope and responds to it.
NOTE: This needs to include a dependence on the enclosed M (i.e. enclosed dark energy inside R). I call this emission-based acceleration ‘Anomalous Radial Nucleonic Radiation’ (ARNR). MOND tells us that particles in the galactic halo can ‘weigh’ the galaxy – they have that information. So there must/might be something like dark energy concentrated in the galaxy that the particles react to, pushing radiation outward and recoiling inward. Note that in the galactic core the divergence of the DE is 0, so there is no extra force on a particle in the middle of the galaxy.
It might seem strange to have this concerted outward radiation pattern though! Here are some possible explanations for an outward constant radiation by the halo constituents.
- I’m a fan of super high (nuclear and above) frequency gravitational wave (HFGW) emission/absorption in atoms and nuclei. So we might have some sort of stimulated emission from nucleons based on the outward flow of HFGW out of a galaxy.
- Dark energy has a value of about 1 GeV/m³. If this energy is concentrated by the galactic core, then maybe the nucleon has a force toward the centre of the galaxy in response to the divergence of the DE field (i.e. lower radiation resistance in the outward direction). This dark energy may be some new field, or HFGW.
- Some other mechanism. We don’t have to know the mechanism to predict some consequences.
People don’t generally like the MOND theories because general relativity (GR) in its usual form is so well tested and accurate. LCDM is disliked by many because of the fine-tuning required in order to get everything to match observations. ‘Anomalous radial nucleonic radiation’ (ARNR) allows GR to exist as is.
Consequences of ARNR
If nuclei really do radiate continuously (perhaps in violation of quantum mechanics), then there will be experimental consequences. These consequences may be largely hidden from earth-based experiments, as the emission would be isotropic and take place in some field (such as gravitational waves) that is hard to detect with current instruments.
There may be other places where cosmological or galactic cluster observations might note this energy output.
In other posts, I have wondered if dark matter is ‘sleeping regular matter‘ and I still think that it may be a viable option, but it seems like any explanation in terms of dark matter may need to be fine-tuned to match observations.
I’m headed to London for the EmQM 2017 conference, Oct 26–28 2017, which I am looking forward to.
I attended in 2015. The event has the byline – the 4th International Symposium about Quantum Mechanics based on a »Deeper Level Theory«. Its mission this year is
Towards Ontology of Quantum Mechanics and the Conscious Agent David Bohm Centennial Symposium
When I first really understood what quantum mechanics was – in second-year undergrad at the University of Toronto – I immediately read all sorts of books and papers by and about Bohm and his theories. He made quite a change in my outlook on physics in general. I became convinced in 1985 that quantum mechanics was incomplete and that something along the lines of Bohm’s theory was the way to go. That makes the conference more special for me, and I’m sure many other attendees share the same view.
I am presenting a poster which I’m still polishing right now (the abstract at least was well received!). It’s based on a paper called ‘Fully Classical Quantum Gravity (see link)‘. I have renamed the poster to Stochastic Gravity and Ontological Quantum Mechanics and rewritten most of it.
The poster describes the results of a paper by Vinante et al. :
Improved noninterferometric test of collapse models using ultracold cantilevers . If the results hold up, they are quite breathtaking as they state:
The finite intercept, clearly visible in the inset of Fig. 3 implies that the data are not compatible with a pure thermal noise behavior, and a nonthermal excess noise is present.
The paper details the careful procedures followed to chase down possible experimental problems. The analysis is carefully thought out. The paper claims the results show a possible signature of Adler’s Continuous Spontaneous Localization (CSL), but to me it seems that, if the results hold up, it’s simply a great puzzle to solve! My take (in line with the ‘Fully Classical Quantum Gravity‘ paper) is that this noise is caused by the continuous emission and/or absorption of gravitational waves at nuclear frequencies.
Gravitational waves are notoriously hard to see, and these high-frequency ones (HFGWs) even more so. Indeed, since gravitational wave power goes with the square of frequency, truly tiny values of the gravitational wave strain ‘h’ (h = 0 in flat space, and h < 1) can make for large energy fluxes. The LIGO observations saw gravitational waves with h ~ 10⁻²¹. The formula for the time-averaged flux of a gravitational wave of frequency f and strain amplitude h is:

F = (π c³ / 4G) f² h²
So LIGO can see gravitational waves with a flux of a few mW/m², while at nuclear frequencies the f² scaling means the same formula yields an incredible flux – another way to look at that flux is that it represents 400+ kg(!) of mass-equivalent per square metre per second. I propose that results like this suggest that matter itself can be made of nothing but elaborate patterns of gravitational structures. Clearly, high-frequency gravitational structures can hold an incredible amount of energy.
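The scaling can be reproduced from the standard time-averaged flux formula. The LIGO-like inputs (h ≈ 10⁻²¹ at ~100 Hz) are real ballpark values; the high-frequency strain below is purely illustrative, and is not the input behind the 400 kg figure above.

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s

def gw_flux(f_hz, h):
    """Time-averaged energy flux (W/m^2) of a gravitational wave with
    frequency f_hz and strain amplitude h: F = (pi c^3 / 4G) f^2 h^2."""
    return math.pi * c ** 3 / (4 * G) * f_hz ** 2 * h ** 2

# LIGO-like event: h ~ 1e-21 at ~100 Hz gives a flux of a few mW/m^2
print(gw_flux(100, 1e-21))

# Because F scales as f^2, the same strain at an (illustrative) 1e20 Hz
# carries an enormously larger flux; divide by c^2 for the mass-equivalent
# flux in kg per m^2 per second.
print(gw_flux(1e20, 1e-21) / c ** 2)
```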
Another way of thinking about this result is that anytime a better telescope is built, or one is built that looks at a new wavelength, field or pattern of signals, those signals are not only discovered, they produce deep new insights about our universe. The fact that HFGWs are hard to detect does not mean that they are not there! Indeed, instead of calculating what the flux of HFGWs might be around us, we should instead admit our ignorance and calculate what we don’t know. Huge amounts of gravitational wave energy could be whipping by everything right now and we would not know a thing about it.
It’s going to be a quick few days in London!
So Leonard Susskind has published a paper on the arXiv: ‘Dear Qubitzers, GR=QM’.
Which is what I have been saying all along. Of course, Susskind’s paper is not actually about QM emerging from GR, which is what I believe and have good reason to follow up on.
Instead Susskind says:
Dear Qubitzers, GR=QM? Well why not? Some of us already accept ER=EPR , so why not follow it to its logical conclusion? It is said that general relativity and quantum mechanics are separate subjects that don’t fit together comfortably. There is a tension, even a contradiction between them—or so one often hears. I take exception to this view. I think that exactly the opposite is true. It may be too strong to say that gravity and quantum mechanics are exactly the same thing, but those of us who are paying attention, may already sense that the two are inseparable, and that neither makes sense without the other.
The ‘paper’ (perhaps ‘letter’ is a better name) has made the rounds. Peter Woit at Not Even Wrong writes:
Instead of that happening, it seems that the field is moving ever forward in a post-modern direction I can’t follow. Tonight the arXiv has something new from Susskind about this, where he argues that one should go beyond “ER=EPR”, to “GR=QM”. While the 2013 paper had very few equations, this one has none at all, and is actually written in the form not of a scientific paper, but of a letter to fellow “Qubitzers”. On some sort of spectrum of precision of statements, with Bourbaki near one end, this paper is way at the other end.
While Woit’s nemesis Lubos Motl writes:
Susskind also says lots of his usual wrong statements resulting from a deep misunderstanding of quantum mechanics – e.g. that "quantum mechanics is the same as a classical simulation of it". A classical system, a simulation or otherwise, can never be equivalent to a quantum mechanical theory. The former really doesn't obey the uncertainty principle, allows objective facts; the latter requires an observer and is a framework to calculate probabilities of statements that are only meaningful relatively to a chosen observer's observations.
Sabine Hossenfelder put it visually on Twitter:
My take is about the same as these popular bloggers. Don’t really think much of it.
Except the title. QM can, I believe, emerge from Einstein’s General Relativity, in much the same way that Bush and Couder’s bouncing drops can display quantum behaviour.
It’s ridiculous that 11 dimensions and sparticles have received hundreds of times more study than fundamental emergent phenomena. Emergence is the way forward. You don’t need a new force/particle/dimension/brane to make fundamentally new physics from what we already have in electromagnetism and general relativity.
See the search links on the side of this blog for some recent papers in these areas.
As someone pointed out on reddit, it looks like an inelastic collision.
Singularities, de Broglie and emergent quantum mechanics comes to mind for me.
The interaction causes a wave to propagate. After a time equal to the period of a wave on the ring, it separates into two.
The Atomic World Spooky? It Ain’t Necessarily So!: Emergent Quantum Mechanics, How the Classical Laws of Nature Can Conspire to Cause Quantum-Like Behaviour
The hardcover is out – for example here: Amazon.com or at Springer –
I had hoped it was coming out in paperback soon (Amazon.ca), but it’s not, so I just bought the hardcover. It’s OK if a paperback comes out later, but I can’t wait! In other words, I was cheap enough to hold out for a paperback, so I actually have not read the book yet, but it looks like it’s going to be a real addition to the field. It’s aimed at people with at least a science background.
The book takes the discovery (by for example Couder/Bush) that quantum-like behaviour is not solely reserved to atomic particles one step further. If electrons are modelled as vibrating droplets instead of the usually assumed point objects, and if the classical laws of nature are applied, then exactly the same behaviour as in quantum theory is found, quantitatively correct! The world of atoms is strange and quantum mechanics, the theory of this world, is almost magic. Or is it? Tiny droplets of oil bouncing round on a fluid surface can also mimic the world of quantum mechanics. For the layman – for whom the main part of this book is written – this is good news. If the everyday laws of nature can conspire to show up quantum-like phenomena, there is hope to form mental pictures how the atomic world works.
To begin with a warning: the contents of this book may be controversial. The readers the author had in mind when writing this book are interested laymen, typically the kind of reader who searches bookshops for the latest popular-scientific books on developments in cosmology, on recently found fundamental particles, or on the ever more magical findings of quantum physics. These readers presumably have some background of classical school physics (although most of it may have been forgotten). It is the kind of reader who does not like to be bothered with formulae or is even allergic to them, but who has the interest and tenacity to read sentences twice if necessary. But complete novices in the matters of the atomic world should be warned: the stories told in this book are not the same as usually found in books about quantum phenomena. This book does not give the conventional explanations. In order to read the usual stories, it is better to start in one of the many other popular-scientific books. What then is this book about? This book certainly does not pretend to contain a new theory of quantum mechanics, nor does it have the intention. Quantum theory in its present form is an almost perfect tool to calculate the behaviour of elementary particles. But the theory is “strange”, it is not something that intuitively can be understood. What this book tries to add are visualisations or mental pictures, closer to the intuition, because they are based on classical physics. However, the mental pictures in this book are not just half-baked analogies or metaphors, they are solidly founded on a large body of mathematical theory (for the diehards: the theory can be found in the appendix). This aspect makes this book different from other popular-scientific books.
I have been reading up on the trans-Planckian problem with the black hole evaporation process. (See the end for an update in March 2018)
Here is the problem.
An observer far away from a black hole sees photons of normal infrared or radio-wave energies coming from it (i.e. << 1 eV). If one calculates the energies these photons should have had in the vicinity of the horizon, the energy becomes high – higher than the Planck energy, exponentially so. Equivalently, if we ride with the photon down to the horizon, the photon blueshifts like mad, going ‘trans-Planckian’ – i.e. having more energy than the Planck energy.
Looked at another way: if a photon starts out at the horizon, then we won’t ever see it as a distant observer. So it needs to start out just above the horizon, at a distance from the horizon given by the Heisenberg uncertainty principle, and propagate to us. The problem is that the energy of these evaporating photons must be enormous at this quantum distance from the horizon – not merely enormous, but exponentially enormous. A proper analysis actually starts the photon off in the formation of the black hole, but the physics is the same.
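The exponential blueshift can be put in rough numbers. Tracing a late-time Hawking photon backwards toward the horizon, its frequency grows like exp(κt), with κ = c³/4GM the surface-gravity rate. For a solar-mass hole, a ~1 eV photon (both values chosen only for illustration) passes the Planck energy after about a millisecond of traced-back time.

```python
import math

G = 6.674e-11
c = 2.998e8
M_SUN = 1.989e30

M = M_SUN
kappa = c ** 3 / (4 * G * M)   # surface-gravity rate, ~5e4 s^-1 for 1 M_sun

E_PLANCK_EV = 1.22e28          # Planck energy in eV
E_PHOTON_EV = 1.0              # illustrative ~1 eV photon seen far away

# traced-back time for the blueshift factor exp(kappa * t) to reach the
# Planck energy
t = math.log(E_PLANCK_EV / E_PHOTON_EV) / kappa
print(kappa, t)                # t comes out of order a millisecond
```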
Adam Helfer puts it well in his paper. Great clear writing and thinking.
Trans-Planckian modes, back-reaction, and the Hawking process
My take is simple. After reading Helfer’s paper plus others on the subject, I’m fairly convinced that black holes of astrophysical size (or even down to trillions of tons) do not evaporate.
The math is good. The physics isn’t
Let’s get things straight here: the math behind Hawking evaporation is good – Hawking’s calculation of black hole evaporation is not in question.
It should be emphasized that the problems uncovered here are entirely physical, not mathematical. While there are some technical mathematical concerns with details of Hawking’s computation, we do not anticipate any real difficulty in resolving these (cf. Fredenhagen and Haag 1990). The issues are whether the physical assumptions underlying the mathematics are correct, and whether the correct physical lessons are being drawn from the calculations.
Yet Hawking’s prediction of black hole evaporation is one of the great predictions of late 20th century physics.
Whether black holes turn out to radiate or not, it would be hard to overstate the significance of these papers. Hawking had found one of those key physical systems which at once bring vexing foundational issues to a point, are accessible to analytic techniques, and suggest deep connections between disparate areas of physics. (Helfer, A. D. (2003). Do black holes radiate? Retrieved from https://arxiv.org/pdf/gr-qc/0304042.pdf)
So it’s an important concept. In fact it’s so important that not only black hole physics but much of quantum gravity and cosmology uses or even depends on black hole evaporation. Papers with titles like “Avoiding the Trans-Planckian Problem in Black Hole Physics” abound.
The trans-Planckian problem is indicative of the state of modern physics.
There are so many theories in physics today that rely on an unreasonable extrapolation of the efficacy of quantum mechanics to energies and scales that are not merely beyond experimental data, but exponentially beyond anything we have evidence for. It’s like that old joke about putting a dollar into a bank account and waiting a million years – even at a few per cent interest your money will be worth more than the planet. A straightforward look at history shows that currencies and banks live for hundreds of years – not millions. The same thing happens in physics – you can’t connect two reasonable physical states through an unphysical one and expect it to work.
The trans-Planckian problem is replicated perfectly in inflationary big bang theory.
The trans-Planckian problem seems like a circle the wagons type of situation in physics. Black hole evaporation now has too many careers built on it to be easily torn down.
To emphasize the essential way these high-frequency modes enter, suppose we had initially imposed an ultraviolet cut-off Λ on the in-modes. Then we should have found no Hawking quanta at late times, for the out-modes’ maximum frequency would be ∼ v′(u)Λ, which goes to zero rapidly. (It is worth pointing out that this procedure is within what may be fairly described as text-book quantum field theory: start with a cut-off, do the calculation, and at the very end take the cut-off to infinity. That this results in no Hawking quanta emphasizes the delicacy of the issues. In this sense, the trans-Planckian problem may be thought of as a renormalization-ambiguity problem.)
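Helfer’s point in that passage can be put in toy numbers: impose a cut-off Λ on the in-modes, and the maximum available out-mode frequency, ~v′(u)Λ with v′(u) ~ exp(−κu), collapses almost instantly. Both numbers below are illustrative assumptions.

```python
import math

KAPPA = 5e4      # s^-1: surface-gravity rate for a solar-mass hole (rounded)
LAMBDA = 1e43    # Hz: an assumed Planck-scale cut-off on the in-modes

def max_out_frequency(u):
    """Maximum out-mode frequency ~ v'(u) * Lambda, with v'(u) ~ exp(-kappa*u)."""
    return LAMBDA * math.exp(-KAPPA * u)

for u in (0.0, 1e-3, 2e-3, 3e-3):
    print(u, max_out_frequency(u))
# within a few milliseconds the ceiling drops below 1 Hz: no late-time quanta
```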
Some may argue that other researchers have solved the trans-Planckian problem, but it’s just too simple a problem to get around.
One way around it – which I assume is what many researchers think – is that quantum mechanics is somehow different from every other physical theory ever found, in that it has no UV limit, no IR limit, no limits at all. In my view that is extremely unlikely. Quantum mechanics has limits, like every other theory.
Possible limits of quantum mechanics:
- Zero point: perhaps there is a UV cutoff (Λ). The quantum vacuum cannot create particles of arbitrarily large energies.
- Instant collapse: while it’s an experimental fact that QM has non-local connections, the actual speed of these connections has only been tested to a few times the speed of light.
- Quantum measurement: Schrödinger’s cat is as Schrödinger initially intended it to be seen – as an illustration of the absurdity of QM applied to macroscopic systems.
If there is a limit on quantum mechanics – that QM is like any other theory – a tool that works very well in some domain of physical problems, then many many pillars of theoretical physics will have to tumble, black hole evaporation being one of them.
The other argument – Unruh saves evaporation?
March 2018 update: Ok – upon reading this paper by Steven B. Giddings
Where does Hawking radiation originate? A common picture is that it arises from excitations very near or at the horizon, and this viewpoint has supported the “firewall” argument and arguments for a key role for the UV-dependent entanglement entropy in describing the quantum mechanics of black holes. However, closer investigation of both the total emission rate and the stress tensor of Hawking radiation supports the statement that its source is a near-horizon quantum region, or “atmosphere,” whose radial extent is set by the horizon radius scale.
So, having read this after I wrote the above, I am no longer convinced that black holes don’t radiate.
Helfer’s argument is below: basically, in order for Unruh’s/Giddings’ ‘saving’ of black hole radiation to work, there has to be enough ‘source space’ around the black hole to generate the Hawking radiation. There might be.
Qingdi Wang, Zhen Zhu, and William G. Unruh – “How the huge energy of quantum vacuum gravitates to drive the slow accelerating expansion of the Universe”
It (I will call the paper WZU) has been discussed at several places:
Sabine Hossenfelder at the Backreaction blog,
So why talk about it more here?
Well, because it’s an interesting paper, and I think that many of the most interesting bits have been ignored or misunderstood (I’m talking here about actual physicists, not the popular press articles).
For instance here are two paragraphs from Sabine Hossenfelder
Another troublesome feature of their idea is that the scale-factor of the oscillating space-time crosses zero in each cycle so that the space-time volume also goes to zero and the metric structure breaks down. I have no idea what that even means. I’d be willing to ignore this issue if the rest was working fine, but seeing that it doesn’t, it just adds to my misgivings.
So with the first paragraph, Sabine is talking about the a(t, x) factor in the metric (see equation 23 in the paper). I think she could be a little more up front here: a(t, x) goes to zero all right, but only in very small regions of space for very short times (I’ll come back to that later). So in reality the average of a(t, x) over any Planck-scale-or-larger distance/time determines an almost flat, almost Lambda-free universe: average(a(t, x)) → the a(t) of an FLRW metric. I guess Sabine is worried about those instants when there are singularities in the solution. I agree with the answer to this supplied in the paper:
It is natural for a harmonic oscillator to pass its equilibrium point a(t,x) = 0 at maximum speed without stopping. So in our solution, the singularity immediately disappears after it forms and the spacetime continues to evolve without stopping. Singularities just serve as the turning points at which the space switches. ...(technical argument which is not all that complicated)... In this sense, we argue that our spacetime with singularities due to the metric becoming degenerate (a = 0) is a legitimate solution of GR.
As I said, more on that below when we get to my take on this paper.
The second paragraph above from the Backreaction blog concerns the fact that the paper authors used semi classical gravity to derive this result.
The other major problem with their approach is that the limit they work in doesn’t make sense to begin with. They are using classical gravity coupled to the expectation values of the quantum field theory, a mixture known as ‘semi-classical gravity’ in which gravity is not quantized. This approximation, however, is known to break down when the fluctuations in the energy-momentum tensor get large compared to its absolute value, which is the very case they study.
They are NOT using a classical gravity coupled to the expectation values of the quantum field theory. Indeed, according to WZU and the mathematics of the paper they say:
In this paper, we are not trying to quantize gravity. Instead, we are still keeping the spacetime metric a(t, x) as classical, but quantizing the fields propagating on it. The key difference from the usual semiclassical gravity is that we go one more step—instead of assuming the semiclassical Einstein equation, where the curvature of the spacetime is sourced by the expectation value of the quantum field stress energy tensor, we also take the huge fluctuations of the stress energy tensor into account. In our method, the sources of gravity are stochastic classical fields whose stochastic properties are determined by their quantum fluctuations.
So I think that she has it wrong. In her reply to my comment on her blog she states that it’s still semiclassical gravity because they use the expectation values of the fluctuations (they don’t, as you can see from the quote above, or better by looking at the paper – equation 29 talks about expectation values, but the actual solution does not use them). She concludes her comment: “Either way you put it, gravity isn’t quantized.” I think that’s also a fair appraisal of the attitude of many people on reading this paper: many don’t like it because gravity is treated classically.
Why I think the paper is interesting.
Gravity is not quantized: get over it
I think it’s interesting, as their approach to connecting gravity to the quantum world is basically identical to my Fully Classical Quantum Gravity experimental proposal – namely that gravity is not quantized at all and that gravity couples directly to the sub-quantum fluctuations. Wang and co-authors apologize for the lack of a quantum theory of gravity, but that appears to me to be more of a consensus-toeing statement than physics. Indeed, the way it’s shoved in at the start of section C makes it seem like an afterthought.
(Gravitational) Singularities are no big deal
Singularities are predicted by many (or even all?) field theories in physics. In QED the technique of renormalization works to remove singularities (which are the same as infinities). In the rest of modern QFT, renormalization only partially tames the singularities. In other words, quantum field theory blows up all by itself, without any help from other theories. It’s naturally badly behaved.
The Einstein equations behave differently under singular conditions: they are completely well behaved. It’s only when other fields are brought in, such as electromagnetism or quantum field theory, that trouble starts. All on their own, singularities are no big deal in gravity.
So I don’t worry about the microscopic, extremely short lived singularities in WZU at all.
Why it’s exciting
We have WZU metric equation 23
ds² = −dt² + a²(t, x)(dx² + dy² + dz²)
a(t, x) oscillates THROUGH zero to negative values, but the metric depends on a², so the spatial part of the metric is non-negative, with some zeros. These zeros are spread out quasi-periodically in space and time. If one takes two points on the manifold (Alice and Bob, denoted A & B), then the distance between A and B will be equivalent to the flat-space measure (I am not considering A and B to be cosmic-scale distances apart in time or space, so it’s almost Minkowski). Thus imagine A and B being a thousand km apart. The scale factor a(t, x) averages to 1.
Here is the exciting bit. While an arbitrary line (or the average of an ensemble of routes) from A -> B is measured as a thousand km, there are shorter routes through the metric. Much shorter routes. How short? Perhaps arbitrarily short. It may be that there is a vanishingly small set of paths with length ds = 0, and some number of paths with ds just greater than 0, all the way up to ‘slow paths’ that spend more time in a > 1 areas.
Imagine a thread-like singularity (like a cosmic string – or better, a singularity not unlike a Kerr singularity where a >> m). In general relativity such a thread is of thickness 0, and the ergo-region around it also tends to zero volume. One calculation of the tension on such a gravitational singularity ‘thread’ (I use the term thread so as not to get confused with string theory) comes out to a value of about 0.1 newtons. Even a tenth of a newton of tension on something so thin is incredible. Such a thread immersed in the WZU background will find shorter paths – paths that spend more time in areas where a << 1, these paths being much more energetically favoured. There are also very interesting effects when such gravitational thread singularities are dragged through the WZU background. I think that this might be the mechanism that creates enough action to generate electromagnetism from pure general relativity alone.
So these thread singularities weave their way through the frothy WZU metric, and as such the distance a single thread measures between Alice and Bob may be far, far less than the flat-space equivalent.
It seems to me that one could integrate the metric as given in WZU equation 23 with a shortest-path condition and come up with something. Here is one possible numerical approach: start with a straight thread from A to B. Then relax the straight-line constraint, assign a tension to the thread, and see what the length of the thread is after a few thousand iterations, where at each iteration each segment moves toward a lower energy state (i.e. thread contraction).
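A minimal sketch of that relaxation scheme, under the same illustrative assumptions as before (the form of a(x, y) is mine, not from the WZU paper): the thread is a polyline pinned at Alice and Bob, its ‘energy’ is its metric length, and each iteration takes a finite-difference gradient-descent step that is accepted only if the energy drops (backtracking on the step size otherwise):

```python
import math

# Toy froth: scale factor averaging roughly 1, with pockets down to 0.1
# (illustrative assumption, not from the WZU paper).
def a(x, y):
    return 1.0 + 0.9 * math.sin(8.0 * x) * math.sin(8.0 * y)

def energy(xs, ys):
    # Metric length of the polyline thread: sum of a-weighted segments.
    E = 0.0
    for k in range(len(xs) - 1):
        am = 0.5 * (a(xs[k], ys[k]) + a(xs[k + 1], ys[k + 1]))
        E += am * math.hypot(xs[k + 1] - xs[k], ys[k + 1] - ys[k])
    return E

M = 40
# Straight thread from Alice (0,0) to Bob (1,1), with a tiny bow added
# to break the symmetry of the exact diagonal.
xs = [k / M for k in range(M + 1)]
ys = [k / M + 0.01 * math.sin(math.pi * k / M) for k in range(M + 1)]

initial = energy(xs, ys)
eta, eps = 0.005, 1e-5
for _ in range(1000):
    E0 = energy(xs, ys)
    gx, gy = [0.0] * (M + 1), [0.0] * (M + 1)
    for k in range(1, M):  # endpoints stay pinned at Alice and Bob
        xs[k] += eps; gx[k] = (energy(xs, ys) - E0) / eps; xs[k] -= eps
        ys[k] += eps; gy[k] = (energy(xs, ys) - E0) / eps; ys[k] -= eps
    trial_x = [x - eta * g for x, g in zip(xs, gx)]
    trial_y = [y - eta * g for y, g in zip(ys, gy)]
    if energy(trial_x, trial_y) < E0:
        xs, ys = trial_x, trial_y   # accept: the thread got shorter
    else:
        eta *= 0.5                  # backtrack: shrink the step instead
print(f"thread length: {initial:.3f} -> {energy(xs, ys):.3f}")
```

The thread sags off the diagonal into the a << 1 pockets and its metric length drops monotonically, which is the ‘thread contraction’ picture above in its crudest form.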
This opens up:
Realist, local quantum mechanics is usually thought to require some dependency on non-local connections, as quantum experiments have shown. This shortcut path may answer that need for non-local connections between particles, i.e. provide a mechanism for entanglement, a mechanism for Einstein’s “spooky action at a distance”.
Faster-than-light communication.
It’s always fun to see whether there are realistic methods by which one might beat the speed limit on light. Wormhole traversal has been one of the favourites to date. I think the WZU paper points at another mechanism: the existence of shorter paths through the sub-quantum general-relativistic froth of WZU. How might one construct a radio to exploit this? Entangled particles, particles that preferentially follow the zeros of a(t, x), etc. One could imagine a brute-force test in which huge pulses of energy are transmitted through space at random intervals. Perhaps a precursor signal could be measured at the detector, where some of the energy takes a short path through the WZU metric.
- “…Nevertheless, due to the inner-atomic movement of electrons, atoms would have to radiate not only electro-magnetic but also gravitational energy, if only in tiny amounts. As this is hardly true in Nature, it appears that quantum theory would have to modify not only Maxwellian electrodynamics, but also the new theory of gravitation.” – Einstein, 1916
Here is the paper…