An observer far away from a black hole sees photons of normal infrared or radio wave energies coming from the black hole (i.e. << 1 eV). If one calculates the energies that these photons must have had in the vicinity of the black hole horizon, the energy becomes high – higher than the Planck energy, exponentially so. Of course if we ride with the photon down to the horizon, the photon blueshifts like mad, going ‘trans-Planckian’ – i.e. having more energy than the Planck energy.

Looked at another way: if a photon starts out *at* the horizon, then we as distant observers will never see it. So it needs to start out just above the horizon – at a distance from the horizon set by the Heisenberg uncertainty principle – and propagate to us. The problem is that the energy of these evaporating photons must be enormous at this quantum distance from the horizon – not merely enormous, but exponentially enormous. A proper analysis actually starts the photon off during the formation of the black hole, but the physics is the same.
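To see the numbers, here is a rough sketch (my own estimate, not Helfer’s calculation – the input values are standard textbook figures): near the horizon, sqrt(1 − rs/r) ≈ ℓ/(2 rs) at proper distance ℓ, so a photon received far away with energy E∞ had local energy E∞ · 2rs/ℓ. Starting it one Planck length above a solar-mass horizon already puts it at roughly the Planck energy – and tracing the mode back through the hole’s formation multiplies this by a further exponential factor.

```python
# Sketch (my numbers, not from the post): static blueshift of a Hawking
# photon traced back to a proper distance l above a Schwarzschild horizon.
# Near the horizon sqrt(1 - rs/r) ~ l / (2 rs), so a photon seen at
# infinity with energy E_inf had local energy E_loc ~ E_inf * 2 rs / l.

rs = 3.0e3            # Schwarzschild radius of a solar-mass hole, m
kT_hawking = 5.0e-12  # Hawking temperature of a solar-mass hole, eV (~6e-8 K)
l_planck = 1.6e-35    # Planck length, m
E_planck = 1.2e28     # Planck energy, eV

E_loc = kT_hawking * 2.0 * rs / l_planck
print(f"local energy at one Planck length: {E_loc:.1e} eV")
print(f"ratio to Planck energy: {E_loc / E_planck:.2f}")
```

So the static blueshift alone reaches the Planck scale; the exponential factor from following the in-modes back in time is what makes the problem “exponentially” trans-Planckian.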

Adam Helfer puts it well in his paper. Great clear writing and thinking.

My take is simple. After reading Helfer’s paper plus others on the subject, I’m fairly convinced that black holes of astrophysical size (or even down to trillions of tons) do not evaporate.

Let’s get things straight here: the math behind Hawking evaporation is good – Hawking’s math for black hole evaporation is not in question.

It should be emphasized that the problems uncovered here are entirely physical, not mathematical. While there are some technical mathematical concerns with details of Hawking’s computation, we do not anticipate any real difficulty in resolving these (cf. Fredenhagen and Haag 1990). The issues are whether the physical assumptions underlying the mathematics are correct, and whether the correct physical lessons are being drawn from the calculations.

Yet Hawking’s prediction of black hole evaporation is one of the great predictions of late 20th century physics.

Whether black holes turn out to radiate or not, it would be hard to overstate the significance of these papers. Hawking had found one of those key physical systems which at once bring vexing foundational issues to a point, are accessible to analytic techniques, and suggest deep connections between disparate areas of physics. (Helfer, A. D. (2003). Do black holes radiate? Retrieved from https://arxiv.org/pdf/gr-qc/0304042.pdf)

So it’s an important concept. In fact it’s *so* important that much of black hole physics – and quantum gravity and cosmology besides – uses or even *depends* on black hole evaporation. Papers with titles like “Avoiding the Trans-Planckian Problem in Black Hole Physics” abound.

There are many theories in physics today that rely on an unreasonable extrapolation of the efficacy of quantum mechanics to energies and scales not merely larger than we have experimental evidence for, but exponentially larger. It’s like that old joke about putting a dollar into a bank account and waiting a million years – even at a few per cent interest your money will be worth more than the planet. A straightforward look at history shows that currencies and banks live for hundreds of years – not millions. The same thing happens in physics – you can’t connect two reasonable physical states through an unphysical one and expect it to work.

The trans-Planckian problem is replicated perfectly in inflationary big bang theory.

The trans-Planckian problem looks like a circle-the-wagons situation in physics. Black hole evaporation now has too many careers built on it to be easily torn down.

**Torn down:**

To emphasize the essential way these high–frequency modes enter, suppose we had initially imposed an ultraviolet cut–off Λ on the in–modes. Then we should have found no Hawking quanta at late times, for the out–modes’ maximum frequency would be ∼ v′(u)Λ, which goes to zero rapidly. (It is worth pointing out that this procedure is within what may be fairly described as text–book quantum field theory: start with a cut–off, do the calculation, and at the very end take the cut–off to infinity. That this results in no Hawking quanta emphasizes the delicacy of the issues. In this sense, the trans–Planckian problem may be thought of as a renormalization–ambiguity problem.)

Some may argue that other researchers have solved the trans-Planckian problem, but it’s just too simple a problem to get around.

One way around it – which I assume is what many researchers think – is that quantum mechanics is somehow different from every other physical theory ever found, in that it has no UV limit, no IR limit, no limits at all. In my view that is extremely unlikely. Quantum mechanics has limits, like every other theory.

- Zero point: perhaps there is a UV cutoff (Λ). The quantum vacuum cannot create particles of arbitrarily large energies.
- Instant collapse: while it’s an experimental fact that QM has non-local connections, the actual speed of these connections has only been tested to a few times the speed of light.
- Quantum measurement: Schrödinger’s cat is as Schrödinger originally intended it to be seen – an illustration of the absurdity of QM applied to macroscopic systems.

If there is a limit on quantum mechanics – if QM is like any other theory, a tool that works very well in some domain of physical problems – then many pillars of theoretical physics will have to tumble, black hole evaporation being one of them.

---

The paper (I will call it WZU) has been discussed in several places:

- Sabine Hossenfelder at the Backreaction blog
- Reddit

So why talk about it more here?

Well, because it’s an interesting paper, and I think that many of the most interesting bits have been ignored or misunderstood (I’m talking here about actual physicists, not the popular press articles).

For instance, here are two paragraphs from Sabine Hossenfelder:

Another troublesome feature of their idea is that the scale-factor of the oscillating space-time crosses zero in each cycle so that the space-time volume also goes to zero and the metric structure breaks down. I have no idea what that even means. I’d be willing to ignore this issue if the rest was working fine, but seeing that it doesn’t, it just adds to my misgivings.

In the first paragraph, Sabine is talking about the a(t, **x**) factor in the metric (see equation 23 in the paper). I think she could be a little more up front here: a(t, **x**) goes to zero all right, but only in very small regions of space for very short times (I’ll come back to that later). So in reality the average of a(t, **x**) over any distance or time at the Planck scale or larger determines an almost flat, almost Lambda-free universe: average(a(t, **x**)) → the a(t) of an FLRW metric. I guess Sabine is worried about those instants when there are singularities in the solution. I agree with the answer to this supplied in the paper:

It is natural for a harmonic oscillator to pass its equilibrium point a(t,x) = 0 at maximum speed without stopping. So in our solution, the singularity immediately disappears after it forms and the spacetime continues to evolve without stopping. Singularities just serve as the turning points at which the space switches. ...(technical argument which is not all that complicated)... In this sense, we argue that our spacetime with singularities due to the metric becoming degenerate (a = 0) is a legitimate solution of GR.

As I said, more on that below when we get to my take on this paper.

The second paragraph above from the Backreaction blog concerns the fact that the paper’s authors used semiclassical gravity to derive this result.

The other major problem with their approach is that the limit they work in doesn’t make sense to begin with. They are using classical gravity coupled to the expectation values of the quantum field theory, a mixture known as ‘semi-classical gravity’ in which gravity is not quantized. This approximation, however, is known to break down when the fluctuations in the energy-momentum tensor get large compared to its absolute value, which is the very case they study.

They are NOT using classical gravity coupled to the expectation values of the quantum field theory. Indeed, according to WZU and the mathematics of the paper, they say:

So I think she has it wrong. In her reply to my comment on her blog she states that it’s still semiclassical gravity since they use the expectation values of the fluctuations (they don’t, as you can see from the quote above, or better by looking at the paper – equation 29 talks about expectation values, but the actual solution does not use them). She concludes her comment: “Either way you put it, gravity isn’t quantized.” I think that’s a fair appraisal of the attitude of many people: on reading this paper, many don’t like it because gravity is treated classically.

I think it’s interesting, as their approach to connecting gravity to the quantum world is basically identical to my Fully Classical Quantum Gravity experimental proposal – namely that *gravity is not quantized at all and gravity couples directly to the sub-quantum fluctuations*. Wang and co-authors apologize for the lack of a quantum theory of gravity, but that appears to me to be more of a consensus-toeing statement than physics. Indeed, the way it’s shoved in at the start of section C makes it seem like an afterthought.

Singularities are predicted by many (or even all?) field theories in physics. In QED the technique of renormalization works to remove singularities (which is to say, infinities). In the rest of modern QFT, singularities are only perhaps removed by renormalization. In other words, quantum field theory blows up all by itself, without any help from other theories. It’s naturally badly behaved.

The Einstein equations behave differently under singular conditions: they are completely well behaved. It’s only when other fields are brought in, such as electromagnetism or quantum field theory, that trouble starts. On their own, singularities are no big deal in gravity.

So I don’t worry about the microscopic, extremely short-lived singularities in WZU at all.

We have the WZU metric, equation 23:

ds² = −dt² + a²(t, x)(dx² + dy² + dz²)

a(t, x) oscillates THROUGH zero to negative values, but the metric depends on a², so we have a non-negative spatial metric that has some zeros. These zeros are spread out quasi-periodically in space and time. If one takes two points on the manifold (Alice and Bob, denoted A & B), then the distance between A and B will be equivalent to the flat-space measure (I am not taking A and B to be cosmic-scale distances apart in time or space, so it’s almost Minkowski). Thus imagine A and B being a thousand km apart. The scale factor a(t, x) averages to 1.

Here is the exciting bit. While an arbitrary line (or the average over an ensemble of routes) from A → B measures a thousand km, there are shorter routes through the metric. Much shorter routes. How short? Perhaps arbitrarily short. It may be that there is a vanishingly small set of paths with length ds = 0, some number of paths with ds just greater than 0, and so on all the way up to ‘slow paths’ that spend more time in a > 1 regions.

Imagine a thread-like singularity (like a cosmic string – or better, a singularity not unlike a Kerr singularity with a >> m). In general relativity such a thread is of thickness zero, and the ergoregion around it also tends to zero volume. One calculation of the tension on such a gravitational singularity ‘thread’ (I use the term thread so as not to get confused with string theory) comes out to a value of about 0.1 newtons. That much tension on something so thin is incredible. Such a thread immersed in the WZU background will find shorter paths – paths that spend more time in regions where a << 1, these paths being much more energetically favoured. There are also very interesting effects when such gravitational thread singularities are dragged through the WZU background. I think this might be the mechanism that creates enough action to generate electromagnetism from pure general relativity alone.

So these thread singularities weave their way through the frothy WZU metric, and as such the distance a single such thread measures between Alice and Bob may be far, far less than the flat-space equivalent.

It seems to me that one could integrate the metric given in WZU equation 23 with a shortest-path condition and come up with something. Here is one possible numerical approach: start with a straight thread from A to B. Then relax the straight-line constraint, assign a tension to the thread, and see what the length of the thread is after a few thousand iterations, where at each iteration each segment allows itself to move toward a lower energy state (i.e. thread contraction).
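As a toy illustration of that relaxation recipe – everything here is my own construction: the quasi-periodic form of a(x, y), the acceptance rule, and the step sizes are illustrative assumptions, not anything from WZU – a Monte Carlo sketch in Python:

```python
import numpy as np

# Toy relaxation of a "thread" from A to B through an oscillating scale
# factor. The spatial length element is |a| * |dx|, so regions with
# |a| << 1 give short paths.

def a(p):
    # quasi-periodic scale factor oscillating through zero (my choice)
    return np.cos(40.0 * p[..., 0]) * np.cos(41.0 * p[..., 1])

def length(path):
    # weighted path length: sum of |a(midpoint)| * segment length
    mids = 0.5 * (path[1:] + path[:-1])
    segs = np.linalg.norm(path[1:] - path[:-1], axis=1)
    return np.sum(np.abs(a(mids)) * segs)

rng = np.random.default_rng(0)
n = 200
path = np.linspace([0.0, 0.0], [1.0, 0.0], n)   # straight thread A -> B
straight = length(path)

for it in range(3000):
    trial = path.copy()
    i = rng.integers(1, n - 1)                   # endpoints stay pinned
    trial[i] += rng.normal(scale=0.003, size=2)
    if length(trial) < length(path):             # keep moves that shorten
        path = trial

print(f"straight-line length: {straight:.3f}")
print(f"relaxed thread length: {length(path):.3f}")
```

A real version would add an explicit tension term so segments resist stretching; here only length-reducing moves are accepted, which drives the thread toward the a ≈ 0 regions in the same way.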

This opens up:

Realist, local quantum mechanics is usually thought to require some dependency on non-local connections, as quantum experiments have shown. This shortcut path may be an answer to the need for non-local connections between particles – i.e. a mechanism for entanglement, a mechanism for Einstein’s “spooky action at a distance”.

It’s always fun to see if there are realistic methods by which one might beat the speed limit of light. Wormhole traversal has been one of the favourites to date. I think the WZU paper points at another mechanism – the existence of shorter paths through the sub-quantum general-relativistic froth of WZU. How might one construct a radio to exploit this? Entangled particles, particles that follow the zeros of a(t, x) preferentially, etc. One could imagine a brute-force test in which huge pulses of energy are transmitted through space at random intervals. Perhaps a precursor signal could be measured at the detector, where some of the energy takes a short path through the WZU metric.

---

But there’s another view – one that’s been around for almost a century – in which particles really do have precise positions at all times. This alternative view, known as pilot-wave theory or Bohmian mechanics, …

## New Support for Alternative Quantum View

An experiment claims to have invalidated a decades-old criticism against pilot-wave theory, an alternative formulation of quantum mechanics that avoids the most baffling features of the subatomic universe.

---

- “…Nevertheless, due to the inner-atomic movement of electrons, atoms would have to radiate not only electro-magnetic but also gravitational energy, if only in tiny amounts. As this is hardly true in Nature, it appears that quantum theory would have to modify not only Maxwellian electrodynamics, but also the new theory of gravitation.” –
*Einstein, 1916*

Einstein, it would seem, was wrong on the gravitational side of this.

Working Paper Fully Classical Quantum Gravity

The paper looks at possible ways to see these tiny emissions (nuclear scale emissions are higher) and thus lays out a quantum gravity experiment achievable with today’s technology.

Here is the paper…

Also see these references…

Article Emergent Quantum Mechanics

---

So this memory effect, combined with energy absorption and re-radiation, IS QM.

The Kerr ring has two frequency bands: the EM band is a high-frequency exchange in the linear region of the singularity line, while the Compton–de Broglie frequency is the QM band.

---

Details on signal processing can be found here.

---

In this two-page paper, I look at how the dimensions of a Kerr singularity and the strength of the electric Coulomb effect compare. The size (or rather ratio) of the Kerr ring singularity is exactly equal to the ratio of the electric to gravitational force between two electrons! Thus a number which is thought to be of electromagnetic origin can be determined from general relativity alone.

It’s a PDF (also posted at ResearchGate):

---

The story was recently highlighted in the press:

Astronomers using the Hubble Space Telescope have spotted a supermassive black hole that has been propelled out of the centre of the galaxy where it formed. They reckon the huge object was created when two galaxies merged and was then ejected by gravitational waves. The discovery centres on galaxy 3C186, which lies about eight billion light-years from Earth and contains an extremely bright object that astronomers believe is a black hole weighing about one billion Suns. Most large galaxies, including our own Milky Way, contain such supermassive black holes at their cores, with these huge, bright objects being powered by radiation given off by matter as it accelerates into the black hole.

See “Supermassive black hole was ejected by gravitational waves”, “Hubble detects supermassive black hole kicked out of galactic core”, etc.

It looks like the ejected hole was quite efficiently ‘tractor beamed’ to its ejection velocity by the gravitational wave emission.

The calculations are quite simple here, at least to a first approximation. There is a black hole formed of total mass 3 billion solar masses (using the arXiv paper as the source for all calculations). Since a solar-mass black hole has a Schwarzschild radius of 3 km, that makes for an object diameter of about 18 billion km, which is also of the order of the wavelength of the waves involved in a gravitational merger.

The merger time in which 80% of the energy is released is roughly 100 M for two holes of mass M merging. We have M = 1.5e9 solar masses, so the light travel time is about 1.5e9 × 3 km / (3e8 m/s), or 16,000 seconds – that is M (in time units) in this case. 100 M is the time in which all the energy comes out – AKA the chirp.

So about 1,600,000 seconds is the relevant time. (For GW150914, which LIGO saw, the same time would be 0.03 seconds – those holes were only ~30 solar masses.)

A total interaction time of about 20 days. So the black hole is accelerated to a speed of 2000 km/s over 1,600,000 seconds. That’s an acceleration of about 1 m/s², or about 1/10 of earth’s gravity – funny how the numbers work out to an understandable acceleration. The force is huge: F = ma, or about 1×10^40 newtons. The total kinetic energy is KE = ½ (3e9 solar masses) × (2000 km/s)², about 1.2×10^52 J.
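The arithmetic above is easy to check (same rounded inputs as in the text):

```python
# Back-of-envelope check of the kick numbers, using the text's rounded values
M_sun = 2.0e30          # kg
m_bh = 3.0e9 * M_sun    # merged hole, kg
v = 2.0e6               # kick velocity, m/s
t = 1.6e6               # interaction time (~100 M), s

a = v / t               # acceleration, ~1 m/s^2
F = m_bh * a            # force, ~1e40 N
KE = 0.5 * m_bh * v**2  # kinetic energy, ~1.2e52 J
print(f"a = {a:.2f} m/s^2, F = {F:.1e} N, KE = {KE:.1e} J")
```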

From conservation of momentum we can get the total momentum of the gravitational waves: E/c = (3e9 solar masses) × (2000 km/s), which gives roughly 10^54 J of GW energy. This much energy was in a region about 18 billion km wide and about 1,600,000 light-seconds long, so an average of about 1e13 J/m³, with a peak likely 5× that. We have an h for that from a typical expression for the energy density ρ_GW of a gravitational wave: h = sqrt(32πG·ρ_GW / (ω²c²)).

Wolfram shows h as 0.8 for these values (h cannot be bigger than 1, and anything over 0.1 means you need the full non-linear theory to get accurate results). In other words the math points to some sort of maximal connection – the gravitational waves must have been very strongly coupled to the structure. Gravitational waves, while only weakly coupled to something like LIGO, are very strongly coupled – a high coupling constant – to regions of large curvature.

http://www.wolframalpha.com/input/?i=sqrt(32*pi*G*(1.7e13J%2F(metre%5E3))%2F((1e-6%2Fsecond)%5E2*c%5E2)
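The same check in a few lines (my evaluation, using the ρ = 1.7e13 J/m³ and ω = 1e-6 s⁻¹ from the Wolfram link; I get h just above 1 rather than 0.8, but the order-unity conclusion is the same):

```python
import math

# Strain estimate from GW energy density: h = sqrt(32*pi*G*rho / (w^2 c^2)).
# Inputs are the values in the Wolfram Alpha link above.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
rho = 1.7e13       # peak GW energy density, J/m^3
omega = 1.0e-6     # wave angular frequency, 1/s

h = math.sqrt(32.0 * math.pi * G * rho / (omega**2 * c**2))
print(f"h ~ {h:.2f}")
```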

This is already known in the land of GR. My idea is that particles expose areas of very large curvature (naked singularities) and hence also couple extremely well to gravitational waves – well enough that we can construct electromagnetism as an emergent phenomenon of GR.

---

Ian Sample has a 38-min talk with Gerard ’t Hooft about a paper he presented at EmQM2011 in Vienna. The EmQM conference is held every two years; in 2015 I presented a poster called Can a sub-quantum medium be provided by General Relativity?. He also chats with King’s College London’s Dr Eleanor Knox for some historical perspective, and Professor Carlo Rovelli for a bit about the relational interpretation of quantum mechanics.

Ian writes

The 20th century was a golden one for science. Big bang cosmology, the unravelling of the genetic code of life, and of course Einstein’s general theory of relativity. But it also saw the birth of quantum mechanics – a description of the world on a subatomic level – and unlike many of the other great achievements of the century, the weird world of quantum physics remains as mysterious today as it was a century ago. But what if strange quantum behaviour emerged from familiar, classical physics? How would this alter our view of the quantum world? And, more importantly, what would it tell us about the fundamental nature of reality?

Some notes while listening…

*1min* The Podcast starts off with Feynman’s guess snippet. Which is as funny as it is right.

*2min* That is followed by a very short well known (to quantum mechanics like us) intro to quantum mechanics.

*4min* Then – Ian actually uses the words ‘Emergent Quantum Mechanics’!

*5-7min* Gerard talks about the accuracy and weirdness of quantum mechanics.

*8min* Gerard – “Classical Physics is an approximation.” – not incompatible.

*8min* Ian brings out ‘God does not play dice’.

*9min* Knox – talks about the measurement problem. The collapse. The Copenhagen Interpretation.

*10min* Knox talks about emergent theories – like biology, thermodynamics. So is quantum mechanics emergent? – Will EmQM help with the measurement problem?

*13min* Gerard – perhaps the randomness of QM arises from stochastic classical actions. The answer is no – it’s not classical – “it’s different to its bones” from classical. It’s a fundamental difference (i.e. Bell).

*15min* Gerard talks about the Standard Model of Particle Physics. – Lots of people think that is all we need.

*16min* Gerard says the SM+QM does not feel right. It lacks a certain internal logic. Gerard thinks that the laws of QM are something of an optical illusion, ‘what is it actually that we are describing’.

*17min* Gerard does not want to change the equations of QM. He keeps the equations of QM. (Tom says this is at odds with most EmQM practitioners today).

*18-22min* Ian asks if EmQM is controversial. Gerard says yes, it’s controversial. Bell proved that it’s impossible to have a classical computer reproduce QM. But Gerard has looked at the small print and finds a way around the Bell theorem – long-range correlations. This correlation is the heart of QM and is not weird – but it needs a natural explanation.

*22min* Ian asks if this solves ‘spooky action at a distance’. Gerard says yes – these correlations can explain those peculiar correlations.

*23min* Ian says Knox calls Gerard’s plan ‘superdeterminism’.

*25min* Ian asks why we need to change QM if it works so well. Gerard says the positive outlook on QM as being exactly correct is the Many Worlds Interpretation. Gerard finds MWI ‘unsatisfactory’.

*26min* Ian points out that Gerard and EmQM are controversial.

*27min* Ian talks to Carlo Rovelli.

*28min* Carlo says we need to get used to QM – it will not be explained or overturned soon. The weakness of EmQMs is that they do not lead to ‘new ways of thinking’ (Tom says: what??). Then he talks about string theory and QM. We should just accept it as is.

*30min* Ian talks to Gerard about being comfortable with a theory like QM. Gerard says the present situation is bad, with the MWI multiverse. Gerard thinks that while this works, it’s ‘unsatisfactory’.

*31min* Gerard – the MWI shows that we are not there yet. We have not found the right description for our universe. All we have today are templates – that is our description, but it’s not what the universe actually is.

*34min* Carlo – his relational theory. Which is not MWI. Take QM seriously, relational QM takes QM at face value. The properties of objects are always measured with respect to something else. Velocity is the property of an object relative to something else.

*36min* Carlo starts talking about quantum gravity. We need to use relational QM to help us get to quantum gravity.

*37min* Science is a long sequence of us discovering that we were wrong. The world is different. If we end up agreeing on QM then this changes realism and philosophy – which Carlo thinks will be the case. QM is the final theory for him.

---

The proposed solution is that dark matter wakes up, turns into matter, and then self-repels / forms stars, etc. This means no cusp is found.

Note how in the most tenuous gas clouds (the cold ones – the hot tenuous galactic halo does not count, as it’s a supernova effect) the density is the exact same as the dark matter density.

From https://arxiv.org/pdf/1404.1938.pdf – note how dark matter is about 0.2 protons per cm^3 (the BR 13 measurement). One would think that in the disk of the Milky Way, this close to the galactic core, the DM density is about as large as it gets. Which seems right:

The Dark Matter Halo of the Milky Way, AD 2013 – https://arxiv.org/pdf/1304.5127v2.pdf

From wikipedia https://en.wikipedia.org/wiki/Interstellar_medium

Note how the lowest-density clouds are 0.2 – 0.5 protons/cm^3.
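The comparison is easy to make in one set of units. The local dark matter density estimate (~0.3 GeV/cm³) and the proton mass (0.938 GeV) are standard values I am supplying here; the 0.2–0.5 per cm³ cloud range is from the post:

```python
# Comparing the quoted densities in common units (proton masses per cm^3)
m_p_gev = 0.938            # proton rest mass-energy, GeV
dm_gev_per_cm3 = 0.3       # typical local dark matter density estimate, GeV/cm^3

dm_protons_per_cm3 = dm_gev_per_cm3 / m_p_gev   # ~0.3 protons/cm^3
cloud_range = (0.2, 0.5)                        # lowest-density clouds, protons/cm^3
print(f"DM: {dm_protons_per_cm3:.2f} protons/cm^3, clouds: {cloud_range}")
```

So the local DM density does indeed sit inside the quoted cloud range.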

**Why is this the same density?** Answer: dark matter has a maximum density; if the density gets higher, it lights up and turns into protons/electrons/H – which results in the WIM and WNM clouds. **Dark matter might be sleeping matter.**

“Evidence for an Additional Heat Source in the Warm Ionized Medium of Galaxies”, The Astrophysical Journal (2000) (cf. Rand 1998).

Dark Matter waking up might naturally result in WIM over WNM.

Also see https://gravityphysics.com/2013/10/20/how-to-make-dark-matter/

–Tom
