I have written an app that detects cosmic rays on your iOS device. It's called Cosmic Ray App, it's at http://cosmicrapapp.com, and it actually seems to work! Get it at the App Store.

Details on signal processing can be found here.

My opinions about physics in general.


“…and try to find your friend at the other end.” — Leonard Susskind

In this talk Leonard Susskind gives a convincing argument as to why he thinks that ER == EPR, where ER denotes an Einstein–Rosen bridge (aka wormhole) and EPR the Einstein–Podolsky–Rosen paper (essentially entanglement).

Susskind draws three entangled pairs of particles on the chalkboard (imagine not merely 3 but 3e40 of them), then collapses the left and right sets down to black holes. The entanglement must continue, and thus ER == EPR.

A couple of months ago I read Jim Baggott’s Farewell to Reality. I was very impressed. I won’t go into details, but the book makes the eminently reasonable suggestion that 11 dimensions, uncountable infinities of universes, and other mainstream theoretical physics subjects are “fairy tale physics”. Physics really needs people like Jim Baggott, Peter Woit, and Lee Smolin to show that the emperor has no clothes. But what if things are far worse than these authors report?

So I went looking for other writing critical of modern physics. Did I find it. I read two of Alexander Unzicker’s books: The Higgs Fake and Bankrupting Physics. They are a great read, whether you agree with him or not (caution – unintended hilarity). As if to underline the mindset of the physics community at large, after writing these two books Unzicker had trouble with arXiv, and he has several more stories about the negative reaction of this closely knit society to outside criticism. One fact about criticism is that people get most upset when it strikes close to the truth. Peter Woit’s criticism of Bankrupting Physics revolves around trying to classify Unzicker as ‘a garden-variety crank’ – which of course makes Woit’s job easy, as it automatically discounts everything Unzicker says (unless he is in agreement with Woit, of course). My take is simpler: Woit’s book and blog regularly complain that string theory and the multiverse are bunk, which in my opinion is something like 99.9999% likely to be true, while Unzicker’s assertions are ‘only’ 10–99.9999% likely to be true. Contrast that with the 50,000 papers on supersymmetry – each one of which is a 100% waste of time according to both Woit and Unzicker. Peter Woit can be wrong too. There *are* other areas of physics that smell as bad as string theory.

Physics is broken. Worse than we think.

The LIGO measurement is the greatest thing to happen in physics and astronomy for decades. Amazing work. It was about 50 years ago that the first gravitational wave detector was built by Weber. It took 50 years of refinement – many PhDs, postdocs, and full careers – but the LIGO team did it.

I will assume that you have already read the paper and other popular sources on this observation, so I will jump into what excites me about this observation:

How much energy? Three solar masses worth of gravitational waves were emitted over just a few tenths of a second. The paper reports a peak gravitational energy emission of 200 solar masses per second! See the paper for the errors on this estimate, but it's accurate to within 20%. The really amazing thing, though, is that this emission took place from a region only about 200 km across. The frequency of the waves at peak emission (from the paper, fig. 1, bottom row) is 120 Hz or so.

Let's look at that amount of energy in terms of another form of energy that we are more comfortable with – electromagnetic waves, i.e. light. I want to compare this to the “Schwinger limit”, which is the maximum electromagnetic field that can occur before quantum pair-creation effects take over. The Schwinger limit controls the maximum power that a region of space can transmit through itself (via opposing overlapping lasers, say).

Say we had standing radio waves at 120 Hz in a box 200 km on a side: how much power could such a region radiate if it were limited only by the Schwinger limit? (I.e., ignore the mechanism by which such spectacular amounts of energy could be turned into radio waves.)

The formula for the energy density of an electromagnetic wave is quite simple; see for instance this HyperPhysics page:

Total energy density = ε·E^{2}. So at the Schwinger limit of 1.3×10^{18} V/m, with the constant ε being 8.854187817620… × 10^{-12} F/m, we get 1.5×10^{25} J/m^{3}. The box is 200,000 metres per side, so there are 1.2×10^{41} J (joules) in a 200 km box at the Schwinger limit.
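As a sanity check, this arithmetic can be run directly. A quick Python sketch, using the same u = ε·E² convention and the constants quoted above:

```python
# Energy stored in a 200 km box filled with EM waves at the Schwinger limit.
eps0 = 8.854187817e-12      # vacuum permittivity, F/m
E_schwinger = 1.3e18        # Schwinger limit, V/m

u = eps0 * E_schwinger**2   # energy density, J/m^3
side = 200e3                # box side, m
energy_em = u * side**3     # total EM energy in the box, J

print(f"energy density:   {u:.2e} J/m^3")   # ~1.5e25
print(f"EM energy in box: {energy_em:.2e} J")  # ~1.2e41
```

The numbers reproduce the 1.5×10^{25} J/m³ and 1.2×10^{41} J figures above.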

How many joules of gravitational wave energy were held in a 200 km box around GW150914? At 200 solar masses per second emitted, we take the size of the box and use the light travel time to determine the amount of energy in the box at any one time. Light travel time is 200 km/(3×10^{8} m/s) = 6.7×10^{-4} seconds. So if that volume emits 200 solar masses of energy per second, then there are 0.13 solar masses worth of energy in that volume at any one time, or 2.3×10^{46} joules! This is some 5 orders of magnitude above what can be emitted by this same region using electromagnetic means!
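The gravitational side of the same estimate, in the same sketch style (solar mass and c are standard values; the 1.2×10^{41} J figure is the Schwinger-limited EM energy from above):

```python
# GW energy held in the 200 km box at peak emission, vs the EM maximum.
M_sun = 1.989e30            # solar mass, kg
c = 3.0e8                   # speed of light, m/s

power_gw = 200 * M_sun * c**2   # peak emission: 200 solar masses/s, in W
t_cross = 200e3 / c             # light crossing time of the box, s (~6.7e-4)
energy_gw = power_gw * t_cross  # GW energy in the box at any instant, J

energy_em = 1.2e41              # Schwinger-limited EM energy from above
print(f"GW energy in box: {energy_gw:.2e} J")    # ~2.4e46
print(f"ratio GW/EM:      {energy_gw/energy_em:.1e}")  # ~2e5, i.e. 5 orders of magnitude
```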

The mechanism by which one arrives at the Schwinger limit is conceptually simple – ‘QED non-linear photon–photon scattering’ involving electron–positron pair creation. (See the Wikipedia article for a start.)

Is there a corresponding quantum ‘Schwinger limit’ for gravitational waves (gravitons)? There is of course a limit in place due to classical general relativity, which is well known; in this case we are close to (gravitational strain *h* is about 0.001 or so?) the classical limit, which is basically that you can’t pile anything up so densely that a black hole forms. But is there a Feynman diagram for graviton–graviton scattering? Of course there is – it should behave like real classical gravity! I guess what I am wondering is: is there another pathway where graviton scattering would take place and, according to QM, make GW150914 ‘impossible’?

Does the observation of gravitational waves 5 orders of magnitude stronger than the strongest possible electromagnetic wave mean that we can finally stop calling gravity the weakest force? Yes to that!

My take, as anyone who reads this site will know, is that electromagnetism, quantum mechanics, and the nuclear forces are all emergent phenomena from classical general relativity (see my poster). To me this observation is another hint at what general relativity can do.

As a further note, this corresponds to 0.018 watts per square metre at the Earth's distance of 1.3 billion light years! That means the Earth had 2.3 terawatts of gravitational wave energy passing through it on Sept 14, 2015, from this one event alone. Yet this massive amount of power is barely within the observational limits of LIGO. LIGO sees only nice correlated bumps (with only 2 detectors it's not really built to look at the background of gravitational wave energy), so we could easily have this much energy passing through the Earth in the form of stochastic low frequency gravitational waves all the time, and LIGO would not be able to detect it.
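These flux figures also check out with a simple inverse-square estimate (the metres-per-light-year and Earth-radius constants are standard values, not from the text):

```python
import math

# Flux at Earth from GW150914's peak emission, and total power through Earth.
M_sun, c = 1.989e30, 3.0e8
ly = 9.461e15                           # metres per light year
power_gw = 200 * M_sun * c**2           # peak emitted power, W
r = 1.3e9 * ly                          # distance to GW150914, m

flux = power_gw / (4 * math.pi * r**2)  # W/m^2 at Earth
R_earth = 6.371e6                       # Earth radius, m
power_earth = flux * math.pi * R_earth**2  # power intercepted by Earth's disc, W

print(f"flux at Earth:       {flux:.3f} W/m^2")  # ~0.019
print(f"power through Earth: {power_earth:.1e} W")  # ~2.4e12, i.e. terawatts
```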

Gravitational waves make the perfect sub-quantum excitation – they can carry very large amounts of energy without anything but a carefully designed detector being able to pick them up.

Other than the actual LIGO observatory of course (which I argue below may not be the ideal gravitational wave detector).

A nice isolated black hole, maximally spinning at near a = 1 and of approximately the same mass as the GW150914 emitter, would exchange a substantial amount of the incoming wave energy into motion – it would pick up something like 0.2 GW of power for a fraction of a second. That would likely be observable: since this hypothetical black hole is sitting so quietly, a GJ-scale energy exchange would cause small (since the thing is so heavy) but measurable effects.

Say we don’t have a nearby system of quiet black holes to listen to (we would need varying sizes to couple to the frequencies we wish to monitor). What else could we build? The ideas open up if one assumes that matter and light are both gravitational phenomena. What would be ideal is something that mimics a tuned superradiant-like interaction with gravitational waves, but is trillions of times lighter and made of ‘ordinary matter’. What makes superradiance work?

“What happened is that because this Rydberg atom stayed very high excited, but up there the energy levels are very-very close together. What does that mean? The transitions have very long wavelengths. So basically every sample that you can have is very small compared to these long wavelengths. And so superradiance is actually quite likely in these cases. And this is actually exactly what happened. As I said, it was an accident, I don’t think it could have been done such an ideal experiment on purpose in this case.”

Weak or strong, the cosmic censorship conjecture states that naked singularities can’t be seen; otherwise everything would break down, it would be really bad, and worst of all theorists would be confused.

But it turns out that singularities very likely don’t actually exist in a real universe governed by GR. Any lumpy, non-symmetric spacetime can have all the spinning black holes it wants – at any angular momentum, even with **a > m** (angular momentum greater than the mass in suitable units) – as the Kerr solution + bumps (bumps being incoming full-bandwidth GR noise) will have no paths leading to any singularity! So the *curtain can be lifted*; the horizon is *not* needed to protect us.

In any sufficiently complex solution of GR, there exist no singularities. I am not talking about naked singularities here; I mean any and all singularities.

The complex nature of the interactions of GR at the tiny scales where the singularity would start to form stops that very formation. In other words, the singularity fails to form because the infalling energy always has some angular momentum in a random direction, which ruins the formation of the singularity.

In all likelihood actual physical spinning black holes in a turbulent environment (normal space) will have no singularity.

I will let Brandon Carter speak now:

“Thus we reach the conclusion that a timelike or null geodesic or orbit cannot reach the singularity under any circumstances except in the case where it is confined to the equator, cos θ = 0. … Thus as the symmetry is progressively reduced, starting from the Schwarzschild solution, the extent of the class of geodesics reaching the singularity is steadily reduced likewise, … which suggests that after further reduction in symmetry, incomplete geodesics may cease to exist altogether.”

Kerr Fields, Brandon Carter 1968.

Not cosmic censorship, but almost the opposite – singularities can’t exist in a GR universe (one with bumps) because there are no paths to them.

We have all been taught that singularities form quickly – that when a non-spherical mass is collapsing, GR quickly smooths the collapse, generating a singularity neatly behind a horizon. That notion is correct as far as it goes, but what it fails to take into account is that in a real situation there is always more infalling energy, and that new infalling energy messes up the formation of the singularity.

While there may be solutions to Einstein’s equations that show a singularity (naked or not), these solutions are unphysical, in that the real universe is bumpy and lumpy. So while the equations hold ‘far’ away from the singularity, the detailed gravity in the high-curvature region keeps it just that – high curvature, as opposed to a singularity.

The papers of A.Burinskii come to mind, e.g.:

Kerr Geometry as Space-Time Structure of the Dirac Electron

I am willing to bet that this conjecture is experimentally sound, in that there are no experiments that have been done to refute it. (that’s a joke I think).

On the theory side, one would have to prove that a singularity is stable against perturbation by incoming energy, which from my viewpoint seems unlikely, as the forming singularity would have diverging fields and diverging response to incoming energy, which would blow it apart. Like waves in the ocean that converge on a rocky point.

–Tom

I won’t divulge the recipe until later; let’s start with the most un-dark matter we can find – CERN’s protons.

CERN has proton–proton collisions going on at 7 TeV. There are collisions that generate up to a few TeV of photons.

Let's look at that from the viewpoint of classical physics, with some general relativity added in the right place.

We have a few TeV of photons, generated in an extremely short period of time. We have two protons approaching and hitting basically head on (to get 2 TeV of gammas). They are travelling at essentially c. So that’s an interaction time of 2 fm/(3×10^{8} m/s) ≈ 7×10^{-24} seconds.

So what happens gravitationally?

I recently read a paper, Monopole gravitational waves from relativistic fireballs driving gamma-ray bursts by Kutschera (http://arxiv.org/abs/astro-ph/0309448), that talks about this effect for, well, exploding stars.

We have in a small area a mass-energy of 7 TeV, of which about half leaves via gammas; the rest is in ‘slower’ particles like Higgs bosons, etc. This drop in mass results in a monopole gravitational wave. How big?

The force of gravity is usually determined by the masses of the objects involved. But gravity is a local phenomenon (Einstein’s vision, not Newton’s), and the field is actually the gradient of the potential.

So an observer near the collision sees the potential change from 7 TeV to 5 TeV as 2 TeV of gammas go whizzing by in a time span of 10^{-24} seconds. Let's take the observer to be just outside the interaction area, say 10 fm away.

The gradient of the potential changes as the mass changes, which means it's time dependent. We need the gradient.

Look at the gravitational potential of the observer before and after the wave passes.

Before, the potential is G(7 TeV)/10 fm; after, it is G(5 TeV)/10 fm. So that's a potential difference of G(2 TeV)/10 fm acting over a time of 10^{-24} seconds, which gives the gradient (some math) in SI units. The observer is a proton 10 fm away.

I get 8.1×10^{-20} watts – i.e. the observer proton sees its energy rise at a rate of ~10^{-19} watts for 10^{-24} seconds. It gets a boost in the direction away from the interaction, which raises its energy by a mere 5×10^{-25} eV.

Not much. But what I think is missing is that this sort of effect has to be looked at on a much smaller scale, and repeating: this monopole gravitational energy comes in, then bounces back out. The proton is thus an engine doing this coherently at 10^{40} Hz or more, which makes other protons/electrons feel a force (they are bouncing this gravitational monopole radiation back and forth too) of the same size as the Coulomb force. So this is the Coulomb force: electromagnetism as a phenomenon of general relativity. If you re-do the math with a period of 10^{-47} or so seconds, you start to see Coulomb-level forces at play. (Taking away accelerator energies ‘only’ adds a few zeros to the huge frequency requirement for mass exchange.)

The Coulomb force rides above this – it's a meta field on top of this gravitationally built monopole system.

I think that electrons do this in a native, compact manner, likely using topology, while protons employ a complicated-ish ‘engine’ built of springs and struts made of GR that produces the same force as an electron. The strength of this force is determined by a feedback mechanism balancing against that of the electrons.

Could dark matter be unlit (inactive/relaxed) protons? In other words, protons that are not near an electron, and thus stop vibrating and being charged particles. No nearby electron means no feedback, means no charge. So perhaps looking for dark matter using a dense-matter system like a block of germanium is bound to fail. We need to look using some sort of empty-space experiment that gets to the vacuum conditions of interstellar space (as we know dark matter exists on an interstellar scale).

An experiment might be to create a very hard vacuum starting with a hydrogen plasma, then, as you pump down, look for some indication that the charge of the remaining protons and electrons in the gas has gone down. You might look at the response of the protons/electrons left in the chamber to photons – there will be less scattering as you pump down, but if the scattering falls off a cliff faster than your pumping rate, you have made dark matter.

At what distance might this effect happen? In other words, how far apart do electrons and protons have to be before the charge effect starts to stall? I am not talking about the range of photons – that's infinite – but about the range of this effect: where will protons start to lose the signal from electrons and calm down? 1 m? 1 micron? What is the density of gas in quiet parts of the galaxy? Intergalactic space is 1 atom/m^{3}; I would say 10^{6} times this level is likely for some wastelands in the Milky Way (we need dark matter in the Milky Way to get our velocity curves right!). So that's about 1 per cm^{3}.

What’s the best vacuum you can make?

Ultra-high vacuum chambers, common in chemistry, physics, and engineering, operate below one trillionth (10^{−12}) of atmospheric pressure (100 nPa), and can reach around 100 particles/cm^{3}.

That’s about the right density. So has anyone ever measured laser scattering in such a chamber as a function of pressure? Corrected for pressure, we would get a horizontal line in a suitable graph. Boring stuff, it would seem, so likely not measured. The mean free path is 40km in these chambers.
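The mean free path figure can be cross-checked with the standard kinetic-theory formula λ = kT/(√2·π·d²·P). The molecular diameter below is my assumption (a typical N₂-like value), not from the text, so this only confirms the order of magnitude:

```python
import math

# Kinetic-theory mean free path at UHV pressure (100 nPa).
k = 1.381e-23       # Boltzmann constant, J/K
T = 300.0           # room temperature, K
P = 1e-7            # pressure, Pa (100 nPa, the UHV figure quoted above)
d = 3.7e-10         # effective molecular diameter, m (assumed, N2-like)

mfp = k * T / (math.sqrt(2) * math.pi * d**2 * P)
print(f"mean free path: {mfp/1e3:.0f} km")  # tens of km, same ballpark as the 40 km quoted
```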

**Some problems solved by this ‘dark matter is matter gone dark’ hypothesis:**

1) The early universe. It has been determined that the early universe must have had a mass much larger than the observed mass today. This is solved with dark matter, but that dark matter would have had to take part in things. If it were instead all just regular matter, there is no problem.

2) Early universe clumpiness: it's been really hard to come up with galaxies born so quickly, yet they can be seen with telescopes. With all the matter in the early universe taking part, clumps are easier to make.

3) The lack of dark matter peaks at galactic cores. This one stumps the experts – physicists were sure that dark matter would accumulate at galactic cores, but it does not. If matter lights up as it moves close to the core, then the radiation given off by this newly lit matter would keep things expanded; furthermore, it is seen at the core, and so does not count as being dark. (http://www.cfa.harvard.edu/news/2011-29)

**Early universe CMB**

This is the way things are thought to work.

If all the matter were lit, then the He4/Li levels would not be what is observed. ==> Some kind of non-interacting matter was needed.

The CMB is too smooth. Dark matter is needed to make galaxies:

Dark matter condenses at early epoch and forms potential wells, the baryonic matter flows into these wells and forms galaxies (White & Rees 1978). (Ref: http://ned.ipac.caltech.edu/level5/Sept09/Einasto/Einasto4.html)

The previous posts have not mentioned quantum effects at all. That's the point – we are building physics from general relativity, so QM must be a consequence of the theory, right?

Here are some thoughts:

QM seems not to like even *special* relativity much at all. It is a Newtonian-worldview theory that has been modified to work with special relativity for the most part, and with general relativity not at all.

There are obvious holes in QM – the most glaring of which is the perfectly linear and infinitely expandable wave function. Steven Weinberg has posted a paper about a class of QM theories that solve this problem. In essence, the solution is to say that the state vector degrades over time, so that hugely complex, timeless state vectors actually self-collapse due to some mechanism. (Please read his version for his views, as these comments are from my point of view.)

If one were to look for a more physical model of QM, something along the lines of Bohm's hidden variables, then what would we need?

**Some sort of varying field that supplies ‘randomness’:**

- This is courtesy of the monopole field discussed in previous posts about the proton and the electron.

**Some sort of reason for the electron to not spiral into the proton:**

- Think de Broglie waves – a ‘macroscopic’ (in comparison to the monopole field) wave interaction. Still, these ‘matter waves’ are closely tied to the waves that control the electromagnetic field.
- Put another way – there is room for many forces in the GR framework, since dissimilar forces ignore each other for the most part.
- Another way of thinking about multidimensional information waves (Hilbert spaces of millions of dimensions, for example) is to note that as long as there is a reasonable mechanism for keeping these information channels separate, then there is a way to do it all with a meta field – GR.

**Quantum field theory:**

- This monopole field is calculable and finite, unlike the quantum field theories of today, which are off by a factor of 10^{100} when trying to calculate energy densities, etc.

History has shown us that all physical theories eventually fail. The failure is always a complete failure in terms of some abstract perfectionist viewpoint, but in reality it only amounts to small corrections. Take for instance gravity. Newton's theory is absurd – gravity travels instantly, etc. But it is also simple and powerful, its predictions working well enough to put people on the Moon.

Quantum Mechanics, it would seem, has a lot of physicists claiming that ‘this time is different’ – that QM is ‘right’. Nature does play dice. There are certain details of it yet to be worked out, like how to apply it to fully generalized curvy spacetimes, etc.

Let's look at what would happen if it were wrong. Or rather, let's look at one way that it could be wrong.

QM predicts that there is a chance of every event happening. I mean this in the following way: there is a certain probability for an electron (say) to penetrate some sort of barrier (quantum tunneling). As the barrier is made higher and/or wider, the probability of tunneling goes down according to a well defined formula (see for example this Wikipedia article). Now, the formulas for the tunneling probability do not ‘top out’ – there is a really, really tiny chance that even a slowly moving electron could make it through a concrete wall. What if this is wrong? What if there is a limit to the size of the barrier? Or, put another way, what if there is a limit to probability? Another way to look at this is to say that there is an upper limit on the half-life of a compound. Of course, just as Newton's theory holds extremely well for most physics, it may be hard to notice that there is not an unlimited amount of ‘quantum wiggle’ to ‘push’ particles through extremely high barriers.
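To see just how tiny that "really, really tiny chance" is, here is the textbook WKB estimate T ≈ exp(−2κL) with κ = √(2m(V−E))/ħ. The barrier numbers (1 eV high, 1 mm thick – a very thin "wall", not concrete) are toy values of my choosing:

```python
import math

# WKB tunnelling estimate for a slow electron through a macroscopic barrier.
hbar = 1.055e-34        # reduced Planck constant, J*s
m_e = 9.109e-31         # electron mass, kg
eV = 1.602e-19          # one electron-volt in joules

V_minus_E = 1.0 * eV    # barrier height above the electron's energy (toy value)
L = 1e-3                # barrier width: 1 mm (toy value)

kappa = math.sqrt(2 * m_e * V_minus_E) / hbar
# exp(-2*kappa*L) underflows to zero in floating point, so report log10 instead.
log10_T = -2 * kappa * L / math.log(10)
print(f"log10(tunnelling probability) ≈ {log10_T:.2e}")  # about -4.5e6
```

A probability of roughly 10^(−4,500,000): formally nonzero, but so small that no experiment could ever distinguish it from a hard cutoff – which is exactly why a "limit to probability" would be hard to notice.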

Steven Weinberg has posted a paper about a class of theories that try to solve the measurement problem in QM by having QM fail. (It fails a little at a time, so we need big messy physics to have the wave collapse). I agree fully with his idea – that we have to modify QM to solve the measurement problem.