Archives For particle physics

Yves Couder’s (and others’) experiments with small (in the human sense) and absolutely huge (in the quantum sense) silicone oil droplets and baths have proven to be a wonderful analog for quantum mechanics.

Many researchers think that these experiments show something much more – they hint at what the microscopic quantum world is really like. The quantum-like effects occur when the driving force and frequency of the system are carefully tuned. When the conditions are right, the drops interact with their own waves – long after the waves have been emitted. Couder calls this behaviour the ‘high memory regime’ – it’s where all the quantum-like behaviour emerges.

So the question becomes – what is the memory of a real quantum system? The answer is surprisingly simple: it’s infinite. Quantum states can entangle and ‘live’ forever. This fact is the foundation of quantum computing, the Many Worlds interpretation and many other absurdities (Schrödinger’s cat…). Indeed, the only point in QM where memory is not complete and infinite is at the point of measurement. But measurement is in the eye of the beholder, and so we need not worry about the measurement problem here. Or rather, we will attempt to solve the measurement problem with a new hypothesis – that the memory of real quantum systems is limited, and that this limit is responsible for the collapse of the wave function.

This could of course kill or seriously limit the reach of quantum computing, would provide a quick end to the Many Worlds interpretation, and has many other consequences for quantum mechanics. Indeed, Hilbert space itself would lose its ‘reality’ – becoming nothing more than a mathematical trick for ‘memory intact’ (a.k.a. pre-collapse) states.

What is the form of the memory? In Couder’s experiments it’s simply the range of an emitted wave in metres. Since his test trays are small, the waves can bounce off the walls and interact with the emitter again.

We can look at such a system as a particle in a well. In Couder’s experiments you can see excited states decay after a time, and this time increases as the memory of the system is increased.

So if we look at the simplest physical analog of this – a particle in a well that can quantum tunnel out – we have alpha emission. These particles are trapped in the nucleus, but sooner or later they tunnel out.

Thus tunnelling is a collapse of the wave function – these alpha particles leave fossil traces in rocks for instance, so they have been emitted in a very real sense.

Of course the pure QM follower will tell you that each emitted alpha is just another cat in a box – and that the entire history of the world hinges on you (or is that any smart person?) looking at the actual billion-year-old track – only then does the linear superposition of countless millions of state vectors collapse. Kind of hilarious, but that is what a truly linear system will do to you if you push it!

What causes the emission? The wave function has presence inside and outside of the barrier, so it can ‘feel’ that there is a lower energy state out there waiting for it. In a real pilot wave sense the pilot wave extends into the region beyond the barrier. We have a series of waves inside a sphere a femtometre or so across, and they bounce around for a few years, or 10^24 years, or 10^-6 seconds, depending on the isotope.

So there is a large variation of lifetimes – yet the playground is almost the same size; it’s the energy levels that are different, and only by a small factor. The greater the fraction of the wave function outside the nucleus, the shorter the half-life.

What really happens? Is it that the particle stays inside the nucleus, and as soon as it randomly happens to walk out it is released? In ‘real QM’ the wave function only gives a probability for finding the alpha outside the nucleus, so in some sense it’s ‘constantly’ out there. But in a realist theory the alpha has a real velocity inside and around the nucleus. This could perhaps be a real difference – perhaps if we postulate a fixed speed of the alpha on a random walk through the probability field, we can connect the lifetime to the percentage of the wave function that is outside the nucleus. See

Unpredictable Tunneling of a Classical Wave-Particle Association

So if a certain percentage of paths lies outside, do the calculation: a random walk, with a step length much smaller than the nucleus size and speed v, gives a typical time to get out. Perhaps, with the speed held constant, we can determine the implied step length by looking at the size of the region of probability outside the nucleus. Someone must have done this? A rough numerical sketch follows the link below.

http://demonstrations.wolfram.com/GamowModelForAlphaDecayTheGeigerNuttallLaw/
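
A minimal sketch of the kind of estimate asked for above – a random walk inside the nucleus with an escape probability per wall encounter. Every number here (radius, step length, speed, escape probability) is an assumed, illustrative value, not derived from any real nucleus:

# Back-of-envelope random-walk escape time for an alpha trapped in a nucleus.
# All numbers below are illustrative assumptions, not fitted values.

R     = 5e-15      # nuclear radius, m (assumed)
step  = 1e-16      # random-walk step length, m (assumed, << R)
v     = 1e7        # assumed alpha speed inside the nucleus, m/s
p_out = 1e-30      # assumed fraction of the wave function outside the barrier

steps_per_wall_hit = (R / step) ** 2     # diffusive scaling: ~(R/step)^2 steps to cross the well
time_per_step      = step / v            # seconds per step
wall_hits_needed   = 1.0 / p_out         # expected attempts before one escape

lifetime = wall_hits_needed * steps_per_wall_hit * time_per_step
print(f"estimated escape time ~ {lifetime:.1e} s")   # ~2.5e10 s (~800 years) for these made-up inputs

The point of the sketch is only that the lifetime scales directly with 1/p_out, the fraction of the wave function outside the barrier, which is the connection suggested above.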

So in the playtime, circa-1900 flat spacetime where QM currently works, there are no non-local effects and QM makes sense. This is why most theorists like the quantization-of-gravity program – it would bury the annoying real 4D version of spacetime underneath many levels of obscure mathematics.

How to make Dark Matter

October 20, 2013

I won’t divulge the recipe until later; let’s start with the most un-dark matter we can find – CERN’s protons.

CERN has proton–proton collisions going on at 7 TeV. There are collisions that generate up to a few TeV of photons.

Let’s look at that from the viewpoint of classical physics, with some General Relativity added in the right place.

We have a few TeV of photons, generated in an extremely short period of time. We have two protons approaching and hitting (basically head on, to get 2 TeV of gammas). They are travelling at essentially c. So that’s an interaction time of roughly 2 fm / (3×10^8 m/s), a few times 10^-24 seconds.

So what happens gravitationally?

I have recently read a paper, Monopole gravitational waves from relativistic fireballs driving gamma-ray bursts by Kutschera (http://arxiv.org/abs/astro-ph/0309448), that talks about this effect for, well, exploding stars.

We have, in a small region, a mass-energy of 7 TeV, of which about half leaves via gammas; the rest is in ‘slower’ particles like Higgs bosons, etc. This drop in mass results in a monopole gravitational wave. How big?

The force of gravity is usually determined by the masses of the objects involved. But gravity is a local phenomenon (Einstein’s vision, not Newton’s), and the field is actually the gradient of the potential.

So we have a potential change from 7 TeV to 5 TeV as seen by an observer near the collision, as 2 TeV of gammas go whizzing by in a time span of 10^-24 s. Let’s take the observer to be just outside the interaction area, say 10 fm away.

The gradient of the potential changes as the mass changes, which means it’s time dependent. We need the gradient.

Look at the Gravitational potential  of the observer before and after the wave passes.

Before, the potential is G·(7 TeV)/10 fm, and after we have G·(5 TeV)/10 fm. So that’s a potential difference of G·(2 TeV)/10 fm acting over a time of 10^-24 seconds, which gives the gradient (some math) in SI units. The observer is a proton 10 fm away.

I get 8.1×10^-20 W – i.e. the observer proton sees its energy rise at a rate of roughly 10^-19 W for 10^-24 seconds. It gets a boost in the direction away from the interaction, which raises its energy by a mere 5×10^-25 eV.
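
A back-of-envelope sketch of the numbers above. The geometry and the effective mass-energy assigned to the observer are assumptions (here the nearby ‘observer proton’ is assigned roughly 2 TeV of mass-energy rather than its rest mass), so treat the output – which lands near the figures quoted above – as order-of-magnitude only:

# Order-of-magnitude sketch of the monopole gravitational 'kick' described above.
# Assumptions: 2 TeV of mass-energy leaves the collision region in ~1e-24 s,
# and the observer 10 fm away is assigned ~2 TeV of mass-energy (an assumption).

G   = 6.674e-11          # m^3 kg^-1 s^-2
c   = 3.0e8              # m/s
eV  = 1.602e-19          # J
TeV = 1e12 * eV          # J

M_lost     = 2 * TeV / c**2      # kg, mass-energy carried away by the gammas
m_observer = 2 * TeV / c**2      # kg, assumed effective mass-energy of the observer
r          = 10e-15              # m, observer distance
dt         = 1e-24               # s, emission time

dPhi  = G * M_lost / r           # change in gravitational potential, J/kg
dE    = m_observer * dPhi        # change in the observer's potential energy, J
power = dE / dt                  # W

print(f"potential change {dPhi:.2e} J/kg")
print(f"energy kick      {dE / eV:.2e} eV")
print(f"power            {power:.2e} W")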

Not much. But what I think is missing is that this sort of effect has to be looked at on a much smaller scale, and repeating: this monopole gravitational energy comes in, then bounces back out. The proton is thus an engine doing this coherently at 10^40 Hz or more, which makes other protons/electrons feel a force (they are bouncing this gravitational monopole radiation back and forth too) of the same size as the Coulomb force. So this is the Coulomb force: electromagnetism as a phenomenon of General Relativity. If you re-do the math with 10^-47 or so seconds as the period, then you start to see Coulomb-level forces at play. (Taking away accelerator energies ‘only’ adds a few zeros to the huge frequency requirement for mass exchange.)

The Coulomb force rides above this – it’s a meta-field on top of this gravitationally built monopole system.

I think that electrons do this in a native, compact manner, likely using topology, while protons employ a complicated-ish ‘engine’ built of springs and struts made of GR that produces the same force as an electron. The strength of this force is determined by a feedback mechanism to balance that of the electrons.

Could dark matter be unlit (inactive/relaxed) protons? In other words, protons that are not near an electron, and thus stop vibrating and being charged particles. No nearby electron means no feedback, which means no charge. So perhaps looking for dark matter using a dense matter system like a block of germanium is bound to fail. We need to look using some sort of empty-space experiment that gets to the vacuum conditions of interstellar space (as we know dark matter exists on an interstellar scale).

An experiment might be to create a very hard vacuum starting with a hydrogen plasma, then as you pump down, look for some sort of indication that the charge of the remaining protons and electrons in the gas has gone down. You might look at the response of the p/e left in the chamber to photons – there will be less scattering as you pump down, but if the scattering falls off a cliff faster than your pumping rate you have made dark matter.

What is the distance at which this effect might happen? In other words, how far apart do electrons and protons have to be before the charge effect starts to stall? I am not talking about the range of photons – that’s infinite – but about the range of this effect: where will protons start to lose the signal from electrons, and calm down? 1 m? 1 micron? What is the density of gas in quiet parts of the galaxy? Intergalactic space is about 1 atom/m³; I would say 10^6 times this level is likely for some wastelands in the Milky Way (we need dark matter in the Milky Way to get our velocity curves right!). So that’s 1 per cm³.

What’s the best vacuum you can make?

Ultra-high vacuum chambers, common in chemistry, physics, and engineering, operate below one trillionth (10^-12) of atmospheric pressure (100 nPa), and can reach around 100 particles/cm³.

That’s about the right density. So has anyone ever measured laser scattering in such a chamber as a function of pressure? Corrected for pressure, we would get a horizontal line in a suitable graph. Boring stuff, it would seem, so likely not measured. The mean free path is 40km in these chambers.

Some problems solved by this ‘dark matter is matter gone dark’ hypothesis:

1) The early universe. It has been determined that the early universe must have had a mass much larger than the mass observed today. This is solved with dark matter, but that dark matter would have had to take part in things. If it were instead all just regular matter, there is no problem.

2) Early universe clumpiness: it’s been really hard to come up with galaxies born so quickly, yet they can be seen with telescopes. With all the matter in the early universe taking part, clumps are easier to make.

3) The lack of dark matter peaks at galactic cores. This one stumps the experts – physicists were sure that dark matter would accumulate at galactic cores, but it does not. If matter lights up as it moves close to the core, then the radiation given off by this newly lit matter would keep things expanded; furthermore, it is seen at the core, and so does not count as being dark. (http://www.cfa.harvard.edu/news/2011-29)

Early universe CMB

This is the way things are thought to work.

If all the matter were lit, then the He4/Li levels would not be what is observed. ==> Some kind of non-interacting matter was needed.

The CMB is too smooth. Dark matter is needed to make galaxies:

Dark matter condenses at early epoch and forms potential wells, the baryonic matter flows into these wells and forms galaxies (White & Rees 1978). (Ref: http://ned.ipac.caltech.edu/level5/Sept09/Einasto/Einasto4.html)

Building an electromagnetism-like spin 1 field out of gravity can’t be done, it would seem, since gravity is spin 2.

Well, electromagnetism is spin 1, but we have tech gadgets and a billion transistors on one chip.

So can one construct a machine that behaves like a dipole?

Take a canonical dipole: two radio antennas, both vertical, one transmitting, the other receiving. The question then is, can we make a mass (or more likely a Rube Goldberg system of masses) bob up and down by the action of another mass-system moving at some distance away? If we can, then we have constructed a ‘spin one’ field from gravity, in much the same way that one can build something that is more than its parts.

The underlying field would of course be spin 2, but the field inferred from the motions of our mass systems would look like a fully geometric, covariant spin 1 field.

Contraptions and questions come to mind right away. How do normal gravitational waves radiate as the eccentricity of an orbit approaches 1? What about a similar structure, but with, say, a small particle orbiting a slender rod along its long axis? We are not looking for stable orbits here at all – just a mechanism to transfer a dipole motion across empty space to another construction of masses.

It seems more than possible that such an arrangement exists.

 

 

Take this size of an electron as the ‘black hole’ size. That is about 10^-55 m, I think. Then for a solid we have about 3.35×10^28 molecules of water per cubic metre (H2O), so 7×10^29 electrons/m³ – say 10^30 electrons per cubic metre. With a conversion constant of 1.48494276×10^-27 m/kg for the Schwarzschild radius of an object measured in kg, and an electron mass of 9.10938291(40)×10^-31 kg, we get a diameter of 2.6×10^-58 m, so the cross-section is 7×10^-116 m², and the total covered area per cubic metre of water is about 10^-85 m²/m.

So what is the neutrino cross-section?

Say neutrinos only interact with electrons when they hit the actual black hole part. Also assume that neutrinos are much smaller than electrons.

How many metres of water would a ray penetrate before hitting an electron within its black-hole radius?

That 10^-85 m² per metre works out to a coverage of one part in 10^85, so 10^85 metres of water would ensure a hit. This is vastly larger than the real distance, which is only a few light years, 10^17 m or so.

So I guess that this idea is very wrong on some counts.

If you use the radius of the electron as a Kerr naked ring singularity, you get 10^-37 metres, or 1.616199×10^-35 metres, i.e. the Planck length.

Then with these Planck-length-sized electrons, you get a cross-section of about 10^-70 m² – which is about 10^-40 m²/metre of water, still not enough, but closer.
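
A quick numeric sketch of the two geometric estimates above (electron taken first at its Schwarzschild radius, then at the Planck length). The electron density of water is the value used above; everything else comes from standard constants, and the exact exponent depends on whether radius or diameter is used, so read it as order-of-magnitude only:

import math

# Geometric "hit an electron" estimate for two assumed electron sizes.
G, c = 6.674e-11, 3.0e8
m_e  = 9.109e-31      # kg
n_e  = 1e30           # electrons per m^3 of water (value used in the text)

def mean_free_path(radius_m):
    sigma = math.pi * radius_m**2     # geometric cross-section, m^2
    return 1.0 / (n_e * sigma)        # metres of water before a "hit"

r_schwarzschild = 2 * G * m_e / c**2  # ~1.3e-57 m
r_planck        = 1.616e-35           # Planck length, m

print(f"Schwarzschild-size electron: {mean_free_path(r_schwarzschild):.1e} m")
print(f"Planck-size electron:        {mean_free_path(r_planck):.1e} m")

Either way the implied penetration depth is enormously longer than the few light years observed for real neutrinos, which is the problem noted above.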

 

Funny how the Kerr radius of an electron-mass naked singularity is the Planck length.

 

Trying a Compton size of 2×10^-12 m instead of 2×10^-56 m makes the …

 

 

Electrons exist as small black-hole-like things which turn on and off at huge frequencies, and Birkhoff’s theorem is used to create electrostatics (indeed electrodynamics) using nothing but monopole gravitational waves (see the previous post).

So there exists a field of vibrating humps of gravitational potential (a.k.a dark energy or dark matter?) that fills space. It is at rest in the universe, and forms a frame of reference – not really an ether, as relativity still works fine. More like the cosmic microwave background.

Protons are different
So electrons repulse each other. How do protons work? They are massive – 2000 times heavier – and have a known size of about a femtometre (10^-15 m).

So given this hilly landscape of varying potential, is there any other way to get purchase? In other words how do you do what an electron does given that huge radius and 2000 times the mass?

The frequency of the field can be approximated in the following way:

Involve the two constants G and Q. You get a frequency along the lines of 10^65 Hz

for two electrons separated by d:

\frac{m_e^2 G}{2d^2}K = \frac{Q^2}{4\pi\epsilon_0 d^2},

where K – the ratio of the electric to the gravitational force between two electrons – is about 8.3×10^42. K is unitless. Suppose K is actually ωr/c, where r is some nuclear radius. With a radius r of about a femtometre, we get a frequency of 10^65 Hz. Another way to think of this is that the light travel time across a black hole the mass of an electron is also about 10^-65 seconds.

This huge frequency implies a tiny wavelength of about 10^-57 metres. So across the diameter of a proton we have some 10^42 waves. There are an incredible number of these waves boiling inside the proton.
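
A numerical sketch of the frequency estimate above, using the relation just quoted (K = ωr/c); the nuclear-scale radius r ≈ 1 fm is the assumed input:

import math

# Frequency estimate: K is the factor that brings the gravitational force
# between two electrons up to the Coulomb force, and K = omega * r / c
# with r an assumed nuclear-scale radius (~1 fm).
G    = 6.674e-11
c    = 3.0e8
eps0 = 8.854e-12
q    = 1.602e-19       # C
m_e  = 9.109e-31       # kg
r    = 1e-15           # m (assumed)

K          = (q**2 / (4 * math.pi * eps0)) / (G * m_e**2 / 2)   # ~8.3e42, dimensionless
omega      = K * c / r                                          # rad/s
freq       = omega / (2 * math.pi)                              # Hz
wavelength = c / freq                                           # m

print(f"K          ~ {K:.2e}")
print(f"frequency  ~ {freq:.2e} Hz")
print(f"wavelength ~ {wavelength:.2e} m")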

The proton needs to ‘latch’ onto these waves with the same force as an electron, but it does so in a completely different manner – it uses not a disappearing act, but some mechanism that keeps the mass elements of the proton preferentially in the wells. This has the same effect as the electron’s disappearing act, but is much harder to achieve, and thus requires 2000 times the mass. In fact the proton only has to do things 1/2000 as well as an electron per unit mass – so the effect can be quite weak (e.g. hit 2001 times and miss 2000 times).

So the proton uses a factory technique, where all the parts (how many?) move around so as to be in the right place at the right time, slightly more often than not.

Why is the charge so balanced then? A question for another day.

Thought experiment, that is…

Take a gravitational well created by any object. Simple Schwarzschild solution. There is a test particle at some distance r away from the source.

Now imagine that the source disappears. Really just ‘goes away’ – violating the conservation of stuff. (The source mass could of course be going away temporarily, quantum-style, or could be using a wormhole device to disappear – I’m not concerned here with how or why this would happen.)

The source disappears over a short time. (This would create a monopole gravitational wave).

There are two potential energies for the test mass – the potential energy when it’s in the well, and the potential energy when the well is gone. The difference is of course just G·Ms·Mt/r. During the disappearance of Ms (the source mass), the total energy of the test particle would remain the same, so the kinetic energy of the test particle would rise as the PE tended to zero.

So that’s ½·Mt·V² = G·Ms·Mt/r

V = sqrt(2·G·Ms/r) – the escape velocity – which makes perfect sense. (The boost would be towards the place where Ms was, but everything here happens over such a short period of time that the test particle never gets to move much.)
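
For illustration, here is the escape-velocity formula above evaluated for an assumed electron-mass source at an assumed nuclear-scale separation (both inputs are made-up illustrative values):

import math

# v = sqrt(2*G*Ms/r): the velocity boost the test particle picks up
# in one disappearance cycle, per the argument above.
G  = 6.674e-11
Ms = 9.109e-31    # kg, assume the source is an electron-mass object
r  = 1e-15        # m, assumed separation

v_escape = math.sqrt(2 * G * Ms / r)
print(f"per-cycle velocity boost ~ {v_escape:.1e} m/s")   # ~3e-13 m/s: tiny per cycle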

So now, imagine that the source mass (Ms) appears again. If you left everything else alone, the test particle would of course slow back down and again be parked stationary in the potential well.

So let’s change that. Say, in this world of disappearing masses, that now, in an act of symmetry, the test particle has taken its turn and ‘gone away’ during the re-inflation of Ms. So now you have Ms back, and the test particle magically re-appears in the well. Let’s not worry about the energy needed to get back into the potential at this point.

Of course, now we are back at the initial conditions, and we repeat:

  • Ms – disappears.
  • Mt has a KE boost of the escape velocity.
  • So Mt is getting a KE boost of the escape velocity at each cycle.

In fact, repeat the whole process at about 10^65 Hz (see this post for a calculation of this frequency). (2014 edit – Perhaps this frequency is way off… see May 2014.)

Then you have the capability to produce an acceleration 10^42 times the normal classical gravitational acceleration on an object. Take Ms and Mt to both be the mass of the lightest charged particle, the electron. In the example above, I guess one of the particles is a positron, since there is a net attraction. Attraction vs repulsion is a phase thing here: if both particles disappear and re-appear at the same time (with the light travel time between them taken into account), then you would have repulsion.

This is the source of the electric charge: the Coulomb field is a consequence of Gravity – a phenomenon, not a fundamental field.

Obviously not a complete model at this point!

Here are some nice things about this:

  • Obviously covariant, GR friendly (as long as you can stomach the varying mass thing).
  • If correct, things like the Maxwell equations should drop out. That would be a telling feature.
  • It forms a way to unify gravity with the other forces of nature.
  • It does not use the well worn QFT as a starting point, which has never really amounted to much.
Maxwell Equations
We now have a Coulomb-strength field with repulsion and attraction (caused by different phase locking), set in a covariant GR framework. Maxwell’s equations can be derived from Coulomb’s law and Special Relativity: see for example this paper by Richard E. Haskell.
Questions:
  • Why the phase lock?
  • What about QED and its exact predictions?
  • What is the mechanism that controls the mass swings?
  • What about the ‘other’ properties of the electron – the gyromagnetic ratio, etc.
  • Can this model be used for nuclear forces as well?
  • What about quantum effects? Can time and energy be used at these scales?
Hints to answers:
  • Perhaps phase lock is the wrong way to think about the interaction, and something more like QED provides a better way to think about repulsion vs attraction, etc.
  • QED is modeled with the exchange of precisely timed phase clocks – the physical model of this may be the pulse exchanges outlined above.
  • General Relativity does not tell us how space is connected. It may not be simply connected.
  • The gyromagnetic ratio of the electron can be found to be 2 from several papers on gravitational models of the electron – those papers assume a classical model for charge, but may still hold. The extremely high frequency of this effect means that on a scale of even femtoseconds we have 10^28 oscillations – so one can likely ignore many effects, and again treat the electron as if it has a classical charge.
  • Nuclear forces may be a result of real, actual particles interacting at distances close enough that non-linear effects and the full theory of General Relativity need to be taken into account. Perhaps get numerical relativists to work on this.
  • Quantum mechanics may be a phenomenon of a multiply connected GR universe, with all the fast clocks and wormhole like behaviour providing enough room to create a (now extant) hidden variables theory of QM.
  • Perhaps the Proton participates in this dance with a much more complicated set of machinery – and is – say not multiply connected, or has a different structure, etc.
Obviously a big pill to swallow. But it does head down the road to integrating the forces of nature.
Tom Andersen
Meaford, On Canada
October 16, 2011 (with personal notes from 1995 – 2011)

I will show with a few simple equations how it could be that electrons and electromagnetic theory can be constructed from GR alone.

1) The electron is some sort of GR knot, wormhole or other ‘thing’, which has one property – its mass oscillates between 0 and 2·me in a wave pattern. (Actually, the mass does not all have to be oscillating; that only changes the math slightly.)

2) Due to Birkhoff’s theorem, the gravitational potential at any time is given by the amount of mass inside a certain radius.

3) Due to 2) above, we can use the simple gravitational formula to describe the potential.

\Phi(r,t)=2\frac{m_eG}{r}sin(\omega t)

This potential exerts a force that depends on the frequency of the varying mass. Taking the time derivative of the potential, holding r steady:

\frac{\partial}{\partial t}\Phi(r,t)=2\omega\frac{m_eG}{r}cos(\omega t)
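
A minimal sympy check of that derivative:

import sympy as sp

r, t, m_e, G, omega = sp.symbols('r t m_e G omega', positive=True)

Phi = 2 * m_e * G / r * sp.sin(omega * t)   # the potential written above
dPhi_dt = sp.diff(Phi, t)

print(dPhi_dt)   # -> 2*G*m_e*omega*cos(omega*t)/r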

With the mass changing, we have monopole gravitational waves emanating from such a structure (and incoming, since the universe is not empty).

The big assumption here is of course the varying mass of the electron. Where does the mass go? The obvious answer is through some sort of wormhole, so perhaps there is another electron somewhere else with the opposite phase of mass. Shades of the Pauli exclusion principle.

There are lots of places on the internet where one can find electron models based on some standing wave, which is what this really amounts to, since electrons would have a huge force on them if the incoming and outgoing waves were not balanced.

History has shown us that all physical theories eventually fail. The failure is always a complete failure in terms of some abstract perfectionist viewpoint, but in reality the failure only amounts to small corrections. Take for instance gravity. Newton’s theory is absurd – gravity travels instantly, etc. But it is also simple and powerful, its predictions working well enough to put people on the Moon.

Quantum Mechanics, it would seem, has a lot of physicists claiming that ‘this time is different’ – that QM is ‘right’, that Nature does play dice. There are certain details yet to be worked out, like how to apply it to fully general curved spacetimes, etc.

Let’s look at what would happen if it were wrong. Or rather, let’s look at one way that it could be wrong.

QM predicts that there is a chance of every event happening. I mean it in the following way – there is a certain probability for an electron (say) to penetrate some sort of barrier (quantum tunneling). As the barrier is made higher and/or wider, the probability of tunneling goes down according to a well-defined formula (see for example this Wikipedia article). Now, the formulas for the tunneling probability do not ‘top out’ – there is a really, really tiny chance that even a slowly moving electron could make it through a concrete wall. What if this is wrong? What if there is a limit to the size of the barrier? Or, put another way, what if there is a limit to probability? Another way to look at this is to say that there is an upper limit on the half-life of a compound. Of course, just as Newton’s theory holds extremely well for most physics, it may be hard to notice that there is not an unlimited amount of ‘quantum wiggle’ available to ‘push’ particles through extremely high barriers.
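
For scale, here is the standard rectangular-barrier transmission estimate, T ≈ exp(−2κL) with κ = sqrt(2m(V−E))/ħ, evaluated for a deliberately everyday-sized barrier; the electron energy and barrier numbers are made up purely for illustration:

import math

# WKB-style transmission estimate T ~ exp(-2*kappa*L) for a rectangular
# barrier; numbers chosen only to show how fast the probability shrinks.
hbar = 1.055e-34
m_e  = 9.109e-31
eV   = 1.602e-19

E = 1.0 * eV          # assumed electron energy
V = 10.0 * eV         # assumed barrier height
L = 0.1               # assumed barrier width: a 10 cm "wall"

kappa = math.sqrt(2 * m_e * (V - E)) / hbar
log10_T = -2 * kappa * L / math.log(10)   # work in logs: T underflows a float

print(f"kappa = {kappa:.2e} 1/m")
print(f"T ~ 10^({log10_T:.2e})")          # about 10^(-1.3e9): absurdly small, but never exactly zero

The formula keeps returning a nonzero number however large the barrier gets; the hypothesis floated above is that real physics might instead cut off somewhere.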

Steven Weinberg has posted a paper about a class of theories that try to solve the measurement problem in QM by having QM fail. (It fails a little at a time, so we need big, messy physics to make the wave collapse.) I agree fully with his idea – that we have to modify QM to solve the measurement problem.