Science

Erik Verlinde Says We Don’t Need Dark Matter. His Reasoning is More Mysterious.

Verlinde has been claiming that dark matter is unnecessary to explain the effects attributed to it. Instead, he offers a new theory of gravity.

An image of the Bullet Cluster. Though Verlinde's theory has scored a small preliminary win, it will have to explain the anomalous consequences of the Bullet Cluster collision. Credit: Wikisky

On September 23, 1846, Johann Gottfried Galle and Heinrich d’Arrest, two astronomers at the Berlin Observatory, discovered the planet later called Neptune. However, the credit goes to the French mathematician Urbain Le Verrier, who had used Isaac Newton’s theory of gravity to predict that Uranus’s strange orbit around the Sun could be accounted for if there were a planet in an orbit beyond it. And he was right.

On that Wednesday, or probably the day after, Le Verrier would have been thrilled: his earlier prediction of Neptune’s existence hadn’t brought him the attention he thought it would; now, France wanted to call the new planet ‘Le Verrier’s Planet’. Wanting to capitalise on this turn of fortune, Le Verrier shifted his gaze towards Mercury, which also had (and continues to have) a strange orbit.

Today, astronomers call it the precession of the perihelion. Mercury’s orbit is elliptical. Perihelion is the point in a planet’s orbit when it is closest to its star. Precession is the movement of the axis of a spinning object. Together, the ‘precession of the perihelion’ implies that the orientation of Mercury’s elliptical orbit around the Sun is changing.

Le Verrier’s success with Neptune was admittedly due to his assuming that Newton’s laws were perfect. And after being rewarded for it, he assumed once again that they would explain Mercury’s orbital precession if there were another planet between it and the Sun. In 1859, he predicted such a planet and named it Vulcan. It has never been found. The precession problem was finally resolved by Albert Einstein’s general theory of relativity, in 1915.
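For context, the anomaly general relativity resolved is small but precisely measured. The extra perihelion advance per orbit is a standard textbook result (not stated in the article, but worth recording):

```latex
% Extra perihelion advance per orbit predicted by general relativity,
% for a planet of semi-major axis a and eccentricity e orbiting a star of mass M:
\Delta\phi = \frac{6\pi G M}{c^{2}\, a\, (1 - e^{2})}
% For Mercury this works out to about 43 arcseconds per century --
% exactly the residual left over after all Newtonian planetary
% perturbations are accounted for, i.e. the discrepancy Le Verrier
% invoked Vulcan to explain.
```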

Le Verrier, in trying to solve a problem based on the assumption that a theory of gravity had to be right, predicted the existence of two new planets. One was found while the other knocked back against his assumption. Then again, predicting planets is one thing – predicting particles is entirely another.

Verlinde and Vulcan

Most cosmologists today concur that over 95% of the universe is made of mysterious entities called dark energy and dark matter. Neither has been directly detected – but belief in their existence has been reinforced by multiple independent discoveries that imply it.

But more importantly, just as Le Verrier predicted Neptune assuming Newton had to be right, astrophysicists as far apart in time as Fritz Zwicky and Vera Rubin predicted the existence of dark matter assuming Einstein had to be right. And where Le Verrier said there would be a planet, physicists today say there are dark matter particles. But are there, really?

Multiple searches for these particles have turned up blank. But that doesn’t mean they’re not there: different experiments scan different energy ranges for signs of the particles. If one range shows nothing, we move on to another, or probe the same range with a more sensitive instrument. There is hope that something will be found. Something has to be, eh?

A Dutch string theorist named Erik Verlinde thinks otherwise. According to him, dark matter needn’t exist; its supposed existence is an artefact of our not having explored Einstein’s theories fully. For Verlinde, dark matter is Le Verrier’s Vulcan, waiting to be corrected by a more advanced theory of gravity. He also claims he has the math to back himself up.

In January 2010, Verlinde submitted a paper to the arXiv pre-print server called ‘On the Origin of Gravity and the Laws of Newton’. Another paper was submitted on November 7, 2016. In them, his thesis is that both “spacetime and gravity emerge together from the entanglement structure of an underlying microscopic theory” – that they are not fundamental properties of this universe itself but the product of a deeper mechanism we are yet to discover.

Specifically, he says in the 2010 paper that gravity is an entropic force – the product of a system that tends to increase its entropy – while the 2016 paper moves away from that position to claim gravity is about entanglement instead. Entanglement is a quantum mechanical phenomenon in which some properties of two objects become interdependent such that, even if the objects are far apart, measuring a property of one instantaneously determines the corresponding property of the other.
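The core of the 2010 argument can be sketched in a few lines. Verlinde combines a Bekenstein-style entropy change for a particle near a holographic screen with the Unruh temperature; the expressions below follow the standard presentation of his derivation:

```latex
% An entropic force obeys F \Delta x = T \Delta S.
% Verlinde posits the entropy change as a mass m moves a distance
% \Delta x towards the holographic screen:
\Delta S = 2\pi k_B \,\frac{m c}{\hbar}\, \Delta x
% and takes the screen temperature to be the Unruh temperature of an
% observer with acceleration a:
k_B T = \frac{1}{2\pi}\,\frac{\hbar a}{c}
% Substituting both into F \Delta x = T \Delta S, the factors of
% 2\pi, \hbar, c and k_B cancel, recovering Newton's second law:
F = m a
% A similar counting of information bits on a spherical screen of
% radius r recovers F = G M m / r^2.
```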

Drawing on string theory, quantum information theory and black hole astrophysics, Verlinde argues that the anomalous gravitational forces found in large galaxies – attributed to dark matter – are the result of normal matter’s complex interactions with dark energy. Specifically: when some dark energy is displaced from its position and tries to move back to it, it pushes on matter – which is then perceived as a gravitational force.

On the crackpot index

The Czech theoretical physicist Luboš Motl isn’t buying it, however, and picked up on the flip-flop. “Well, it can’t be an entropic force because whenever entropic forces act, the entropy goes strictly up so the entropic forces are intrinsically irreversible,” he wrote on his blog, “while a comet may be closer to the Sun, further from the Sun, closer to the Sun, and reverse things just fine.” Later, about the entanglement: “He doesn’t have any particular field theory model similar to Einstein’s equations or any other effective quantum field theory. To summarise, he doesn’t have any model at all” to explain what he’s doing.

This view was echoed by Columbia University mathematician Peter Woit: “I haven’t computed a John Baez crackpot index for this, but I’d suspect it’s pretty high.” The index, conceived by Baez – a mathematical physicist at the University of California, Riverside – in 1998, awards points to a physicist based on the fulfilment of thirty-seven checks. Woit probably gave Verlinde the full 50 points for #37: “for claiming you have a revolutionary theory but giving no concrete testable predictions”.

Some others are cautious, even hopeful. Mark Van Raamsdonk, a physicist at the University of British Columbia, Vancouver, has said Verlinde’s thesis is moving in “definitely an important direction”. In its early days, Robbert Dijkgraaf, director of the Institute for Advanced Study, Princeton, had told Verlinde: “Something is happening here and this is going to have an impact”. And it seems Verlinde himself doesn’t mind the criticism: he said in a 2013 interview that “if you change the way people think, it creates some resistance”.

Woit’s and Motl’s views also double as a referendum on the integrity of the Dutch press, which has been playing up Verlinde’s ideas in a nationalistic frenzy (not unlike India’s local press jumping at chances to play up Indian achievements). Woit in fact says in a comment on his blog that Verlinde’s peers will ignore his papers “no matter how much press it gets”. It hasn’t helped either that Verlinde received EUR 18.5 million (Rs 130 crore) in 2012 to set up a theoretical physics institute to continue his work, as well as EUR 6.5 million (Rs 45.8 crore) in grants and donations over the last few years.

What then of the press for a new experiment that claims to have verified the emergent gravity idea? For one, the result is preliminary. For another, it doesn’t address the cosmological phenomena that dark matter can explain but that Verlinde’s theory so far cannot.

The experiment was performed by astrophysicists from the Leiden Observatory, led by Margot Brouwer, a PhD candidate. They calculated how more than 33,000 galaxies catalogued before 2015 curved the path of light around themselves (a consequence of Einstein’s general theory) – and used that to infer the strength of the gravitational forces at work. When they compared their results with the strength Verlinde’s emergent gravity predicts, there was but a slight mismatch.

Brouwer told New Scientist: “But then if you mathematically factor in the fact that Verlinde’s prediction doesn’t have any free parameters [like Le Verrier had one in the form of Vulcan], whereas the dark matter prediction does, then you find Verlinde’s model is actually performing slightly better.” Yes, it is encouraging, but no, still not a revolution.

In another show of (inadvertent) support, three American physicists published a paper in the journal Physical Review Letters on November 9 – two days after Verlinde’s arXiv upload – suggesting that the acceleration of stars orbiting a galaxy’s centre could be predicted by normal (technically, baryonic) matter alone. This conclusion contests one of the fundamental motivations for dark matter: that it, too, influences the acceleration.

All sorts to make a MOND

However, Brouwer’s experiment is only a very small start, and the Americans’ paper has begun to attract alternative interpretations of its data. Even if emergent gravity exists, Verlinde will still have to supply the underlying scientific model – the substrate on which his grander visions are founded – and there he has a way to go.

The origins and prevalence of dark matter are closely tied to the cosmic microwave background, the universe-filling haze of radiation (measuring −270.4° C, or about 2.7 K) left over from the Big Bang. Any changes proposed to the composition of dark matter thus have to account for corresponding changes in the microwave background as well.

Verlinde, for his part, builds upon some of the ideas of the theoretical framework called modified Newtonian dynamics (MOND), pioneered by the Israeli physicist Mordehai Milgrom. While MOND has been able to predict how the universe’s first massive galactic clusters and voids could have formed, it has been more controversial than the competing Lambda–Cold Dark Matter (LCDM) model, which does a better job of explaining the radiation emitted by galaxies, for example.
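For context, Milgrom’s proposal modifies Newtonian dynamics only at very low accelerations, below an empirical scale a₀; the figures below are the standard published values, not taken from this article:

```latex
% MOND: ordinary Newtonian acceleration a_N survives at high
% accelerations, but deep below the scale a_0 the effective
% acceleration interpolates to a geometric mean:
a \approx a_N \quad (a \gg a_0), \qquad a \approx \sqrt{a_N\, a_0} \quad (a \ll a_0)
% with the empirical scale a_0 \approx 1.2 \times 10^{-10}\ \mathrm{m\,s^{-2}}.
% In the deep-MOND regime, setting a = v^2/r and a_N = GM/r^2 gives
% v^4 = G M a_0, independent of radius -- flat galactic rotation
% curves, the behaviour otherwise attributed to a dark matter halo.
```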

But cosmologists don’t know how LCDM could account for some of MOND’s correct predictions, nor what a relativistic parent theory of MOND – one that makes room for the speed of light – could look like. This is why it is significant for a theory like Verlinde’s to be able to explain observations that push back on both MOND and LCDM. A famous example is the Bullet Cluster.

About 150 million years ago, in the constellation Carina, a smaller sub-cluster of galaxies (the ‘bullet’) began to pass through a larger one at 9.6 million km/hr, pummelling a small cloud of gas at 70 million degrees Celsius into a larger one at 100 million degrees Celsius. The stars of the galaxies in both clusters were slowed but otherwise unaffected. The clouds of gas (i.e. normal matter) were slowed even further by electromagnetic interactions and emitted X-rays.

The dark matter, which by its (admittedly inchoate) definition interacts with normal matter almost solely through gravity, evaded the clouds and remained in two separate clumps. Effectively, the stars, the gas clouds and the dark matter all came away separately, leaving each exposed for individual scrutiny.

When scientists calculated where the centre of mass in the new scheme of things lay, they found it displaced by a distance that ‘normal’ gravity alone couldn’t explain. There had to be significant amounts of dark matter in the region. Milgrom then argued that MOND could explain the discrepancy – except that his results were off by a factor of 100 (although without a MOND-like theory in the picture, the discrepancy was closer to a factor of 10 billion).

On the other hand, X-ray and gravitational lensing data from the collision have shown up LCDM’s inability to explain the large velocity at which the clusters would have had to collide (although some studies – here and here, for example – have argued that the Bullet Cluster could be an exception to the rule). Other examples of similarly perplexing clusters are Abell 520 and MACS J0025.4-1222.

So even if Verlinde’s idea has received a small leg-up from Brouwer’s gravitational lensing test, it is still wanting in both theory and experiment. In fact, even if string theory does portend a post-empirical era – a time when a theory could be held to be true without making any experimentally testable predictions – its theoretical underpinnings remain, at the least, controversial.

Perhaps it is not yet Le Verrier’s Vulcan after all.

Note: On December 22, the response to the Americans’ paper was reworded to say “alternative interpretations” instead of “challenge”.


  • Pentcho Valev

    “The precession problem was later resolved by Albert Einstein’s general theory of relativity, in 1915.”

    No. The theory was adapted, empirically, to the known-in-advance result. Here Michel Janssen describes endless empirical groping, fudging and fitting until “excellent agreement with observation” was reached:

    Michel Janssen: “But – as we know from a letter to his friend Conrad Habicht of December 24, 1907 – one of the goals that Einstein set himself early on, was to use his new theory of gravity, whatever it might turn out to be, to explain the discrepancy between the observed motion of the perihelion of the planet Mercury and the motion predicted on the basis of Newtonian gravitational theory. […] The Einstein-Grossmann theory – also known as the “Entwurf” (“outline”) theory after the title of Einstein and Grossmann’s paper – is, in fact, already very close to the version of general relativity published in November 1915 and constitutes an enormous advance over Einstein’s first attempt at a generalized theory of relativity and theory of gravitation published in 1912. The crucial breakthrough had been that Einstein had recognized that the gravitational field – or, as we would now say, the inertio-gravitational field – should not be described by a variable speed of light as he had attempted in 1912, but by the so-called metric tensor field. The metric tensor is a mathematical object of 16 components, 10 of which independent, that characterizes the geometry of space and time. In this way, gravity is no longer a force in space and time, but part of the fabric of space and time itself: gravity is part of the inertio-gravitational field. Einstein had turned to Grossmann for help with the difficult and unfamiliar mathematics needed to formulate a theory along these lines. […] Einstein did not give up the Einstein-Grossmann theory once he had established that it could not fully explain the Mercury anomaly. He continued to work on the theory and never even mentioned the disappointing result of his work with Besso in print. So Einstein did not do what the influential philosopher Sir Karl Popper claimed all good scientists do: once they have found an empirical refutation of their theory, they abandon that theory and go back to the drawing board. 
[…] On November 4, 1915, he presented a paper to the Berlin Academy officially retracting the Einstein-Grossmann equations and replacing them with new ones. On November 11, a short addendum to this paper followed, once again changing his field equations. A week later, on November 18, Einstein presented the paper containing his celebrated explanation of the perihelion motion of Mercury on the basis of this new theory. Another week later he changed the field equations once more. These are the equations still used today. This last change did not affect the result for the perihelion of Mercury. Besso is not acknowledged in Einstein’s paper on the perihelion problem. Apparently, Besso’s help with this technical problem had not been as valuable to Einstein as his role as sounding board that had earned Besso the famous acknowledgment in the special relativity paper of 1905. Still, an acknowledgment would have been appropriate. After all, what Einstein had done that week in November, was simply to redo the calculation he had done with Besso in June 1913, using his new field equations instead of the Einstein-Grossmann equations. It is not hard to imagine Einstein’s excitement when he inserted the numbers for Mercury into the new expression he found and the result was 43″, in excellent agreement with observation.”

    Special relativity was deductive (even though a false postulate and an invalid argument spoiled it from the very beginning) but general relativity was an empirical model, analogous to the empirical models defined here:

    Quote: “The objective of curve fitting is to theoretically describe experimental data with a model (function or equation) and to find the parameters associated with this model. Models of primary importance to us are mechanistic models. Mechanistic models are specifically formulated to provide insight into a chemical, biological, or physical process that is thought to govern the phenomenon under study. Parameters derived from mechanistic models are quantitative estimates of real system properties (rate constants, dissociation constants, catalytic velocities etc.). It is important to distinguish mechanistic models from empirical models that are mathematical functions formulated to fit a particular curve but whose parameters do not necessarily correspond to a biological, chemical or physical property.”

    Pentcho Valev

  • http://cosmic.lifeform.org/ Thomas Lee Elifritz

It’s nice to finally see an honest, unbiased and well researched article in the popular press on this subject.

    However, one key point is missing. The well motivated, guaranteed to exist, and now ready for prime time – cosmic QCD axion.

    I’m writing up the results of my personal adventure in this realm right now. You all know where to find me, and it.

  • Silnor

    Both the science and the way it is presented are fascinating. A pertinent example here is the statement “the Americans’ paper has already [b]een challenged.” This is not a correct depiction of the situation. As the American first author of said paper, I am intimately familiar with both it and the paper that is cited as “challenging” it. The paper we wrote is a data paper. It shows what the data for galaxies do, irrespective of how one interprets those data (dark matter/MOND/Verlinde/flying monkeys). The “challenging” paper merely presents an interpretation of our data in the context of dark matter. That is not a challenge; it is a theoretical attempt to explain the data – exactly the kind of work we hoped to inspire. Depicting this as a “challenge” makes it sound like one paper contradicts the other. A quick fact check: our paper has been thoroughly refereed, and passed a number of very high hurdles that fewer than 1% of submitted papers pass. The “challenging” paper was submitted to the arXiv unrefereed. There have been a number of further preprints along the same line that have appeared. These are haggling over interpretation, not the data.

    • The Wire

      Thanks for the comment, Silnor. You’re right – I apologise for the overreaching language in the article. I’ve now reworded it to say “… the Americans’ paper has started to see alternative interpretations emerge for its data”.–VM

      • Silnor

        Thanks. Much appreciated.

        – Stacy
