
An article published in the journal “Astronomy & Astrophysics” reports a method to measure the expansion rate of the universe that takes into account the differences among the type Ia supernovae used. A team of researchers led by Nandita Khetan, a Ph.D. student at Italy’s Gran Sasso Science Institute and an associate of the Italian National Institute of Nuclear Physics, proposed calibrating the distances of those supernovae using the surface brightness fluctuations (SBF) of their host galaxies. The resulting value is closer to those already obtained with other methods than the value obtained without that calibration. This doesn’t solve the problem of the widely divergent values of the so-called Hubble constant, but it suggests that the problem may be due to instrumental inaccuracies and may not require new physics.
The Hubble constant, as the expansion rate of the universe is called, has been calculated in several different ways, ranging from the cosmic microwave background radiation to supernovae, quasars, Cepheids, red giants, and more. What has been defined as a tension in the field of physics is caused by incompatible values: too different even taking their margins of error into account. One possibility is that we need to expand our knowledge of physics to find the explanation, but perhaps the problem lies in the limits of our instruments. The image (courtesy Khetan et al.) shows various values of the Hubble constant calculated in recent years with different methods.
One of the methods used to calculate the Hubble constant is based on type Ia supernovae, the explosions of white dwarfs that stripped gas from nearby companion stars until reaching a critical mass threshold.
These measurements were calibrated using variable stars called Cepheids present in the same galaxies. The most recent result has a probability peak at 73.2 km/s per megaparsec. That’s not only much higher than the 67.4 km/s per megaparsec calculated from the cosmic microwave background radiation mapped by ESA’s Planck space probe: even taking the error margins of the two measurements into account, they remain incompatible.
Nandita Khetan’s team tried to calculate the Hubble constant using a sample of 24 type Ia supernovae, calibrating their distances with another method, that of surface brightness fluctuations. The result is 70.5 km/s per megaparsec, a value that is still higher than the one derived from the cosmic microwave background but lower than the one obtained with the Cepheid calibration.
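The degree of agreement between such measurements can be quantified as a tension in units of their combined standard deviation. A minimal sketch of that arithmetic follows; note that the 1σ error bars used here (about ±1.3 for the Cepheid-calibrated value, ±0.5 for the Planck value, and roughly ±4.1 for the SBF-calibrated value, combining statistical and systematic errors) are not stated in this article and are assumptions for illustration:

```python
import math

def tension_sigma(h1, err1, h2, err2):
    """Discrepancy between two H0 measurements, expressed in units
    of their combined standard deviation (errors added in quadrature)."""
    return abs(h1 - h2) / math.hypot(err1, err2)

# Values in km/s per megaparsec; the error bars are assumed, not from the article.
cepheid = (73.2, 1.3)   # Cepheid-calibrated type Ia supernovae
planck  = (67.4, 0.5)   # Planck cosmic microwave background
sbf     = (70.5, 4.1)   # SBF-calibrated type Ia supernovae (stat + sys combined)

print(f"Cepheid vs Planck: {tension_sigma(*cepheid, *planck):.1f} sigma")
print(f"SBF vs Planck:     {tension_sigma(*sbf, *planck):.1f} sigma")
```

With these assumed uncertainties, the Cepheid-calibrated value sits more than 4σ from the Planck value, while the SBF-calibrated value is less than 1σ away, which illustrates how a different calibration can ease the discrepancy.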
The differences in results depending on the type of calibration used are considerable. This shows how difficult it is to calculate intergalactic distances with sufficient accuracy to obtain reliable measurements of the expansion rate of the universe. The environment around the supernovae could have a crucial influence, because Cepheids are found in spiral galaxies but not in elliptical galaxies, where the surface brightness fluctuations method is generally applied.
In essence, perhaps it’s too early to say that new physics is needed to solve the problem of measuring the Hubble constant. To be clear, the debate will continue, because in recent years this value has been calculated with several different methods giving different results, which has contributed to the tension. However, this new study suggests that there is some room for interpreting the discrepancies.
Once the James Webb space telescope is finally launched, it will be possible to make new observations that should offer more precise measurements and, together with the ground-based Vera C. Rubin Observatory, take greater advantage of the surface brightness fluctuations method. Surely, in the meantime, there will be other developments around the Hubble constant problem, and perhaps it will finally be possible to obtain results precise enough to solve the mystery.
