Does the fine structure constant, a fundamental constant, change with time? The physical meaning of the fine structure constant

Colleague, Wolfgang Pauli famously said that after death he would try to find out from the Devil the meaning of the fine structure constant. Why the Devil, exactly?

Perhaps, my friend, because in Feynman's apt phrase the very fact of the existence of this mysterious number is "a curse for all physicists." Indeed, for more than half a century the physical meaning of this dimensionless constant remained a great mystery, because no one knew where this magic number came from.

To make sense of this, it is necessary to recall two constants:
- Kepler's constant: Kp = v^2*R, J*m/kg (or m^3/s^2) and
- Planck's constant: h = m*v*R, J*s (or kg*m^2/s).

If we substitute into Kepler's constant the minimum possible value of the gravitational potential (the maximum in absolute value), we obtain the minimum possible orbital radius, which we call the gravitational radius (this radius is associated with the gravitational field):

Rg = Kp/c^2, m.

If we substitute the value of the maximum speed into Planck's constant, we get another minimum possible radius, which we call the Compton radius (this radius is related to the electromagnetic field):

Rem = h/(m*c), m.

The ratio of these radii for the hydrogen atom (the simplest case) gives us the value of the fine structure constant:

Rg/Rem = (Kp*m)/(h*c) = a = 1/137.036.

Colleague, is that all?

No, that is not all, my friend. This is true (as already mentioned) only for the hydrogen atom, where the field mass equals the electron mass (m = me) and the gravitational radius is the so-called "classical electron radius" (Rg = re). But it is already clear from this that everything comes down to the ratio of the two minimum possible radii (gravitational and electromagnetic) in the potential field of the atom.

For many, the fine structure constant was a quantitative characteristic of only the electromagnetic interaction, but in fact it characterizes the ratio of the geometric parameters of the gravitational and electromagnetic fields.

The problem here is that many of us cannot recognize the real presence of gravity in the potential field of an atom, because, in accordance with the so-called "universal" law of gravity, the effect of gravity in the field of an atom is vanishingly small.

Afraid to question the "universal" law yet again, we conveniently "forget" that Kepler's laws work remarkably well in the microworld (especially his third law). The physicists who applied these "laws of the heavens" to the field of the atom (Max Born, Eduard Shpolsky ...) can, unfortunately, be counted on the fingers of one hand. Therefore we continue to call the gravitational radius of the hydrogen atom the classical electron radius. And we are forced to recognize this as an indisputable FACT.

Colleague, what is the meaning of the fine structure constant for the general case?

The meaning remains the same: this amazing constant characterizes the ratio of the geometric parameters of the gravitational and electromagnetic fields.

However, it must be remembered that for the general case, the product of the field mass and the Compton electromagnetic radius is a constant value (follows from the elementary theory of the Compton effect):

M*rem = me*re/a = const

At the same time, the product of the field mass and the gravitational radius depends on the value of the electric charge of the field (follows from the well-known equation m*rg/q^2 = me*re/e^2 = 10^-7 kg*m/C^2) :

M*rg = me*re*Z^2, where Z = q/e.

Therefore, for the general case we have: rg = rem*Z^2*a, or rg/rem = Z^2*a.
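The stated ratio of the radii can be checked numerically. The sketch below is my own illustration, not part of the original text; it uses standard CODATA values, identifies the "gravitational radius" with the classical electron radius and the "Compton radius" with the Compton wavelength. Note one caveat: the quoted value 1/137.036 only comes out if the reduced Planck constant ħ = h/2π is used in the Compton radius; with h itself, as written above, the ratio would be α/2π ≈ 1/861.

```python
import math

# CODATA 2018 values (SI units)
c = 2.99792458e8         # speed of light, m/s
h = 6.62607015e-34       # Planck constant, J*s
hbar = h / (2 * math.pi) # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

# "Gravitational radius" of the hydrogen atom in the text's sense
# = classical electron radius
r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)

# "Compton radius" computed with the *reduced* Planck constant
r_C = hbar / (m_e * c)

alpha = r_e / r_C
print(r_e)        # ~2.818e-15 m
print(r_C)        # ~3.862e-13 m
print(1 / alpha)  # ~137.036
```

The ratio indeed reproduces the fine structure constant, since r_e/(ħ/m_e c) = e²/(4πε₀ħc) = α by construction.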

How unimaginably strange the world would be if physical constants could change! For example, the so-called fine structure constant is approximately equal to 1/137. If it had a different value, then perhaps there would be no difference between matter and energy.

There are things that never change. Scientists call them physical constants, or world constants. It is believed that the speed of light $c$, the gravitational constant $G$, the electron mass $m_e$ and some other quantities always and everywhere remain unchanged. They form the basis on which physical theories are based and determine the structure of the universe.

Physicists are working hard to measure the world constants with ever-greater accuracy, but no one has yet been able to explain why their values are what they are. In the SI system $c = 299792458$ m/s, $G = 6.673\cdot10^{-11}$ N$\cdot$m$^2$/kg$^2$, $m_e = 9.10938188\cdot10^{-31}$ kg: completely unrelated quantities with only one property in common: if any of them changed even slightly, the existence of complex atomic structures, including living organisms, would be in serious doubt. The desire to justify the values of the constants has become one of the incentives for developing a unified theory that fully describes all existing phenomena. With its help, scientists hoped to show that each world constant can have only one possible value, determined by internal mechanisms behind the deceptive arbitrariness of nature.

The best candidate for the title of a unified theory is M-theory (a variant of string theory), which can be considered consistent if the Universe has not four space-time dimensions, but eleven. Therefore, the constants we observe may not actually be truly fundamental. True constants exist in full multidimensional space, and we see only their three-dimensional "silhouettes".

OVERVIEW: WORLD CONSTANTS

1. In many physical equations, there are quantities that are considered constant everywhere - in space and time.

2. Recently, scientists have doubted the constancy of world constants. Comparing the results of observations of quasars and laboratory measurements, they come to the conclusion that chemical elements in the distant past absorbed light differently than they do today. The difference can be explained by a change of several millionths of the fine structure constant.

3. Confirmation of even such a small change will be a real revolution in science. The observed constants may turn out to be only "silhouettes" of the true constants that exist in multidimensional space-time.

Meanwhile, physicists came to the conclusion that the values of many constants may be the result of random events and interactions between elementary particles in the early stages of the history of the universe. String theory allows for the existence of a huge number ($10^{500}$) of worlds with different self-consistent sets of laws and constants (see Landscape of String Theory, In the World of Science, No. 12, 2004). So far, scientists have no idea why our combination was selected. Perhaps, as a result of further research, the number of logically possible worlds will decrease to one, but it is possible that our Universe is only a small part of the multiverse, in which various solutions of the equations of a unified theory are realized, and we observe just one of the variants of the laws of nature (see Parallel Universes, In the World of Science, No. 8, 2003). In this case, there is no explanation for many world constants, except that they constitute a rare combination that allows the development of consciousness. Perhaps the universe we observe has become one of many isolated oases surrounded by an infinity of lifeless outer space, a surreal place where forces of nature completely alien to us dominate, and particles like electrons and structures like carbon atoms and DNA molecules are simply impossible. Trying to get there would be fatal.

String theory was also developed to explain the apparent arbitrariness of physical constants, so its basic equations contain only a few arbitrary parameters. But so far it does not explain the observed values ​​of the constants.

Reliable ruler

In fact, the use of the word "constant" is not entirely legitimate. Our constants could change in time and space. If the extra spatial dimensions changed in size, the constants in our three-dimensional world would change with them. And if we looked far enough into space, we could see regions where the constants take on different values. Since the 1930s, scientists have speculated that the constants may not be constant. String theory gives this idea theoretical plausibility and makes the search for inconstancy all the more important.

The first problem is that the laboratory setup itself can be sensitive to changes in constants. The size of all the atoms could increase, but if the ruler used for measurements also became longer, nothing could be said about the change in the size of the atoms. Experimenters usually assume that the measurement standards (rulers, weights, clocks) are unchanged, but this cannot be achieved when checking constants. Researchers should pay attention to dimensionless constants - just numbers that do not depend on the system of units, for example, the ratio of the mass of a proton to the mass of an electron.

Does the internal structure of the universe change?

Of particular interest is the quantity $\alpha = e^2/(2\epsilon_0 h c)$, which combines the speed of light $c$, the electric charge of the electron $e$, Planck's constant $h$, and the so-called vacuum permittivity $\epsilon_0$. It is called the fine structure constant. It was first introduced in 1916 by Arnold Sommerfeld, one of the first to attempt to apply quantum mechanics to electromagnetism: $\alpha$ relates the relativistic ($c$) and quantum ($h$) characteristics of electromagnetic ($e$) interactions involving charged particles in empty space ($\epsilon_0$). Measurements have shown that this value is 1/137.03599976 (approximately 1/137).
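The quoted value is easy to reproduce. The following check is my own illustration, not part of the original article; it plugs CODATA 2018 values into Sommerfeld's definition:

```python
# Numerical check of alpha = e^2 / (2 * eps0 * h * c), CODATA 2018 values (SI).
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
h = 6.62607015e-34       # Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

alpha = e**2 / (2 * eps0 * h * c)
print(alpha)      # ~7.297e-3
print(1 / alpha)  # ~137.036
```

Because $\alpha$ is a pure ratio, the same number comes out in any consistent system of units, which is exactly why it is the right quantity to monitor for variation.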

If $\alpha$ had a different value, the whole world would be different. If it were smaller, the density of solid matter composed of atoms would decrease (in proportion to $\alpha^3$), molecular bonds would break at lower temperatures ($\alpha^2$), and the number of stable elements in the periodic table could increase ($1/\alpha$). If $\alpha$ were too large, small atomic nuclei could not exist, because the nuclear forces binding them would not be able to prevent the mutual repulsion of protons. For $\alpha > 0.1$, carbon could not exist.
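A toy calculation, my own illustration rather than anything from the original, makes the quoted scaling laws concrete for a hypothetical 1% decrease in $\alpha$:

```python
# Toy illustration of the scaling laws quoted above,
# for a hypothetical 1% decrease in alpha.
shrink = 0.99  # ratio alpha_new / alpha_old (assumed value)

density_ratio = shrink**3          # solid density scales as alpha^3
bond_temp_ratio = shrink**2        # bond-breaking temperature scales as alpha^2
stable_elements_ratio = 1 / shrink # number of stable elements scales as 1/alpha

print(round(density_ratio, 4))          # 0.9703 -> ~3% less dense
print(round(bond_temp_ratio, 4))        # 0.9801 -> bonds break ~2% cooler
print(round(stable_elements_ratio, 4))  # 1.0101 -> ~1% more stable elements
```

So even a change of one percent in $\alpha$ would shift bulk material properties by a few percent, which is why the much tighter astrophysical bounds discussed below are so constraining.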

Nuclear reactions in stars are especially sensitive to $\alpha$. For nuclear fusion to occur, the star's gravity must create a temperature high enough to force the nuclei close together despite their tendency to repel each other. If $\alpha$ were greater than 0.1, fusion would be impossible (assuming, of course, that other parameters, such as the ratio of electron and proton masses, remain the same). A change in $\alpha$ of just 4% would affect the energy levels in the carbon nucleus to such an extent that its production in stars would simply cease.

Enlisting nuclear methods

The second, more serious, experimental problem is that measuring changes in the constants requires high-precision equipment that must be extremely stable. Even with atomic clocks, the drift of the fine structure constant can be tracked for only a few years. If $\alpha$ changed by more than $4\cdot10^{-15}$ in three years, the most accurate clocks would detect it. However, nothing of the kind has been recorded so far. It would seem: why is this not a confirmation of constancy? But three years is an instant for the cosmos. Slow but significant changes over the history of the universe could go unnoticed.

LIGHT AND THE FINE STRUCTURE CONSTANT

Fortunately, physicists have found other ways to check. In the 1970s, scientists from the French Atomic Energy Commission noticed some peculiarities in the isotopic composition of ore from the uranium mine at Oklo in Gabon (West Africa): it resembled nuclear reactor waste. Apparently, about 2 billion years ago a natural nuclear reactor formed at Oklo (see Divine Reactor, In the World of Science, No. 1, 2004).

In 1976, Alexander Shlyakhter of the Leningrad Institute of Nuclear Physics observed that the performance of natural reactors depends critically on the exact energy of a specific state of the samarium nucleus that captures neutrons. And that energy is strongly tied to the value of $\alpha$. So, if the fine structure constant had been slightly different, no chain reaction could have occurred. But it really did occur, which means that over the past 2 billion years the constant has not changed by more than $1\cdot10^{-8}$. (Physicists continue to argue about the exact quantitative results because of the inevitable uncertainty about conditions in a natural reactor.)

In 1962, P. James E. Peebles and Robert Dicke of Princeton University were the first to apply such an analysis to ancient meteorites: the relative abundance of isotopes resulting from their radioactive decay depends on $\alpha$. The most sensitive limitation comes from the beta decay that converts rhenium into osmium. According to recent work by Keith Olive of the University of Minnesota and Maxim Pospelov of the University of Victoria in British Columbia, at the time the meteorites formed $\alpha$ differed from its current value by no more than $2\cdot10^{-6}$. This result is less accurate than the Oklo data, but it reaches further back in time, to the origin of the solar system 4.6 billion years ago.

To explore possible changes over even longer periods of time, researchers must look to the heavens. Light from distant astronomical objects goes to our telescopes for billions of years and bears the imprint of the laws and world constants of those times when it just began its journey and interaction with matter.

Spectral lines

Astronomers joined the story of the constants shortly after the discovery of quasars in 1965, when they were first identified as bright light sources at great distances from the Earth. Because the path of light from a quasar to us is so long, it inevitably crosses the gaseous neighborhoods of young galaxies. The gas absorbs the quasar's light at specific frequencies, imprinting a barcode of narrow lines on its spectrum (see box below).

SEARCHING FOR CHANGES IN QUASAR RADIATION

When gas absorbs light, the electrons in its atoms jump from lower energy levels to higher ones. The energy levels are determined by how strongly the atomic nucleus holds the electrons, which depends on the strength of the electromagnetic interaction between them and, therefore, on the fine structure constant. If the constant was different at the time the light was absorbed, or in the particular region of the universe where this happened, then the energy required to move an electron to a new level, and hence the wavelengths of the transitions observed in the spectra, should differ from those observed today in laboratory experiments. The nature of the change in wavelengths depends critically on the distribution of electrons in atomic orbits. For a given change in $\alpha$, some wavelengths decrease while others increase. This complex pattern of effects is difficult to confuse with data calibration errors, which makes such an experiment extremely useful.

When we started this work seven years ago, we faced two problems. First, the wavelengths of many spectral lines had not been measured with sufficient accuracy. Oddly enough, scientists knew much more about the spectra of quasars billions of light years away than about the spectra of terrestrial samples. We needed high-precision laboratory measurements against which to compare the quasar spectra, and we persuaded experimenters to make them. The measurements were carried out by Anne Thorne and Juliet Pickering of Imperial College London, and later by teams led by Sveneric Johansson of the Lund Observatory in Sweden and by Ulf Griesmann and Rainer Kling of the National Institute of Standards and Technology in Maryland.

The second problem was that previous observers used so-called alkali doublets: pairs of absorption lines that appear in atomic gases of carbon or silicon. They compared the intervals between these lines in quasar spectra with laboratory measurements. However, this method failed to exploit one specific phenomenon: variations in $\alpha$ cause not only a change in the interval between an atom's energy levels relative to the lowest-energy level (the ground state), but also a change in the position of the ground state itself. In fact, the second effect is even stronger than the first. As a result, the accuracy of the observations was only $1\cdot10^{-4}$.

In 1999, one of the authors of this article (Webb) and Victor V. Flambaum of the University of New South Wales in Australia developed a technique to take both effects into account. As a result, the sensitivity was increased tenfold. In addition, it became possible to compare different types of atoms (for example, magnesium and iron) and conduct additional cross-checks. Complicated calculations had to be made to determine exactly how the observed wavelengths vary in different types of atoms. Armed with state-of-the-art telescopes and sensors, we decided to test the constancy of $\alpha$ with unprecedented accuracy using the new many-multiplet method.

Revision of views

When we started the experiments, we simply wanted to establish with greater accuracy that the value of the fine structure constant in ancient times was the same as it is today. To our surprise, the results obtained in 1999 showed small but statistically significant differences, which were subsequently confirmed. Using data from 128 quasar absorption lines, we recorded an increase in $\alpha$ of $6\cdot10^{-6}$ over the past 6-12 billion years.

The results of measurements of the fine structure constant do not yet allow final conclusions. Some indicate that it was once smaller than it is now, others do not. Perhaps $\alpha$ changed in the distant past but has now become constant. (The boxes represent the range of the data.)

Bold claims require solid evidence, so our first step was to carefully review our data collection and analysis methods. Measurement errors can be divided into two types: systematic and random. With random inaccuracies, everything is simple. In each individual measurement, they take on different values, which, with a large number of measurements, are averaged and tend to zero. Systematic errors that are not averaged out are more difficult to deal with. In astronomy, uncertainties of this kind are encountered at every turn. In laboratory experiments, instruments can be tuned to minimize errors, but astronomers can't "tune" the universe, and they have to admit that all their data collection methods contain inherent biases. For example, the observed spatial distribution of galaxies is markedly biased towards bright galaxies because they are easier to observe. Identifying and neutralizing such shifts is a constant challenge for observers.

First, we turned our attention to possible distortion of the wavelength scale against which the quasar spectral lines were measured. Such distortion could arise, for example, during the processing of "raw" quasar observations into a calibrated spectrum. Although a simple linear stretching or shrinking of the wavelength scale could not exactly mimic a change in $\alpha$, even an approximate similarity would be sufficient to explain the results. We gradually eliminated simple errors associated with distortions by substituting calibration data for the quasar observations.

For more than two years, we investigated various causes of bias to make sure their impact was negligible. We found only one potential source of serious errors: the magnesium absorption lines. Each of its three stable isotopes absorbs light at different wavelengths, which lie very close to each other and appear in quasar spectra as a single line. Based on laboratory measurements of the relative abundance of the isotopes, researchers judge the contribution of each. Their distribution in the young Universe could have been significantly different from today's if the stars that emitted magnesium were, on average, heavier than their present-day counterparts. Such differences could mimic a change in $\alpha$. But the results of a study published this year indicate that the observed facts are not so easily explained. Yeshe Fenner and Brad K. Gibson of Swinburne University of Technology in Australia and Michael T. Murphy of the University of Cambridge concluded that the isotope abundance required to mimic the change in $\alpha$ would also lead to excess synthesis of nitrogen in the early Universe, which is completely inconsistent with observations. Thus, we must accept the possibility that $\alpha$ really did change.

SOMETIMES IT CHANGES, SOMETIMES IT DOESN'T

According to the hypothesis put forward by the authors of the article, in some periods of cosmic history the fine structure constant remained unchanged, while in others it increased. The experimental data (see the previous inset) are consistent with this assumption.

The scientific community immediately appreciated the significance of our results. Quasar spectra researchers around the world took up measurements at once. In 2003, the teams of Sergei Levshakov of the Ioffe Physico-Technical Institute in St. Petersburg and Ralf Quast of the University of Hamburg studied three new quasar systems. Last year, Hum Chand and Raghunathan Srianand of the Inter-University Centre for Astronomy and Astrophysics in India, Patrick Petitjean of the Institute of Astrophysics, and Bastien Aracil of LERMA in Paris analyzed 23 more cases. Neither group found changes in $\alpha$. Chand argues that any change between 6 and 10 billion years ago must be less than one part in a million.

Why did similar methodologies applied to different source data lead to such a drastic discrepancy? The answer is not yet known. The results obtained by these researchers are of excellent quality, but the size of their samples and the age of the analyzed radiation are significantly smaller than ours. In addition, Chand used a simplified version of the many-multiplet method and did not fully evaluate all the experimental and systematic errors.

The renowned astrophysicist John Bahcall of Princeton has criticized the many-multiplet method itself, but the problems he points out fall into the category of random errors, which are minimized when large samples are used. Bahcall and Jeffrey Newman of Lawrence Berkeley National Laboratory considered emission lines rather than absorption lines. Their approach is much less precise, although it may prove useful in the future.

Legislative reform

If our results are correct, the consequences will be enormous. Until recently, all attempts to estimate what would happen to the Universe if the fine structure constant changed were unsatisfactory. They went no further than treating $\alpha$ as a variable in the same formulas that were derived under the assumption that it is constant. Agree, a very dubious approach. If $\alpha$ changes, then energy and momentum in the associated effects should be conserved, which should affect the gravitational field in the Universe. In 1982, Jacob D. Bekenstein of the Hebrew University of Jerusalem was the first to generalize the laws of electromagnetism to the case of non-constant constants. In his theory, $\alpha$ is treated as a dynamic component of nature, i.e., as a scalar field. Four years ago, one of us (Barrow), together with Håvard Sandvik and João Magueijo of Imperial College London, extended Bekenstein's theory to include gravity.

The predictions of the generalized theory are enticingly simple. Since electromagnetism on a cosmic scale is much weaker than gravity, changes in $\alpha$ of a few millionths have no noticeable effect on the expansion of the Universe. But the expansion significantly affects $\alpha$ because of the imbalance between the energies of the electric and magnetic fields. During the first tens of thousands of years of cosmic history, radiation dominated over charged particles and maintained a balance between the electric and magnetic fields. As the Universe expanded, radiation became rarefied and matter became the dominant element of the cosmos. The electric and magnetic energies became unequal, and $\alpha$ began to increase in proportion to the logarithm of time. About 6 billion years ago, dark energy began to dominate, accelerating the expansion and making it difficult for all physical interactions to propagate in free space. As a result, $\alpha$ became almost constant again.

The described picture is consistent with our observations. The spectral lines of the quasar characterize that period of cosmic history when matter dominated and $\alpha$ increased. The results of laboratory measurements and studies in Oklo correspond to the period when dark energy dominates and $\alpha$ is constant. Of particular interest is the further study of the influence of the change in $\alpha$ on the radioactive elements in meteorites, because it allows us to study the transition between the two named periods.

Alpha is just the beginning

If the fine structure constant changes, then material objects must fall differently. At one time Galileo formulated the weak equivalence principle, according to which bodies in a vacuum fall at the same rate regardless of what they are made of. But changes in $\alpha$ must generate a force acting on all charged particles. The more protons an atom contains in its nucleus, the more strongly it will feel this force. If the conclusions drawn from the analysis of the quasar observations are correct, then the free-fall acceleration of bodies made of different materials should differ by about $1\cdot10^{-14}$. This is 100 times smaller than what can be measured in the laboratory, but large enough to show up in experiments such as STEP (Satellite Test of the Equivalence Principle).

In previous studies of $\alpha$, scientists neglected the inhomogeneity of the Universe. Like all galaxies, our Milky Way is about a million times denser than outer space on average, so it is not expanding with the universe. In 2003, Barrow and David F. Mota of Cambridge calculated that $\alpha$ could behave differently within a galaxy than in emptier regions of space. As soon as a young galaxy condenses and, relaxing, comes into gravitational equilibrium, $\alpha$ becomes constant inside the galaxy but continues to change outside. Thus, experiments on Earth that test the constancy of $\alpha$ suffer from a biased selection of conditions. We have yet to figure out how this affects the verification of the weak equivalence principle. No spatial variations of $\alpha$ have yet been observed. Relying on the homogeneity of the CMB, Barrow recently showed that $\alpha$ does not vary by more than $1\cdot10^{-8}$ between regions of the celestial sphere separated by $10^\circ$.

It remains for us to wait for the emergence of new data and new studies that will finally confirm or refute the hypothesis about the change in $\alpha $. The researchers focused on this constant, simply because the effects due to its variations are easier to see. But if $\alpha$ is truly mutable, then other constants must change too. In this case, we will have to admit that the internal mechanisms of nature are much more complicated than we thought.

ABOUT THE AUTHORS:
John Barrow (John D. Barrow) and John Webb (John K. Webb) began studying physical constants in 1996 during a joint sabbatical at the University of Sussex in England. Barrow explored new theoretical possibilities for changing constants, while Webb carried out observations of quasars. Both authors write popular science books and often appear in television programs.

There are new confirmations that one of the most important constants of modern physics changes over time - and in different parts of the Universe in different ways.


A quasar is a point source of radiation characterized by extremely high intensity and variability. According to modern theories, quasars are the active centers of young galaxies, with black holes at their centers that consume matter with a special appetite.

Why is the Universe the way it is? Why are the numerical ratios of the dimensionless constants exactly as we know them? Why does space have three extended dimensions? Why are there exactly four fundamental interactions, and not, say, five? Why, finally, is everything in it so balanced and so precisely "fitted" together? Today it is popular to believe that if anything were different, if even one of the basic constants had another value, we simply could not ask these questions. This approach is called the anthropic principle: if the constants were related differently, stable elementary particles could not form; if space had more dimensions, the planets could not acquire stable orbits; and so on. In other words, the Universe could not have formed, much less could intelligent organisms like you and me have developed. (More about the anthropic principle is described in the article "The Humanitarian Universe.") In short, we simply appeared in the right place, the only one where we could appear. And perhaps at the right time, as a recent high-profile study of one of the fundamental physical constants suggests.

We are talking about the fine structure constant, a dimensionless quantity that cannot be derived from any formula. It is established empirically as the ratio of the orbital speed of an electron (at the Bohr radius) to the speed of light, and is equal to 1/137.036. It characterizes the strength of the interaction of electric charges with photons. Although it is called a constant, physicists have been debating for decades just how constant it really is.
A somewhat "corrected" value of the constant in different cases could solve certain problems in modern cosmology and astrophysics. And with the arrival of string theory on the scene, many scientists are generally inclined to think that other constants may not be so constant either. Changes in the fine structure constant could indirectly indicate the real existence of additional curled-up dimensions of the Universe, which string theory absolutely requires. All this spurred the search for evidence, or refutations, that the fine structure constant may be different at other points in space and (or) time.

Fortunately, it can be estimated with such an accessible tool as spectroscopy (the fine structure constant was introduced precisely to interpret spectroscopic observations), and to "look into the past" it is enough to look at distant stars. At first, experiments seemed to rule out changes in this constant, but as instruments became more sophisticated and its value could be estimated at greater distances and with greater accuracy, more interesting evidence began to appear. In 1999, for example, Australian astronomers led by John Webb analyzed the spectra of 128 distant quasars and showed that some of their parameters can be explained by a gradual increase in the fine structure constant over the past 10-12 billion years. However, these results have been highly controversial; a 2004 study, for instance, showed no noticeable changes.

And just the other day, the same John Webb made a new sensational report; some experts have called his new work the "discovery of the year" in physics. Earlier, in the late 1990s, Webb and colleagues worked with the Keck Observatory in Hawaii and observed quasars in the northern celestial hemisphere. They concluded that 10 billion years ago the fine structure constant was about 0.0001 smaller and has "grown up" a little since then.
Now, having worked with ESO's VLT telescope in Chile and observed 153 quasars in the southern hemisphere, they obtained the same result, but with the opposite sign: "to the south," the fine structure constant was 0.0001 higher 10 billion years ago and has "decreased" since then. These differences, which the researchers call the "Australian dipole," are highly statistically significant. Most importantly, they may point to a fundamental asymmetry of our Universe, observable both in space and in time. Returning to the anthropic principle with which we started, we can say that we were born not only in an ideal place, but also at an ideal time.

According to Physics World


The fundamental constant of the microworld α ≈ 1/137 was introduced into physics in the 1920s by Arnold Sommerfeld to describe the energy sublevels found experimentally in the emission spectra of atoms. Since then, many other manifestations of the same constant ratio have been revealed in various phenomena involving the interactions of elementary particles. The leading physicists of that time gradually realized the significance of this number, both in the world of elementary particles and in the structure of our Universe as a whole. Suffice it to say that all the main properties and characteristics of microworld objects, from the size of electron orbits in atoms to the binding energies (both between elementary particles and between atoms), and thereby all the physical and chemical properties of matter, are determined by the value of this constant. Later, using this constant, physicists developed a highly effective formal theory, modern quantum electrodynamics (QED), which describes the quantum electromagnetic interaction with fantastic accuracy.
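The claim that α sets the scale of atomic properties can be made concrete with two textbook relations: the Bohr radius a₀ = ħ/(mₑcα) and the hydrogen ground-state binding energy E = ½α²mₑc². These formulas and the CODATA values are standard physics, not quantities given in the article; this is only an illustrative sketch.

```python
import math

# CODATA values (SI units)
alpha = 7.2973525693e-3    # fine structure constant
me = 9.1093837015e-31      # electron mass, kg
c = 299792458.0            # speed of light, m/s
hbar = 1.054571817e-34     # reduced Planck constant, J*s
eV = 1.602176634e-19       # joules per electronvolt

# Size of the electron orbit: Bohr radius a0 = hbar / (me * c * alpha)
a0 = hbar / (me * c * alpha)

# Binding energy of hydrogen's ground state: E = (1/2) * alpha^2 * me * c^2
E_bind = 0.5 * alpha**2 * me * c**2 / eV

print(f"Bohr radius    = {a0:.4e} m")       # ~5.29e-11 m
print(f"Binding energy = {E_bind:.2f} eV")  # ~13.61 eV
```

Both familiar numbers, the 0.53-angstrom atomic size and the 13.6 eV ionization energy, drop out of α together with the electron mass, which is the sense in which chemistry is "determined by the value of this constant."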

From the foregoing one can judge the importance of clarifying the physical meaning and the causal mechanism behind this constant, which has remained an open question in physics since its discovery. In the language of theorists, solving this problem means naming an initial concept for the origin of the constant from which, by successive calculations, one can arrive at its experimentally established value. The significance of the question can be judged from a joking remark by the famous physicist Wolfgang Pauli: "When I die, the first thing I plan to ask the devil is: what is the meaning of the fine structure constant?" Richard Feynman, for his part, considered the very fact of the existence of this mysterious number "a curse for all physicists" and advised good theorists to put it up on the wall and always think about it!

This question has acquired such significance above all because the constant is directly related to the problem of understanding the physical essence of elementary particles: it does not appear separately from them, but as a deep property of theirs. Many physicists have therefore worked hard for years to solve this problem, using different approaches and methods, but so far all their efforts have been unsuccessful.

What does the author propose? He was able to discover that the solution to this "mystery of the 20th century" is in fact contained in our textbooks, in the well-known formulas for waves, if only one calculates carefully! This would mean that α is a classical wave constant. We must warn, however, that the simplest explanation of a riddle can be perplexing if we are not initially inclined to listen to what is offered to us. As experience has shown, many specialists find the presented solution very difficult to accept, although no one disputes the correctness of the result!

What is the reason for this difficulty? Unfortunately, leading modern theorists, overly carried away by formal mathematical theories (which were initially regarded as a temporary compromise), have forgotten about the unresolved fundamental particle-wave dilemma in physics. As a result, it is hard to find a physicist who would not be surprised by the author's approach of representing a particle as a localized standing wave, even though this is formally quite admissible, precisely because of that unresolved dilemma. And this despite the fact that undisputed authorities of physical science, Einstein, Schrödinger, Heisenberg and others, long ago came to a similar conclusion under the pressure of weighty arguments.

In the author's opinion, the presented work and its result can serve as a serious indication that the convictions of these luminaries of physics were correct. That conclusion was once stubbornly ignored by the majority of their colleagues (since the results needed to confirm it could not be obtained at the time), and as a consequence research in this area of theoretical physics went in an unproductive direction. The proposed solution may be the key to revealing the physical essence of elementary particles, and thereby open a clear path to a description of the microworld alternative to the modern formal, phenomenological theories. The decisive word, however, belongs to deep-thinking theoretical experts, who, we hope, will certainly be found and will give an objective assessment of the work presented.

It has been found that the fine structure constant, denoted by the Greek letter α, has changed in space and time since the Big Bang. Specialists not involved in the work have already called this discovery the "news of the year in physics." If the finding holds up, it would mean a violation of a fundamental principle of Einstein's general theory of relativity.

At the same time, the nature of the asymmetry of the fine structure constant could help scientists construct a single unified theory of physics describing the four fundamental interactions (gravity, electromagnetism, and the strong and weak nuclear forces), and better understand the nature of our Universe.

The fine structure constant α is dimensionless and approximately equal to 1/137. It was first described in 1916 by the German physicist Arnold Sommerfeld, who interpreted it as the ratio of the speed of an electron in the first circular orbit of the Bohr model of the atom (the simplest model, in which electrons move around a positively charged nucleus like planets around the Sun) to the speed of light. In quantum electrodynamics, the fine structure constant characterizes the strength of the interaction between electric charges and photons. Its value cannot be predicted theoretically and is introduced on the basis of experimental data. The fine structure constant is one of the twenty-odd "external parameters" of the Standard Model of particle physics, and there have been theoretical indications that it might change.
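Sommerfeld's speed-ratio interpretation can be checked directly. In the Bohr model the electron's speed in the first orbit is v₁ = e²/(4πε₀ħ), a standard textbook result not derived in the article; dividing c by it should give back 137.036.

```python
import math

# CODATA values (SI units)
e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
hbar = 1.054571817e-34     # reduced Planck constant, J*s
c = 299792458.0            # speed of light, m/s

# Electron speed in the first Bohr orbit: v1 = e^2 / (4*pi*eps0*hbar)
v1 = e**2 / (4 * math.pi * eps0 * hbar)

print(f"v1   = {v1:.4e} m/s")   # ~2.19e6 m/s, under 1% of light speed
print(f"c/v1 = {c / v1:.3f}")   # ~137.036, i.e. v1/c is alpha
```

The smallness of v₁/c is also why the Bohr model works at all: relativistic corrections to hydrogen enter only at order α², which is precisely the "fine structure" in the spectral lines that gave the constant its name.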

John Webb, Victor Flambaum and their colleagues at the University of New South Wales have been looking for signs of a change in α since 1998 by studying the radiation of distant quasars. This radiation travels for billions of years to reach the Earth, passing through clouds of gas along the way. Part of it is absorbed at certain wavelengths, from which one can infer the chemical composition of the clouds and, from that, determine what the fine structure constant was billions of years ago. According to the Australian researchers, who studied objects in the northern hemisphere, this value used to be smaller than it is now by one part in 100,000. This result, obtained several years ago, was not accepted by all physicists.

By analyzing 153 quasars in the southern sky with the VLT in Chile, the scientists found that billions of years ago the fine structure constant there was larger than it is now by one part in 100,000.

This asymmetry, called the "Australian dipole," is established with a significance of 4 sigma, which means there is only about one chance in fifteen thousand that the result is a statistical fluke. A spatial variation of α would be evidence that the electromagnetic interaction violates Einstein's equivalence principle, according to which the fine structure constant must be the same no matter where and when it is measured.
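The translation from "4 sigma" to "one chance in fifteen thousand" is a standard Gaussian-statistics conversion, not something spelled out in the article. For a normal distribution, the two-sided probability of a fluctuation of at least n sigma is erfc(n/√2):

```python
import math

# Two-sided p-value for an n-sigma deviation of a Gaussian variable:
# p = erfc(n / sqrt(2))
n_sigma = 4.0
p = math.erfc(n_sigma / math.sqrt(2))

print(f"p-value     = {p:.2e}")      # ~6.3e-5
print(f"1 chance in = {1/p:,.0f}")   # ~15,800
```

So 4 sigma corresponds to odds of roughly 1 in 15,800 of getting such a deviation by pure chance, consistent with the article's "one chance in fifteen thousand" (note that this quantifies only statistical fluctuation, not possible systematic errors in the telescopes or analysis).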

Wim Ubachs, a spectroscopist from the University of Amsterdam (Netherlands), called the work of Australian physicists "news of the year in physics" and added that it gives "a new twist to the problem."

The fine structure constant and other fundamental parameters are determined by the masses and energies of elementary particles, including those that make up dark matter. If these constants change, the relative abundances of normal matter, dark matter, and dark energy could differ in different parts of the Universe. This could show up as additional anisotropy in the cosmic microwave background, or as an asymmetry in the expansion rate of the Universe.

The most intriguing aspect of this discovery relates to the so-called anthropic principle, which states: "We see the Universe as it is because only in such a Universe could an observer, a human being, have arisen." That is, it follows from the anthropic principle that the fundamental constants have values that allow matter and energy to exist in the form of stars, planets, and our own bodies. If α changes over time and space, it may be that we owe our existence to a special place and a special time in the Universe.