A new theory of the origin of the universe has been proposed.

If it is correct, new elementary particles may never be detected. The alternative scenario also offers a solution to the mass hierarchy problem. The study is published on arXiv.org.



The theory is called Nnaturalness. It operates at energy scales of the order of the electroweak scale, reached after the separation of the electromagnetic and weak interactions: roughly 10⁻³² to 10⁻¹² seconds after the Big Bang. At that time, according to the authors of the new concept, the Universe contained a hypothetical elementary particle, the reheaton, whose decay gave rise to the physics observed today.

As the Universe grew colder (the temperature of matter and radiation fell) and flatter (the geometry of space approached Euclidean), the reheaton decayed into many other particles. These formed groups of particles that barely interact with one another and are nearly identical in composition, but differ in the mass of their Higgs boson and hence in the masses of their own particles.

The number of such particle groups that, according to the scientists, exist in the modern Universe reaches several thousand trillion. One of these families includes the physics described by the Standard Model (SM), with the particles and interactions observed in experiments at the LHC. The new theory makes it possible to abandon supersymmetry, which has so far been sought without success, and solves the hierarchy problem.

In particular, if the mass of the Higgs boson formed in the decay of the reheaton is small, the masses of the remaining particles will be large, and vice versa. This is what solves the electroweak hierarchy problem: the large gap between the experimentally observed masses of elementary particles and the energy scales of the early Universe. For example, the question of why the electron, with a mass of about 0.5 megaelectronvolts, is roughly 200 times lighter than the muon, which carries the same quantum numbers, dissolves by itself: the Universe also contains otherwise identical sets of particles in which this difference is far less pronounced.

According to the new theory, the Higgs boson observed in experiments at the LHC is the lightest particle of this type formed in the decay of the reheaton. Other, yet undiscovered groups of particles are associated with heavier Higgs bosons and contain analogues of the well-studied leptons (which do not participate in the strong interaction) and hadrons (which do).




The new theory does not rule out supersymmetry, but it makes introducing it far less necessary. Supersymmetry implies at least doubling the number of known elementary particles through superpartners: a photino for the photon, a squark for each quark, a higgsino for the Higgs, and so on. The spin of a superpartner must differ from the spin of the original particle by one half.

Mathematically, a particle and its superpartner are combined into a single system (a supermultiplet); under exact supersymmetry, all quantum numbers and the masses of particles and their partners coincide. Supersymmetry is believed to be broken in nature, so the masses of superpartners would significantly exceed those of the corresponding particles. Detecting supersymmetric particles therefore requires powerful accelerators like the LHC.

If supersymmetry or any new particles or interactions exist, the authors of the new study believe they would appear at scales of around ten teraelectronvolts. This is almost at the limit of the LHC's capabilities, and if the proposed theory is correct, the discovery of new particles there is extremely unlikely.




A signal near 750 gigaelectronvolts, which could have indicated the decay of a heavy particle into two gamma-ray photons and which the CMS (Compact Muon Solenoid) and ATLAS (A Toroidal LHC ApparatuS) collaborations at the LHC reported in December 2015 and March 2016, has been recognized as statistical noise. Since 2012, when the discovery of the Higgs boson at CERN was announced, no new fundamental particles predicted by extensions of the SM have been identified.

Nima Arkani-Hamed, the American-Canadian physicist of Iranian origin who proposed the new theory, received the Fundamental Physics Prize in 2012. The award was established that same year by Russian businessman Yuri Milner.

The emergence of theories that remove the need for supersymmetry was therefore to be expected. “There are many theorists, myself included, who believe that this is a completely unique time, when we are addressing important and structural questions rather than the details of the next elementary particle,” said the lead author of the new study, a physicist based in Princeton (USA).

Not everyone shares his optimism. Physicist Matt Strassler of Harvard University, for instance, considers the mathematical justification of the new theory far-fetched. Paddy Fox of the Fermi National Accelerator Laboratory in Batavia (USA), meanwhile, believes the new theory will be tested within the next ten years: in his view, particles grouped with any heavy Higgs boson should leave traces in the CMB, the relic microwave radiation predicted by the Big Bang theory.

Looking at a work of art, a beautiful landscape or a child, a person always feels the harmony of being.

In scientific terms, this feeling that everything in the universe is harmonious and interconnected is called non-local coherence. According to Ervin Laszlo, in order to explain the presence of a significant number of particles in the Universe, and the continuous but by no means uniform and linear evolution of everything that exists, we must recognize the presence of a factor that is neither matter nor energy.

The importance of this factor is now recognized not only in the social sciences and the humanities, but also in physics and the natural sciences. This factor is information: information as a real and effective agent that sets the parameters of the Universe at its birth and subsequently governs the evolution of its basic elements into complex systems.

And now, drawing on the findings of the new cosmology, we have finally come close to realizing the dream of every scientist: the creation of a holistic theory of everything.

Creating a holistic theory of everything

In the first chapter we will discuss the problem of creating a theory of everything. A theory that deserves the name must truly be a theory of everything: a holistic theory of everything we observe, experience, and encounter, whether physical objects, living beings, social and ecological phenomena, or the creations of mind and consciousness. Such a holistic theory of everything can be created, as this and subsequent chapters will show.

There are many ways to comprehend the world: through our own ideas, mystical intuition, art and poetry, and through the belief systems of the world's religions. Of the many paths available to us, one deserves special attention, for it is based on reproducible experience, follows a rigorous methodology, and is open to criticism and reassessment. This is the way of science.

Science matters. It matters not only because it is a source of new technologies that change our lives and the world around us, but also because it gives us a reliable view of the world and of us in this world.

But the view of the world through the prism of modern science is ambiguous. Until recently, science painted a fragmented image of the world, composed of seemingly independent disciplines. Scientists find it hard to say what connects the physical Universe and the living world, the living world and the world of society, the world of society and the spheres of mind and consciousness. Now the situation is changing: at the forefront of science, more and more researchers are striving for a more holistic, unified picture of the world. This is especially true of physicists working on unified theories and grand unified theories, which link the fundamental fields and forces of nature in a coherent theoretical framework, suggesting that they have a common origin.

A particularly promising trend has emerged in recent years in quantum physics: the attempt to create a theory of everything. This project rests on string and superstring theories (so called because they treat elementary particles as vibrating filaments or strings). The theories of everything being developed use complex mathematics and multidimensional spaces to arrive at one master equation that could explain all the laws of the universe.

Physical theories of everything

The theories of everything currently being developed by theoretical physicists aim to achieve what Einstein once called "reading the mind of God." He said that if we could combine all the laws of physical nature and create a coherent system of equations, we would be able to explain all the characteristics of the universe on the basis of these equations, which would be tantamount to reading the mind of God.

Einstein made his own attempt of this kind in the form of a unified field theory. Although he continued his efforts until his death in 1955, he did not discover a simple and powerful equation that could explain all physical phenomena in a logical and coherent way.

Einstein pursued his goal by treating all physical phenomena as the result of the interaction of fields. We now know that he failed because he did not take into account the fields and forces that operate at the microphysical level of reality. These forces (the weak and strong nuclear interactions) occupy a central position in quantum mechanics, but not in the theory of relativity.

Today, most theoretical physicists take a different approach: they consider the quantum, a discrete aspect of physical reality, to be the elementary unit. But the physical nature of quanta has been revised: they are considered not separate matter-energy particles, but vibrating one-dimensional threads - strings and superstrings. Physicists are trying to represent all the laws of physics as the vibration of superstrings in a multidimensional space. They see each particle as a string that creates its own "music" along with all other particles. On a cosmic level, entire stars and galaxies vibrate together, as well as entire universes. The task of physicists is to create an equation that will show how one vibration relates to another so that they can all be expressed in one super equation. This equation would decipher the music, which embodies the most boundless and fundamental harmony of the cosmos.

At the time of this writing, string-theory-based theories of everything are still ambitious ideas: no one has yet produced a super-equation that expresses the harmony of the physical universe in a formula as simple as Einstein's E = mc². In fact, there are so many problems in this area that more and more physicists suspect a new concept will be required to make progress. String theory's equations require many dimensions; four-dimensional space-time is not enough.

The theory originally required 12 dimensions to link all the vibrations into a single theory, but it is now believed that "only" 10 or 11 dimensions suffice, provided that the vibrations occur in a higher-dimensional "hyperspace". Moreover, string theory requires the existence of space and time for its strings, but cannot show how time and space could have come about. Finally, it is troubling that the theory has so many possible solutions, about 10⁵⁰⁰, that it becomes entirely unclear why our Universe is the way it is (each solution leads to a different universe).

Physicists seeking to save string theory have put forward various hypotheses: perhaps all possible universes coexist, although we live in only one of them; or perhaps our universe has many facets, of which we perceive only the one familiar to us. These are some of the hypotheses advanced by theoretical physicists trying to show that string theories bear some relation to reality. None of them is satisfactory, and some critics, including Peter Woit and Lee Smolin, are ready to bury string theory.

Smolin is one of the founders of loop quantum gravity, according to which space is a network of cells connecting all points. The theory explains how space and time came into existence, and it also addresses "action at a distance": the strange "relationship" that underlies the phenomenon known as nonlocality. We will explore this phenomenon in more detail in Chapter 3.

It is not known whether physicists will be able to create a working theory of everything. It is clear, however, that even if their efforts succeed, the result will not in itself be a real theory of everything. At best, physicists will create a physical theory of everything: a theory not of everything, but only of all physical objects. A true theory of everything must include more than the mathematical formulas that express the phenomena studied by this branch of quantum physics. The Universe contains more than vibrating strings and the quantum events associated with them. Life, mind, culture and consciousness are part of the world's reality, and a genuine theory of everything must take them into account as well.

Ken Wilber, author of A Theory of Everything, agrees. He speaks of a "holistic vision" embodied in a true theory of everything. However, he does not offer such a theory himself; he mainly discusses what it could be, describing it in terms of the evolution of culture and consciousness in relation to his own theories. A holistic theory of everything with scientific foundations has yet to be created.

Approaches to a true theory of everything

A true theory of everything can be created. Although it goes beyond the string and superstring theories within which physicists are trying to develop their supertheory, it fits well within the framework of science itself. Indeed, the task of creating a true holistic theory of everything is easier than that of creating a physical theory of everything. As we have seen, physical theories of everything try to reduce the laws of physics to a single formula: all the laws governing the interaction of particles and atoms, stars and galaxies; a multitude of complex entities with complex interactions. It is easier and more reasonable to look for the basic laws and processes that give rise to these entities and their interactions.

Computer modeling of complex structures shows that complexity can be generated, and explained, by basic and relatively simple initial conditions. As John von Neumann's theory of cellular automata showed, it is enough to define the main components of a system and the rules (algorithms) that govern their behavior. (This is the basis of all computer modeling: the developers tell the computer what to do at each stage of the process, and the computer does the rest.) A limited and surprisingly simple set of basic elements, driven by a small number of algorithms, can create seemingly incomprehensible complexity if the process is allowed to unfold over time. A set of rules that carries information for the elements starts a process that orders and organizes them, so that they become able to create increasingly complex structures and relationships.
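The point is easy to demonstrate with a minimal sketch of a one-dimensional cellular automaton in the spirit of von Neumann's idea (the choice of Rule 110 and all names here are ours, purely for illustration): a row of cells, each 0 or 1, evolves by one fixed local rule, and an intricate pattern grows from a single seed cell.

```python
# Elementary cellular automaton: one row of 0/1 cells evolves by a fixed
# local rule. Rule 110 is chosen as an illustration because it is known to
# produce complex, non-repeating patterns from trivial starting conditions.
RULE = 110  # the rule number encodes the next state for each of the 8 neighborhoods


def step(cells):
    """Apply the rule once to the whole row (edges wrap around)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        out.append((RULE >> neighborhood) & 1)  # read that bit of the rule
    return out


def run(width=31, steps=15):
    """Evolve a row containing a single seed cell; return all generations."""
    cells = [0] * width
    cells[width // 2] = 1  # one "on" cell in the middle
    history = [cells]
    for _ in range(steps):
        cells = step(cells)
        history.append(cells)
    return history


if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Running it prints a growing triangular pattern: a handful of components plus one simple rule, iterated over time, is all the "information" the structure needs.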

In trying to create a true holistic theory of everything, we can follow a similar path. We can start with elementary things—things that give rise to other things without being generated by them. Then we must define a simple set of rules that will create something more complex. Basically, we should then be able to explain how every "thing" in the world came into being.

In addition to string and superstring theories, the new physics contains theories and concepts through which this grandiose idea can be realized. Using discoveries in the cutting-edge fields of particle and field theory, we can identify a basis that generates everything without itself being generated by anything. This basis, as we shall see, is a sea of virtual energy known as the quantum vacuum. We can also draw on the set of rules (the laws of nature) that tell us how the basic elements of reality, the particles known as quanta, turn into complex things through interaction with their cosmic basis.

However, we must add a new element to obtain a true holistic theory of everything. The currently known laws by which the objects of the world arise from the quantum vacuum are laws of interaction based on the transfer and transformation of energy. These laws suffice to explain how real objects, in the form of particle-antiparticle pairs, are created in and emerge from the quantum vacuum. But they do not explain why more particles than antiparticles were created in the Big Bang, nor how, over billions of years, the surviving particles combined into ever more complex structures: galaxies and stars, atoms and molecules, and (on suitable planets) macromolecules, cells, organisms, societies, ecological niches and entire biospheres.

To explain the presence of a significant number of particles in the Universe ("matter" as opposed to "antimatter"), and the continuous but by no means uniform and linear evolution of everything that exists, we must recognize the presence of a factor that is neither matter nor energy. The importance of this factor is now recognized not only in the social sciences and the humanities, but also in physics and the natural sciences. This factor is information: information as a real and effective agent that sets the parameters of the Universe at its birth and subsequently governs the evolution of its basic elements into complex systems.

Most of us understand information as data or what is known to a person. The physical and natural sciences are discovering that information goes far beyond the boundaries of the consciousness of an individual person and even all people combined.

Information is an integral aspect of both physical and biological nature. The great physicist David Bohm described information as a process that acts on the recipient, literally "in-forming" it. We will adopt this concept.

Information is not a human product, not something we create when we write, count, speak and communicate. As the sages of antiquity knew long ago, and as modern scientists are learning again, information is present in the world independently of human will and action and is a determining factor in the evolution of everything that fills the real world. The basis for creating a true theory of everything is the recognition that information is a fundamental factor in nature.

About riddles and myths

Driving Forces for the Coming Paradigm Shift in Science

We will begin our search for a true holistic theory of everything by examining the factors that are bringing science closer to a paradigm shift. The key factors are the puzzles that emerge and accumulate in the course of scientific inquiry: anomalies that the current paradigm cannot explain. These push the scientific community to search for new approaches to the anomalous phenomena. Such research efforts (we will call them "scientific myths") contain many ideas, some of which may hold the key concepts that lead scientists to a new paradigm: one that can clear up the puzzles and anomalies and serve as the basis for a true holistic theory of everything.

Leading scientists seek to expand and deepen their understanding of the segment of reality they study. They come to understand more and more about the relevant part or aspect of reality, but they cannot study it directly; they can only grasp it through concepts developed into hypotheses and theories. Concepts, hypotheses and theories are fallible: they can be wrong. Indeed, the hallmark of a genuinely scientific theory, according to the philosopher of science Sir Karl Popper, is falsifiability. Theories are falsified when the predictions derived from them are not confirmed by observation. In that case the observations are anomalous, and the theory in question is either deemed erroneous and rejected, or must be revised.

The refutation of theories is the engine of real scientific progress. When everything works, there may still be progress, but it is incremental: the refinement of an existing theory to fit new observations. Real progress occurs when that is no longer possible. Sooner or later there comes a moment when, instead of trying to patch existing theories, scientists prefer to look for a simpler theory with greater explanatory power. The way is then open for a fundamental renewal of theory: a paradigm shift.

A paradigm shift is triggered by the accumulation of observations that do not fit accepted theories and cannot be made to fit by simple refinements of those theories. The stage is then set for the emergence of a new and more adequate scientific paradigm. The challenge is to find the fundamentally new concepts that will form its basis.

There are strict requirements for a scientific paradigm. A theory based on it must allow scientists to explain everything the previous theory could explain, as well as the anomalous observations. It must unite all the relevant facts in a simpler and at the same time more complete concept. This is exactly what Einstein did at the turn of the 20th century when he stopped looking for the causes of the strange behavior of light within the framework of Newtonian physics and instead created a new concept of physical reality: the theory of relativity. As he himself said, you cannot solve a problem at the same level at which it arose. In an unexpectedly short time, the physics community abandoned the classical physics founded by Newton, and Einstein's revolutionary concept took its place.

In the first decade of the 20th century, science experienced a paradigm shift. Now, in the first decade of the 21st century, puzzles and anomalies are piling up again, and the scientific community faces a paradigm shift as fundamental and revolutionary as the transition from Newton's mechanistic world to Einstein's relativistic universe.

A paradigm shift has been brewing at the cutting edge of science for some time now. Scientific revolutions are not instantaneous processes in which a new theory immediately takes its place. They can be quick, as with Einstein's theory, or more extended in time, like the transition from Darwin's classical theory to the broader biological concepts of post-Darwinism.

Before these incipient revolutions run their course, the sciences in which anomalies have appeared go through a period of instability. Mainstream scientists defend existing theories, while freethinking scientists in cutting-edge fields explore alternatives. The latter put forward new ideas that offer a different view of phenomena familiar to traditional scientists. For a time, alternative concepts, which initially exist as working hypotheses, seem strange, if not fantastic.

They sometimes resemble myths invented by imaginative explorers. But they are not. The "myths" of serious researchers are based on carefully calibrated logic; they combine what is already known about the segment of the world a given discipline explores with what still baffles it. These are not ordinary myths but "scientific myths": elaborate hypotheses that are open to testing and can therefore be confirmed or refuted by observation and experiment.

Studying the anomalies found in observation and experiment, and constructing testable myths that can explain them, are major components of fundamental scientific research. If the anomalies persist despite the best efforts of scientists working within the old paradigm, and if some scientific myth put forward by freethinking scientists offers a simpler and more logical explanation, a critical mass of scientists (mostly young ones) abandons the old paradigm. This is how a paradigm shift begins. The concept that until then had been a myth comes to be regarded as a reliable scientific theory.

The history of science offers countless examples of both successful and failed myths. Confirmed myths (now considered reliable, though never final, scientific theories) include Charles Darwin's suggestion that all living species descend from common ancestors, and Alan Guth and Andrei Linde's hypothesis that the universe underwent a super-rapid "inflation" immediately after its birth in the Big Bang. Failed myths (those that offered less accurate or less adequate explanations of the relevant phenomena) include Hans Driesch's idea that the evolution of life follows a predetermined plan in a purpose-driven process called entelechy, and Einstein's hypothesis of an additional physical factor, the cosmological constant, that prevents the universe from collapsing under gravity. (Interestingly, as we will see, some of these judgments are now being questioned: Guth and Linde's inflation theory may be replaced by the broader concept of a cyclic universe, and Einstein's cosmological constant may not have been a mistake after all...)

Examples of modern scientific myths

Here are three working hypotheses - "scientific myths" - put forward by highly respected scientists. All three, while seemingly incredible, have received some serious attention from the scientific community.

10¹⁰⁰ universes

In 1957, physicist Hugh Everett offered a startling interpretation of the quantum world (one that later became the basis for Michael Crichton's popular novel Timeline). Everett's parallel-universes hypothesis concerns a mysterious discovery of quantum physics: until a particle is observed, measured, or interacted with in any way, it is in a curious state that is a superposition of all its possible states. When the particle is observed, measured, or acted upon, this superposition disappears: the particle is in a single state, like any "ordinary" object. Since the superposition is described by the complex wave function associated with Erwin Schrödinger's name, the disappearance of the superposition is called the collapse of the wave function.

The problem is that it is impossible to tell which of its many possible virtual states the particle will take. The particle's choice seems unpredictable: entirely independent of the conditions that trigger the collapse of the wave function. According to Everett's hypothesis, the indeterminacy of the collapse does not reflect conditions existing in the world. There is no uncertainty here: every virtual state available to the particle is definite; each is simply realized in a world of its own!

Here is how the collapse happens: when a quantum is measured, there are a number of possibilities, each associated with an observer or measuring device. We perceive only one of them, in a seemingly random selection process. But according to Everett the selection is not random, because no selection occurs: all possible states of a quantum are realized every time it is measured or observed; they are simply not all realized in one world. The many possible quantum states are realized in as many universes.

Suppose that when a quantum such as an electron is measured, there is a fifty percent chance that it will go up and an equal chance that it will go down. Then we have not one Universe in which the quantum goes up or down with 50/50 probability, but two parallel ones. In one universe the electron actually goes up; in the other, down. In each of these universes there is also an observer or a measuring instrument. The two outcomes exist simultaneously in the two universes, together with their observers or instruments.

Of course, when a particle's superposition collapses into a single state, there are usually not just two but many possible virtual states the particle can take. There must therefore be a great many universes, perhaps about 10¹⁰⁰, in each of which there are observers and measuring instruments.
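As a toy illustration of how quickly this branch count explodes, the sketch below (our own illustrative functions, not part of any source) does the Everett bookkeeping for the simplest case: each two-outcome measurement doubles the number of branches, so a figure like 10¹⁰⁰ is reached after only a few hundred binary measurements.

```python
import math

def branches(num_binary_measurements: int) -> int:
    """Each two-outcome measurement doubles the number of Everett branches."""
    return 2 ** num_binary_measurements

def measurements_needed(target_branches: float) -> int:
    """Smallest number of binary measurements yielding at least the target count."""
    return math.ceil(math.log2(target_branches))

print(branches(10))                # 10 measurements already give 1024 branches
print(measurements_needed(1e100))  # doublings needed to reach ~10^100 branches
```

Exponential doubling means the count is insensitive to the details: whether the relevant number of universes is 10¹⁰⁰ or 10⁵⁰⁰, only a few hundred more doublings separate the two.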

Universe created by the observer

If there are 10¹⁰⁰ or even 10⁵⁰⁰ universes (even though in most of them life could never have arisen), how is it that we live in a Universe where complex forms of life exist? Could this be mere coincidence? Many scientific myths address this question, including the anthropic cosmological principle, which holds that our very observation of this universe is tied to such a happy coincidence. Recently Stephen Hawking of Cambridge and Thomas Hertog of CERN (the European Organization for Nuclear Research) proposed a mathematical answer. According to their theory of the observer-created universe, separate universes do not branch off in time and exist on their own (as string theory suggests); rather, all possible universes exist simultaneously in a state of superposition. Our existence in this universe selects the path that leads to precisely such a universe from among all the paths leading to all the other universes; all the other paths are excluded. In this theory, then, the causal chain of events is reversed: the present determines the past. This would not be possible if the universe had a definite initial state, for from a definite state a definite history would follow. But, Hawking and Hertog argue, the universe has no definite initial state, no starting point: such a boundary simply does not exist.

Holographic Universe

This scientific myth claims that the universe is a hologram (or at least can be treated as one). (In a hologram, which we will discuss in more detail a little later, a two-dimensional pattern creates a picture in three dimensions.) The idea is that all the information that makes up the Universe resides on its periphery, a two-dimensional surface. That two-dimensional information gives rise to the universe's interior in three dimensions. We see the universe as three-dimensional, even though what makes it what it is is a two-dimensional field of information. Why has this seemingly absurd idea become a subject of controversy and research?

The problem that the holographic-universe theory resolves belongs to thermodynamics. According to its firmly established second law, the level of disorder in a closed system can never decrease. This means that disorder can never decrease in the universe as a whole, because the cosmos taken in its entirety is a closed system (it has no outside, so nothing can enter or leave it). That disorder cannot decrease means that order, which can be represented as information, cannot increase. According to quantum theory, the information that creates or maintains order must be conserved; it can become neither more nor less.

But what happens to information when matter disappears into black holes? It might seem that black holes destroy the information contained in matter. That, however, contradicts quantum theory. To resolve this puzzle, Stephen Hawking, together with Jacob Bekenstein, then at Princeton University, deduced that the disorder (entropy) of a black hole is proportional to its surface area. There is far more room for order and information inside a black hole than on its surface: one cubic centimeter contains about 10⁹⁹ Planck volumes, while its surface has room for only about 10⁶⁶ bits of information (a Planck volume is an almost incomprehensibly small region bounded by sides of 10⁻³⁵ meters). Leonard Susskind of Stanford and Gerard 't Hooft of Utrecht University proposed that the information inside a black hole is not lost but stored holographically on its surface.
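Those orders of magnitude can be checked with a back-of-the-envelope calculation. The sketch below is ours: it uses the standard Bekenstein-Hawking form of the bound (one bit per 4·l_P²·ln 2 of area; exact prefactors are convention-dependent) to count Planck volumes inside a cubic centimeter and holographic bits on its surface.

```python
import math

PLANCK_LENGTH = 1.616e-35  # meters (CODATA value, rounded)

def planck_volumes(volume_m3: float) -> float:
    """How many Planck volumes fit inside a given volume."""
    return volume_m3 / PLANCK_LENGTH ** 3

def holographic_bits(area_m2: float) -> float:
    """Bekenstein-Hawking bound: one bit per 4 * l_P^2 * ln(2) of surface area."""
    return area_m2 / (4 * PLANCK_LENGTH ** 2 * math.log(2))

# A cube of one cubic centimeter: volume 1e-6 m^3, surface 6 cm^2 = 6e-4 m^2.
volume, area = 1e-6, 6e-4

print(f"Planck volumes inside: about 10^{math.log10(planck_volumes(volume)):.1f}")
print(f"Bits on the surface:   about 10^{math.log10(holographic_bits(area)):.1f}")
```

Both results land within an order of magnitude of the figures quoted in the text (~10⁹⁹ volumes, ~10⁶⁶ bits), which is the point of the comparison: a region's surface can encode vastly less than its volumetric "capacity" suggests.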

Mathematics found an unexpected use for holograms in 1998, when Juan Maldacena, then at Harvard University, was trying to apply string theory to quantum gravity. Maldacena found that strings are easier to work with in five dimensions than in four. (We perceive space in three dimensions: two along the surface and one vertical. A fourth spatial dimension would be perpendicular to these three, but it cannot be perceived. Mathematicians can add any number of dimensions, moving further and further away from the perceived world.) The solution seemed obvious: suppose that the five-dimensional space inside a black hole is actually a hologram of the four-dimensional space on its surface. Then relatively easy calculations can be made in five dimensions while working with the four-dimensional space.

Is the method of reducing the number of dimensions suitable for the Universe as a whole? As we have seen, string theorists struggle with many extra dimensions, finding that three-dimensional space is not enough to accomplish their task: to tie the vibrations of the various strings in the universe into a single equation. The holographic principle could help, as the universe could be thought of as a multi-dimensional hologram stored in fewer dimensions at its periphery.

The holographic principle could make string theory easier to calculate, but it carries fantastic assumptions about the nature of the world. Even Gerard 't Hooft, one of the founders of the principle, no longer considers it indisputable: in this context, he has said, holography is not a principle but a problem. Perhaps, he suggested, quantum gravity could be derived from a more fundamental principle that does not obey the laws of quantum mechanics.

In times of scientific revolution, when the existing paradigm is under pressure, new scientific myths are put forward, but not all of them are confirmed. Theorists have become entrenched in the belief that, as Galileo said, "the book of nature is written in the language of mathematics," and have forgotten that not everything expressible in the language of mathematics exists in the book of nature. As a result, many mathematically elegant myths remain just myths. Others, however, carry the seeds of significant scientific progress. At first, no one knows for sure which of the seeds will germinate and bear fruit. The field seethes in a state of creative chaos.

This is the state of affairs today in many scientific disciplines. Anomalous phenomena are multiplying in physical cosmology, quantum physics, evolutionary and quantum biology, and the new field of consciousness research. They create ever more uncertainty and force open-minded scientists to push the boundaries of accepted theories. While conservative researchers insist that only ideas published in well-known scientific journals and reproduced in textbooks can be considered scientific, cutting-edge researchers are looking for fundamentally new concepts, including those considered outside the scope of their disciplines just a few years ago.

More and more scientific disciplines describe the world in increasingly incredible ways. Cosmology has added dark matter, dark energy and multidimensional spaces to it; quantum physics, particles instantly connected across space-time at deeper levels of reality; biology, living matter that displays quantum wholeness; and consciousness studies, transpersonal connections independent of space and time. These are just a few of the once-incredible ideas that are now considered full-fledged, confirmed scientific theories.

To form a correct concept of the nature of our vacuum medium, of the origin of the substance of the matrix vacuum medium, and of the nature of gravity in the vacuum medium, we must dwell, in relative detail, on the evolution of our Universe. Part of what is described in this chapter has been published in scientific and popular journals; that material has been systematized here, and what is not yet known to science is filled in from the standpoint of this theory. Our Universe is currently in an expansion phase. This theory accepts only an expanding and contracting, i.e. non-stationary, Universe. A Universe that only ever expands, or that is stationary, is rejected in this theory, for such a Universe excludes any development and leads to stagnation, i.e. to a solitary, one-of-a-kind universe.

Naturally, a question may arise: why describe the evolution of the Einstein-Friedmann Universe in this theory? Because it describes a probable model of a particle of first-kind media of different levels, giving a logical interpretation of the processes of their origin, of their cycle of existence in space and time, and of the regularities of their volumes and masses for each medium of the corresponding level. Particles of first-kind media have variable volumes, i.e. they pass through a cycle of expansion and contraction in time. But the first-kind media themselves are eternal in time and infinite in volume, nested within one another, creating the structure of eternally moving matter, eternal in time and infinite in volume. Hence the need to describe the evolution of our Universe from the so-called "Big Bang" to the present. In describing the evolution of the Universe, we will use what is currently known to science and hypothetically continue its development in space and time until it is fully compressed, i.e. until the next big bang.

This theory assumes that our Universe is not the only one in nature, but is a particle of a medium of another level, i.e. of a first-kind medium, which is likewise eternal in time and infinite in volume. According to the latest astrophysical data, our Universe has passed through fifteen billion years of development. Yet some scientists still doubt that the Universe is expanding; others believe it is not expanding and that there was no "Big Bang"; still others believe that the Universe neither expands nor contracts but has always been constant and unique in nature. It is therefore necessary to prove indirectly in this theory that the "Big Bang" in all likelihood occurred, that the Universe is currently expanding and will later contract, and that it is not the only one in nature. At present the Universe continues to expand with acceleration. After the "Big Bang", the newly arisen elementary matter of the matrix vacuum medium acquired an initial expansion speed comparable to the speed of light, i.e. equal to 1/9 of the speed of light, 33,333 km/s.

Fig. 9.1. The Universe in the phase of quasar formation: 1 – matrix vacuum medium; 2 – medium of elementary particles of matter; 3 – singular point; 4 – quasars; 5 – direction of the scattering of the matter of the Universe

At present, scientists using radio telescopes have managed to penetrate the depths of the universe to fifteen billion light-years. And it is interesting to note that the deeper we look into the abyss of the Universe, the greater the speed of the receding matter. Scientists have seen objects of gigantic size whose receding speed was comparable to the speed of light. What is this phenomenon? How is it to be understood? In all likelihood, scientists saw the Universe's yesterday, that is, the days of the young Universe. And these giant objects, the so-called quasars, were young galaxies at the initial stage of their development (Fig. 9.1). Scientists saw the time when the substance of the matrix vacuum arose in the Universe in the form of elementary particles of matter. All this suggests that the so-called "Big Bang" in all likelihood occurred.

In order to hypothetically continue the description of the development of our Universe, we must look at what surrounds us at the present time. Our Sun with its planets is an ordinary star, located in one of the spiral arms of our Galaxy, on its outskirts. There are many galaxies like ours in the Universe. This does not mean an infinite number of them, since our Universe is a particle of a medium of another level. The forms and types of the Galaxies that fill our Universe are very diverse. This diversity depends on many causes acting at the moment of their origin, at the early stage of their development; the main ones are the initial masses and torques acquired by these objects. With the appearance of the elementary substance of the matrix vacuum medium, and owing to its non-uniform density in the volume it occupies, numerous centers of gravity arise in the stressed vacuum medium. Toward these centers of gravity the vacuum medium pulls elementary matter, and primordial giant objects, the so-called quasars, begin to form.

Thus, the emergence of quasars is a natural phenomenon in nature. How, then, over the 15 billion years of its development, did the Universe acquire from the primordial quasars such a variety of forms and motions? The primordial quasars, which naturally arose from the non-uniformity of the matrix vacuum medium, began to be gradually compressed by this medium, and as they were compressed, their volumes decreased. With decreasing volume, the density of the elementary substance increases and the temperature rises. Conditions arise for the formation of more complex particles from the particles of elementary matter: particles with the mass of an electron are formed, and from these masses neutrons are formed. The volumes and masses of electrons and neutrons are determined by the elasticity of the matrix vacuum medium. The newly formed neutrons acquired a very strong structure. During this period the neutrons are in a state of oscillatory motion.

Under the ever-increasing onslaught of the vacuum medium, the neutron substance of a quasar gradually condenses and heats up, and the radii of quasars gradually decrease. As a result, the speed of rotation of quasars around their imaginary axes increases. But despite the radiation from quasars, which to some extent counteracts the compression, the compression of these objects inexorably intensifies, and the medium of a quasar rapidly approaches its gravitational radius. According to the theory of gravitation, the gravitational radius is the radius of the sphere on which the gravitational force created by the mass of matter lying inside this sphere tends to infinity. This force of gravity cannot be overcome by any particles, or even by photons. Such objects are often called Schwarzschild spheres or, what is the same thing, "black holes".

In 1916, the German astronomer Karl Schwarzschild found an exact solution of one of Albert Einstein's equations. From this solution the gravitational radius was determined: r_g = 2GM/c², where M is the mass of the substance, G is the gravitational constant, and c is the speed of light. Thus the Schwarzschild sphere entered the scientific world. According to the present theory, the Schwarzschild sphere, or equivalently the "black hole", consists of a medium of neutron matter of ultimate density. Inside this sphere an infinitely large force of gravity, an extremely high density and a high temperature dominate. At present, in certain circles of the scientific world, the opinion still prevails that in nature, besides space, there is also anti-space, and that the so-called "black holes", into which the matter of massive bodies of the Universe is pulled by gravity, are connected with this anti-space.
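As a numerical illustration of the formula just given, the sketch below evaluates r_g = 2GM/c² for a body of one solar mass. The constants are standard reference values, not figures from the text.

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """Gravitational radius r_g = 2GM/c^2 of a body of the given mass."""
    return 2.0 * G * mass_kg / c**2

# For the Sun this comes out to roughly 3 km
print(f"r_g(Sun) = {schwarzschild_radius(M_SUN):.0f} m")
```

Any mass compressed inside this radius would, on this picture, become a Schwarzschild sphere.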

This is a false idealistic trend in science. In nature there is one space, infinite in volume, eternal in time, densely filled with eternally moving matter. It is now necessary to recall the moment of the emergence of quasars and the most important properties they acquired, i.e. their initial masses and torques. The masses of these objects did their work and drove the neutron matter of the quasar into the Schwarzschild sphere. Quasars that for some reason acquired no torques, or insufficient torques, temporarily stopped their development after entering the Schwarzschild sphere. They turned into the hidden substance of the Universe, i.e. into black holes, which cannot be detected with conventional instruments. But those objects that managed to acquire sufficient torques continue their development in space and time.

As they evolve in time, quasars are compressed by the vacuum medium. Under this compression the volumes of these objects decrease, but their torques do not, and as a result the speed of rotation around their imaginary axes increases. The compression continues unceasingly, and the circumferential speed of these objects grows progressively. A critical moment arrives when, under the action of an unimaginably large centrifugal force, the quasar explodes. Neutron matter is ejected from the sphere of the quasar in the form of jets, which later turn into the spiral arms of a Galaxy. This is what we currently observe in most visible Galaxies (Fig. 9.2).

Fig. 9.2. Expanding Universe: 1 – infinite medium of matrix vacuum; 2 – quasars; 3 – galactic formations

To date, in the course of the development of the neutron matter ejected from the core of the Galaxy, star clusters, individual stars, planetary systems, nebulae and other types of matter have formed. In the Universe, most of the matter resides in the so-called "black holes". These objects cannot be detected with conventional instruments and are invisible to us, but scientists detect them indirectly. The neutron matter ejected by centrifugal force from the nucleus of the Galaxy is not able to overcome the gravity of that nucleus; it remains its satellite, dispersed in numerous orbits, continuing its development while rotating around the nucleus of the Galaxy. Thus new formations appeared: Galaxies. Figuratively speaking, they can be called the atoms of the Universe, similar both to planetary systems and to atoms of matter with chemical properties.

Now let us mentally, hypothetically follow the development of the neutron matter that was ejected from the nucleus of the Galaxy by centrifugal force in the form of jets. This ejected neutron material was very dense and very hot. Freed by the ejection from the monstrous internal pressure and the oppression of infinitely strong gravity, it began to expand and cool rapidly. In the process of ejection from the nucleus of the Galaxy, most of the neutrons, in addition to their runaway motion, also acquired rotational motion around their imaginary axes, i.e. spin. Naturally, this new form of motion acquired by the neutron began to give rise to a new form of matter, i.e. a substance with chemical properties in the form of atoms, from hydrogen to the heaviest elements of Mendeleev's periodic table.

After the processes of expansion and cooling, huge volumes of highly rarefied and cold gas and dust nebulae were formed. Then the reverse process began: the contraction of the substance with chemical properties toward numerous centers of gravity. At the moment its runaway ended, the matter with chemical properties found itself in highly rarefied and cold gas and dust nebulae of unimaginably large volumes. Numerous centers of gravity arose, just as they had for the particles of the elementary matter of the matrix vacuum medium. In the course of development in space and time, constellations, individual stars, planetary systems and other objects of the Galaxy were formed from the matter contracted toward these centers of gravity. The emerging stars and other objects of the Galaxy differed greatly in mass, chemical composition and temperature. Stars that absorbed large masses developed rapidly; stars like our Sun have longer development times.

Other objects of the Galaxy, which did not gain the appropriate amount of matter, develop even more slowly. Objects such as our Earth, likewise not having gained the appropriate mass, could in their development only warm up and melt, keeping heat only in the planet's interior. But in return such objects created optimal conditions for the emergence and development of a new form of matter: living matter. Still other objects, like our eternal companion the Moon, have not even reached the warming-up stage in their development. According to the approximate estimates of astronomers and physicists, our Sun arose about four billion years ago; consequently, the ejection of neutron matter from the core of the Galaxy occurred much earlier. During this time, processes took place in the spiral arms of the Galaxy that brought it to its present form.

In stars that have absorbed tens of solar masses or more, the development process proceeds very quickly. In such objects, owing to their large masses and high gravity, the conditions for the onset of thermonuclear reactions arise much earlier, and the resulting thermonuclear reactions proceed intensively. But as the light hydrogen in the star, converted into helium by the thermonuclear reaction, diminishes, the intensity of the reaction decreases, and with the complete disappearance of hydrogen it stops altogether. The star's radiation then drops sharply and ceases to balance the gravitational forces that tend to compress this large star.

After that, gravitational forces compress such a star into a white dwarf with a very high temperature and a high density of matter. Later, having spent the energy of the decay of heavy elements, the white dwarf, under the onslaught of ever-increasing gravitational forces, enters the Schwarzschild sphere. Thus a substance with chemical properties turns into neutron substance, i.e. into the hidden matter of the Universe, and its further development temporarily stops; it will resume toward the end of the expansion of the Universe. The processes that take place inside stars such as our Sun begin with the gradual compression, by the matrix vacuum medium, of a cold, highly rarefied medium of gas and dust. As a result, pressure and temperature inside the object increase. Since the compression proceeds continuously and with increasing force, conditions for thermonuclear reactions gradually arise inside the object. The energy released by this reaction begins to balance the forces of gravity, and the compression of the object stops. This reaction releases an enormous amount of energy.

It should be noted, however, that not all of the energy released in the object by the thermonuclear reaction goes into radiation into space. A significant part of it goes into building up the light elements, from iron atoms to the heaviest elements, since this build-up requires a large amount of energy. Afterwards the vacuum medium, i.e. gravity, rapidly compresses the star into a white or red dwarf. Nuclear reactions then begin inside the star, i.e. decay reactions of the heavy elements down to iron atoms. When no source of energy remains in the star, it turns into an iron star: it gradually cools, loses its luminosity, and in the future will be a dark and cold star. Its further development in space and time will depend entirely on the development of the Universe in space and time. Lacking the necessary mass, an iron star will not enter the Schwarzschild sphere. The changes in the expanding matter of the Universe that occurred after the so-called "Big Bang" have now been described in this theory up to the present moment. But the substance of the Universe continues to scatter.

The speed of the escaping matter increases with every second, and the changes in matter continue. From the point of view of dialectical materialism, matter and its motion are not created and cannot be destroyed. In the micro- and mega-worlds, matter has an absolute speed, which is equal to the speed of light; for this reason, no material body in our vacuum medium can move faster than this speed. But a material body has not just one form of motion; it may combine several, for example translational, rotational, oscillatory, intra-atomic and a number of others. A material body therefore has a total speed, and this total speed also must not exceed the absolute speed.

From this we can infer the changes that should occur in the expanding matter of the Universe. If the speed of the receding matter of the Universe increases with each second, then the intra-atomic speed of motion increases in direct proportion, i.e. the speed of the electron around the nucleus of the atom increases. The spins of the proton and electron also increase, as does the speed of rotation of those material objects that have torques: the nuclei of Galaxies, stars, planets, "black holes" of neutron matter and other objects of the Universe. Let us describe, from the point of view of this theory, the decay of substance with chemical properties. This decay proceeds in stages. As the speed of the expanding matter of the Universe changes, the circumferential speeds of objects possessing torques increase. Under the increased centrifugal force, stars, planets and other objects of the Universe break up into atoms.

The volume of the Universe is filled with a kind of gas consisting of various atoms moving randomly within it. The decay of substance with chemical properties continues. The spins of protons and electrons increase, and for this reason the repulsive moments between them increase. The vacuum medium ceases to balance these repulsive moments, and the atoms decay, i.e. electrons leave the atoms. From the substance with chemical properties a plasma arises: protons and electrons move about randomly and separately in the volume of the Universe. After the decay of substance with chemical properties, the continuing increase in the speed of the expanding matter of the Universe causes the nuclei of Galaxies, "black holes", neutrons, protons and electrons to break down in their turn into particles of the elementary matter of the vacuum medium. Even before the end of the expansion, the volume of the Universe is filled with a kind of gas of elementary particles of the substance of the vacuum medium. These particles move randomly in the volume of the Universe, and their speed increases every second. Thus, even before the end of the expansion, nothing will remain in the Universe except this peculiar gas (Fig. 9.3).

Fig. 9.3. Maximally expanded Universe: 1 – matrix vacuum medium; 2 – sphere of the maximally expanded Universe; 3 – singular point of the Universe, the moment of birth of the young Universe; 4 – gaseous medium of elementary particles of the substance of the matrix vacuum medium

In the end, the substance of the Universe, i.e. this peculiar gas, will stop for a moment; then, under the pressure of the response of the matrix vacuum medium, it will rapidly begin to pick up speed in the opposite direction, toward the center of gravity of the Universe (Fig. 9.4).

Fig. 9.4. The Universe in the initial phase of contraction: 1 – matrix vacuum medium; 2 – matter of elementary particles falling toward the center; 3 – influence of the matrix vacuum medium of the Universe; 4 – directions of fall of elementary particles of matter; 5 – expanding singular volume

The process of compression of the Universe and the process of decay of its substance in this theory are combined into one concept - the concept of the gravitational collapse of the Universe. Gravitational collapse is a catastrophically fast compression of massive bodies under the influence of gravitational forces. Let us describe the process of the gravitational collapse of the Universe in more detail.

Gravitational collapse of the universe

Modern science defines gravitational collapse as a catastrophically rapid compression of massive bodies under the influence of gravitational forces. A question may arise: why must this process be described in this theory? The same question arose at the beginning of the description of the evolution of the Einstein-Friedmann, i.e. non-stationary, Universe. If the first description proposed a probable model of a particle of first-kind media of different levels (according to this theory, our Universe is defined as a particle of a first-level medium and is a very massive body), then the second description, i.e. the mechanism of the gravitational collapse of the Universe, is likewise necessary for a correct concept of the end of the Universe's cycle of existence in space and time.

Briefly stated, the essence of the collapse of the Universe is the response of the matrix vacuum medium to its maximally expanded volume. The compression of the Universe by the vacuum medium is the process of restoring the medium's full energy. Further, the gravitational collapse of the Universe is the reverse of the process by which matter arises in the matrix vacuum medium, i.e. the matter of the new young Universe. Earlier, the changes in the matter of the Universe caused by the increase in the speed of its receding matter were described: owing to this increase in speed, the matter of the Universe disintegrates into elementary particles of the vacuum medium. This decay of matter, which existed in different forms and states, occurred long before the compression of the Universe began. While the Universe was still expanding, a peculiar gas uniformly filled its entire expanding volume. This gas consisted of elementary particles of the substance of the matrix vacuum medium, moving randomly in all directions, their speed increasing every second. The resultant of all these chaotic motions is directed toward the periphery of the expanding Universe.

At the moment when the speed of the chaotic motion of the particles of this peculiar gas falls to zero, the entire substance of the Universe, throughout its whole volume, will stop for a moment, and from zero speed will begin to pick up speed rapidly in the opposite direction, i.e. toward the center of gravity of the Universe. At the moment compression begins, matter falls along the radius. Within 1.5-2 seconds of this moment, the disintegration of the particles of elementary matter, i.e. of the matter of the old Universe, begins. As the matter of the old Universe falls throughout the whole volume, collisions of particles falling from diametrically opposite directions are inevitable. According to this theory, these particles of elementary matter contain particles of the matrix vacuum medium in their structure; they move through the vacuum medium at the speed of light, i.e. they carry the maximum quantity of motion. On colliding, these particles generate the initial medium of the singular volume at the center of the contracting Universe, i.e. at the singular point. What is this medium? It is formed from excess particles of the matrix vacuum and ordinary vacuum particles. The excess particles move in this volume at the speed of light relative to its particles. The medium of the singular volume itself expands at the speed of light, and this expansion is directed toward the periphery of the shrinking Universe.

Thus, the process of decay of the matter of the old Universe includes two processes. The first process is the fall of the substance of the old Universe towards the center of gravity with the speed of light. The second process is the expansion of the singular volume, also with the speed of light, towards the falling matter of the old Universe. These processes occur almost at the same time.

Fig. 9.5. A new developing Universe in the space of the expanded singular volume: 1 – matrix vacuum medium; 2 – remains of the matter of elementary particles falling toward the center; 3 – gamma radiation; 4 – singular volume of maximum mass; 5 – radius of the maximally expanded Universe

The end of the fall of the matter of the old Universe into the medium of the singular volume gives rise to the beginning of the emergence of the matter of the new young Universe (Fig. 9.5). The elementary particles of the matrix vacuum medium emerging from the surface of the singular volume scatter chaotically with an initial speed of 1/9 of the speed of light.

The falling matter of the old Universe and the expanding singular volume move toward each other at the speed of light, so the paths of their motion must be equal. On this basis the total radius of the maximally expanded Universe can also be determined: it equals twice the path of the receding, newly arisen substance with an initial receding speed of 1/9 of the speed of light. Herein lies the answer to the question of why a description of the gravitational collapse of the Universe is needed.

Having presented in this theory the process of the emergence and development of our Universe in space and time, we must also describe its parameters. The main ones are the following:

  1. Determine the acceleration of the receding matter of the Universe per second.
  2. Determine the radius of the Universe at the moment of maximum expansion of its matter.
  3. Determine the time, in seconds, of the expansion of the Universe from beginning to end.
  4. Determine the area of the sphere of the expanded mass of the matter of the Universe, in square kilometers.
  5. Determine the number of particles of the matrix vacuum medium that can fit on the area of the maximally expanded mass of the matter of the Universe, and its energy.
  6. Determine the mass of the Universe in tons.
  7. Determine the time remaining until the end of the expansion of the Universe.

We determine the acceleration of the receding matter of the Universe, i.e. the increase in receding speed per second. To solve this problem we will use results previously established by science: Albert Einstein, in the general theory of relativity, determined that the Universe is finite; Friedmann showed that the Universe is currently expanding and will then contract; and science, with the help of radio telescopes, has penetrated fifteen billion light-years into the abyss of the Universe. Based on these data, the questions posed can be answered.

From kinematics it is known that

S = V₀t − at²/2,

where V₀ is the initial receding speed of the matter of the Universe, equal in this theory to one ninth of the speed of light, i.e. 33,333 km/s; S is the distance traveled, equal to the path of light over fifteen billion years, i.e. 141,912·10¹⁸ km (this is the distance covered by the receding matter of the Universe up to the present moment); and t is the time, equal to 15·10⁹ years, i.e. 47,304·10¹³ s.

Determine the acceleration:

a = 2 (SV 0 · t) 2 / t= 2 / 5637296423700 km/s.

Calculate the time required for the full expansion of the Universe. The expansion velocity is

v = V0 - a·t.

At v = 0:

t = V0/a = 29792813202 years.

Until the end of the expansion there remain:

t - 15·10^9 = 14792813202 years.

We determine the length of the path of the expanding matter of the Universe from the beginning of the expansion to its end. In the equation

S = V0·t - a·t^2/2,

the expansion velocity drops to zero at t = V0/a, whence

S = V0^2/(2a) = 15669313319741·10^9 km.
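The figures in this kinematic chain can be reproduced numerically. Below is a minimal sketch under the text's own assumptions (c = 3·10^5 km/s, V0 = c/9, 365-day years); the deceleration a is derived from the stated total expansion time via v = V0 - a·t = 0:

```python
# Sketch of the expansion kinematics as stated in the text (not standard
# cosmology). All input values are taken from the text itself.
C = 3.0e5                        # speed of light used in the text, km/s
V0 = C / 9                       # initial expansion speed, ~33,333 km/s
YEAR = 365 * 24 * 3600           # seconds in a year

t_total_years = 29_792_813_202   # total expansion time stated in the text
a = V0 / (t_total_years * YEAR)  # deceleration implied by v = V0 - a*t = 0
S_max = V0**2 / (2 * a)          # path at full expansion, S = V0^2/(2a)
t_left = t_total_years - 15e9    # time remaining after 15 billion years

print(f"a      = {a:.3e} km/s^2")      # ~3.5e-14 km/s^2
print(f"S_max  = {S_max:.3e} km")      # ~1.57e22 km, cf. 15669313319741*10^9 km
print(f"t_left = {int(t_left)} years") # 14792813202 years
```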

As already mentioned earlier, the moment when the mass of the singular volume ceases to grow coincides with the moment of the end of the compression of the old Universe. That is, the time of existence of the singular volume practically coincides with the time of the dispersion of matter:

S = V0·t.

From the point of view of dialectical materialism, it follows that if an end comes for one natural phenomenon, then this is the beginning of another natural phenomenon. The question naturally arises, how does the scattering of the newly arisen matter of the new young Universe begin?

In this theory the acceleration, i.e. the increase in the speed of the expanding matter of the Universe, has been determined. The time of the maximal, complete expansion of the Universe, i.e. down to zero velocity, has also been determined, and the process of change of the expanding matter of the Universe has been described. Next, the physical process of the decay of the matter of the Universe is proposed.

According to the calculation in this theory, the true radius of the maximally expanded Universe consists of two paths, i.e. the radius of the singular volume and the path of the expanding matter of the Universe (Fig. 5.9).

According to this theory, the substance of the matrix vacuum medium is formed from particles of the vacuum medium. Energy was spent on the formation of this substance. The mass of an electron is one of the forms of matter in the vacuum medium. To determine the parameters of the Universe, it is necessary to determine the smallest mass, i.e. the mass of a particle of the medium of the matrix vacuum.

The mass of an electron is:

M_e = 9.1·10^-31 kg.

In this theory, an electron consists of elementary particles of the substance of the matrix vacuum medium, i.e. of elementary quanta of action:

M_e = h·n.

On this basis it is possible to determine the number of excess particles of the matrix vacuum medium that enter into the structure of the electron mass:

9.1·10^-31 kg = 6.626·10^-34 J·s · n,

where n is the number of excess particles of the matrix vacuum medium included in the structure of the electron mass.

We cancel J·s and kg on the left and right sides of the equation, since an elementary mass of substance represents a quantity of motion:

n = 9.1·10^-31 / 6.626·10^-34 = 1373.

Let us determine the number of particles of the matrix vacuum medium in one gram of mass:

M_e / 1373 = 1 g / k,

where k is the number of particles of the vacuum medium in one gram:

k = 1373 / M_e = 1.5·10^30.

The number of particles of the vacuum medium in the mass of one ton of matter:

m = k·10^6 = 1.5·10^36.

This mass includes 1/9 of impulses of the vacuum medium. The number of elementary impulses in the mass of one ton of matter is:

N = m/9 = 1.7·10^35.
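A short sketch reproducing this counting arithmetic exactly as the text defines it (dimensionally unconventional: the electron mass in kg is divided by the Planck constant in J·s):

```python
# Particle-counting arithmetic as defined in the text.
M_e = 9.1e-31        # electron mass, kg
h = 6.626e-34        # Planck constant, J*s

n = M_e / h          # "particles of the vacuum medium" per electron
k = n / (M_e * 1000) # particles per gram (electron mass in grams: 9.1e-28)
m = k * 1e6          # particles per ton
N = m / 9            # "elementary impulses" per ton

print(round(n))      # 1373
print(f"{k:.2e}")    # 1.51e+30, cf. 1.5*10^30 in the text
print(f"{N:.2e}")    # 1.68e+35, cf. 1.7*10^35 in the text
```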

The volume of an electron is

V_e = 4π·r^3/3 = 91.0·10^-39 cm^3,

where r is the classical electron radius.

Let us determine the volume of a particle of the matrix vacuum medium:

V_m.v. = V_e/(9·1373) = 7.4·10^-42 cm^3,

whence we can find the radius and cross-sectional area of a particle of the matrix vacuum medium:

R_m.v. = (3·V_m.v./4π)^(1/3) = 1.2·10^-14 cm,

S_m.v. = π·R_m.v.^2 = 4.5·10^-38 km^2.
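The chain V_e → V_m.v. → R_m.v. → S_m.v. can be checked numerically. Note that the divisor 9·1373 below is inferred from the text's own numbers (it reproduces 7.4·10^-42 cm^3 from 91.0·10^-39 cm^3); the printed formula at this point is garbled, so this is a sketch, not a definitive reading:

```python
import math

# Particle-geometry chain; input values from the text, divisor inferred.
V_e = 91.0e-39                                # electron volume, cm^3
V_mv = V_e / (9 * 1373)                       # vacuum-medium particle volume, cm^3
R_mv = (3 * V_mv / (4 * math.pi)) ** (1 / 3)  # its radius, cm
S_mv = math.pi * R_mv**2 / 1e10               # cross-section, km^2 (1 km^2 = 1e10 cm^2)

print(f"{V_mv:.2e}")  # 7.36e-42, cf. 7.4*10^-42 in the text
print(f"{R_mv:.2e}")  # 1.21e-14, cf. 1.2*10^-14 in the text
print(f"{S_mv:.2e}")  # 4.58e-38, cf. 4.5*10^-38 in the text
```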

Therefore, in order to determine the amount of energy contained in the immeasurably large volume of the receiver, it is necessary to calculate the surface area of this receiver, i.e. the area of the maximally expanded Universe:

S_sph = 4π·R^2 = 123206365·10^38 km^2.

Let us determine the number of particles of the matrix vacuum medium that can be accommodated on the area of the sphere of the maximally expanded mass of the matter of the Universe. For this, the area S_sph must be divided by the cross-sectional area of a particle of the matrix vacuum medium:

Z_v = S_sph / S_m.v. = 2.7·10^83.

According to this theory, the formation of one elementary particle of the matrix vacuum medium requires the energy of two elementary impulses. The energy of one elementary impulse is spent on the formation of one particle of the elementary substance of the matrix vacuum medium, and the energy of another elementary impulse gives this particle of substance the speed of movement in the vacuum medium, equal to one ninth of the speed of light, i.e. 33,333 km/s.

Therefore, the formation of the entire mass of matter in the Universe requires half the number of particles of the matrix vacuum medium that fill, in one layer, its maximally expanded mass of matter:

K = Z_v / 2 = 1.35·10^83.

To determine one of the main parameters of the Universe, i.e. its mass in tons of the substance of the vacuum medium, it is necessary to divide half the number of its elementary impulses by the number of elementary impulses contained in one ton of the substance of the vacuum medium:

M = K / N = 0.8·10^48 tons.
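The sphere-area and mass chain can be reproduced as follows. The radius R = 2·S_path is an assumption based on the earlier statement that the true radius of the maximally expanded Universe consists of two such paths; all other inputs are the text's own values:

```python
import math

# Sphere-area and mass chain under the text's stated values.
S_path = 15_669_313_319_741e9   # expansion path, km (from the text)
R = 2 * S_path                  # "true radius": two paths, assumed equal
S_sph = 4 * math.pi * R**2      # sphere area, km^2
S_mv = 4.5e-38                  # particle cross-section, km^2 (from the text)
Z = S_sph / S_mv                # particles covering the sphere in one layer
K = Z / 2                       # half of them form the mass of matter
N = 1.7e35                      # elementary impulses per ton (from the text)
M = K / N                       # mass of the Universe, tons

print(f"{S_sph:.3e}")  # 1.234e+46, cf. 123206365*10^38 km^2 in the text
print(f"{Z:.2e}")      # 2.74e+83, cf. 2.7*10^83 in the text
print(f"{M:.2e}")      # 8.07e+47, cf. 0.8*10^48 tons in the text
```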

The number of particles of the vacuum medium that fill, in one layer, the area of the sphere of the maximally expanded mass of the matter of the Universe is, according to the receiver principle adopted in this theory, the number of elementary impulses that form the mass of matter and enter into the structure of the Universe. This number of elementary impulses is the energy of the Universe created by its entire mass of matter. This energy equals the number of elementary impulses of the medium multiplied by the speed of light:

W = Z_v·c = 2.4·10^60 kg·m/s.

After the above, a question may arise: what is the nature of the expansion and contraction of our Universe?

Having determined the basic parameters of the Universe (radius, mass, expansion time and energy), we must pay attention to the following. With its receding matter, i.e. with its energy, the maximally expanded Universe performed work in the vacuum medium: it forced apart the particles of the matrix vacuum medium, compressing them by a volume equal to the volume of the entire substance of the Universe. As a result, the energy allotted by nature was expended on this work. According to the Big Receiver principle adopted in this theory and the natural elasticity of the vacuum medium, the process of expansion of the Universe can be formulated as follows.

At the end of the expansion, the particles of the expanded sphere of the Universe acquire repulsive moments equal to those of the particles of the vacuum medium that surround this sphere. This is the cause of the end of the expansion of the Universe. But the enclosing shell of the vacuum medium is larger in volume than the outer shell of the sphere of the Universe; this axiom requires no proof. In this theory the particles of the matrix vacuum medium have an internal energy equal to 6.626·10^-27 erg·s, or the same quantity of motion. From the inequality of volumes there also arises an inequality in the quantities of motion between the sphere of the Universe and the vacuum medium. The equality of the repulsive moments between the particles of the maximally expanded sphere of the Universe and the particles of the matrix vacuum medium enclosing this sphere stopped the expansion of the Universe. This equality lasts only a moment. Then the substance of the Universe rapidly begins to pick up speed, but in the opposite direction, i.e. towards the centre of gravity of the Universe. The compression of matter is the response of the vacuum medium. According to this theory, the response of the matrix vacuum medium is equal to the absolute speed of light.


We present to you a completely new view of the origin of the Universe, developed by a group of theoretical physicists from the University of Indiana and presented by Nikodem Poplawski, an employee of this university.
Every black hole contains a new universe; ours is no exception, and it too exists inside a black hole. Such a statement may seem strange, but it is precisely this assumption that best explains the birth of the Universe and the course of all the processes we observe today.
The standard Big Bang theory fails to answer many questions. It suggests that the Universe began as a "singularity": an infinitesimal point containing an infinitely high concentration of matter, which then expanded to the state we observe today. The theory of inflation, the super-rapid expansion of space, does answer many questions, such as why concentrated lumps of matter in the early Universe did not immediately merge into large celestial bodies: galaxies and clusters of galaxies. But many questions remain unanswered. For example: what came before the Big Bang? What caused the Big Bang? What is the source of the mysterious dark energy that comes from beyond the boundaries of the Universe?
The theory that our Universe lies entirely inside a black hole provides answers to these and many other questions. It excludes physically impossible features of our Universe, and it relies on two central theories of physics.
First is the general theory of relativity, the modern theory of gravity. It describes the Universe on large scales. Any event in the Universe is treated as a point in four-dimensional space-time. Massive objects such as the Sun distort space-time, creating "curves" in it, much as a bowling ball deforms a suspended canvas. The gravitational dent from the Sun alters the motion of the Earth and of the other planets orbiting it. The attraction of the planets to the Sun appears to us as the force of gravity.
The second theory on which the new concept rests is quantum mechanics, which describes the Universe on the smallest scales, such as those of the atom and other elementary particles.
Currently, physicists are striving to combine quantum mechanics and general relativity into a single theory of "quantum gravity" in order to adequately describe the most important natural phenomena, including the behavior of subatomic particles in black holes.
In the 1960s, an adaptation of general relativity that takes into account the effects of quantum mechanics appeared: the Einstein-Cartan-Sciama-Kibble theory of gravity. It not only provides a new step towards understanding quantum gravity, but also creates an alternative picture of the world. This variation of general relativity includes an important quantum property of matter known as spin.
The smallest particles, such as atoms and electrons, possess spin, or internal angular momentum, similar to a skater spinning on ice. In this picture the spin of particles interacts with space-time and endows it with a property called "torsion", or twisting. To understand torsion, think of space-time not as a two-dimensional canvas but as a flexible one-dimensional rod: bending the rod corresponds to space-time curvature, while twisting it corresponds to torsion. If the rod is thin, you can twist it, but it is hard to see whether it is twisted or not.
Torsion should become noticeable, indeed very significant, at the earliest stage of the origin of the Universe or inside a black hole. Under these extreme conditions the torsion of space-time manifests itself as a repulsive force, in contrast to the attractive gravity produced for nearby objects by the curvature of space-time.
As in the standard version of general relativity, very massive stars end up collapsing into black holes: regions of space from which nothing, not even light, can escape.
Here is the role that torsion can play at the initial moment of the birth of the Universe:
Initially, the gravitational attraction of curved space overcomes torsion's repulsive force, compressing matter into ever smaller regions of space. Eventually the torsion becomes very strong and, instead of letting matter collapse to a point of infinite density, halts the collapse at an extremely large but finite density. Since energy can be converted into mass, the very high gravitational energy of this extremely dense state causes intense particle production, which greatly increases the mass inside the black hole.
The growing number of particles with spin leads to ever higher levels of space-time torsion. The repulsive torsion can then stop the collapse of matter and produce the effect of a "big bounce", like a compressed ball springing back, which sets off the process of an expanding universe. The mass distribution, shape and geometry of the Universe we observe correspond to this scenario.
In turn, the torsion mechanism offers an astonishing scenario: every black hole is capable of producing a new, young universe inside itself.
Thus, our own universe can be inside a black hole located in another universe.
Just as we cannot see what is happening inside a black hole, any observers in the parent universe cannot see what is happening in our world.
The boundary of a black hole is called the "event horizon"; matter can move through it in only one direction, and this one-way passage fixes the direction of the time vector, which we perceive as the forward flow of time.
The arrow of time in our Universe was thus inherited from the parent Universe through the torsion process.
Torsion can also explain the observed imbalance between matter and antimatter in the Universe. Finally, the torsion process may be the source of dark energy, the mysterious form of energy that pervades all of space and increases the expansion rate of the Universe. The torsion geometry produces a "cosmological constant", which is the simplest way to explain the existence of dark energy. Thus the observed accelerating expansion of the Universe may be the strongest evidence for the torsion process.
Torsion thus provides the theoretical basis for a scenario in which a new universe exists inside every black hole. It also serves as a means of resolving several major problems of modern gravitational theory and cosmology, although physicists still have to combine the Einstein-Cartan-Sciama-Kibble theory with quantum mechanics into a full quantum theory of gravity.
Meanwhile, the new understanding of cosmic processes raises other important questions. For example, what do we know about the parent universe and the black hole that contains our own universe? How many layers of parent universes lie above us? How can we check that our universe is inside a black hole?
The last question can potentially be explored: since all stars and black holes rotate, our Universe should have inherited the rotation axis of its parent universe as a "preferred direction".
A recent survey of 15,000 galaxies found that in one hemisphere of the sky the majority are "left-handed", i.e. rotate clockwise, while in the other hemisphere the majority are "right-handed", rotating counterclockwise. This discovery still requires confirmation. In any case, it is now clear that torsion in the geometry of space-time is the right step towards a successful theory of cosmology.

Cosmology can be conditionally divided into three areas.

1. The stationary Universe, with a unified approach in which radiation ages in proportion to ~t^(1/2). Variations of this model solve almost all cosmological problems except one: the relic radiation. The relic, being radiation, also ages; in the distant past its energy must then have been much higher, up to the plasma state of all matter, i.e. the Universe must change over time, which contradicts the very essence of the term "stationarity".

2. The many-faced Universe: the zero starting variant of the total energy. In hyperspace an innumerable set of Universes can form, each with its own physics and its own laws; these are one-time balanced models. Dark energy has cast doubt on the viability of this direction: the zero starting variant of the total energy is violated, which automatically leads to imbalance, and the Universe begins to expand rapidly.

3. The cyclic Universe, which until the 1980s was considered the most promising direction and therefore has a great variety of physical constructions. At present, however, it does not fit cosmological acceleration at all: it has no phase of transition from expansion to contraction.

You are invited to consider a scientific article in which, on the basis of a new approach to the physical essence of the balance of the dynamics of the development of the Universe, one can explain the nature of the origin of dark matter and dark energy, the Pioneer anomaly and the physical meaning of the relationship of large numbers, and, to some extent, take a new look at the anthropic principle.

Abbreviations

BV --- big bang

VYA --- vacuum cell

GK --- gravitational collapse

GZ --- gravitational charge


GP --- gravitational potential

EC --- elementary particle

FV --- physical vacuum

SRT --- special theory of relativity

GR --- general theory of relativity

QED --- quantum electrodynamics

ZSE --- law of conservation of energy

Theory of a unified physical universe (TEPV)

The mathematical apparatus used is purely illustrative.

Before getting into the essence of the TEPV, it is necessary to review modern theoretical and experimental developments on the origin and development of the Universe; it will then be easier to see which questions have not yet been answered. Let us begin with the basic source material: the BV theory with a version of an inflationary beginning.


The inflationary Universe (developed by A. Guth and A. Linde)

Every effect needs a cause; inflation is an effect whose cause is clearly absent. Let us consider the question of interactions purely philosophically. All theories of the Universe agree that at the beginning of the BV all forces were united; there was a single superforce (the theory of supergravity or of superstrings). As the Universe expanded, the forces separated, acquiring their individuality in the form of fundamental constants. Later the Universe went through a whole series of transformations to obtain the source material in the form of ECs and quanta. The question arises: if this principle is one-time (an open model of the Universe), how can the newborn Universe know about the existence of all the forces, if before it there was nothing but the FV? Nature cannot invent itself while creating diversity, which means these forces were stored, embedded somewhere in the FV. Any law of nature is formed and acts in reality; in other words, in order for an interaction to be stored, it must first really exist (act). And this means that before the BV a Universe must have existed which stored the mechanism of the BV and set it in motion, i.e. the Universe is cyclic (a closed model of the Universe). What force or forces, then, govern the cycle of the Universe? Undoubtedly, the key role in this process is played by the balance of the dynamics of the development of the Universe.

What is the essence of balance?

According to Friedman, the Universe can be open or closed. Balance is precisely the line between open and closed: "inflation" created the conditions for equality between the force of the explosion (later, inertia) and the gravity of space. To understand the essence of this equality, let us model an ideal Universe in strict accordance with the BV scenario by introducing ephemeral theoretical EPs. We take as the initial starting conditions of the BV the Planck era, the initial state after inflation; then the EP mass equals the Planck mass M_Planck = 10^-8 kg, the distance between them is L_Planck = 10^-35 m, and the starting expansion speed equals the speed of light. The expansion of the Universe obeyed the following laws (from BV theory). Let n be the number of particles that fit along the line of the diameter of the Universe; then the expansion speed, over the time of signal passage between neighbouring particles (layers), starting from c, falls as V_exp = c/n (where n = 1, 2, etc.), i.e. at the moment of the BV all particles were causally unconnected. Correspondingly, the distances between adjacent layers grow as L_exp = L_Planck·n, and in the same order, according to QED, the EP mass decreases as M_exp = M_Planck/n (we assume that the rest mass of the introduced EP is always equal to M_exp). The size of the Universe follows the arrow of time, R_univ = c·Δt; it is easy to show that R_univ = L_Planck·n^2 and L_exp = sqrt(c·Δt·L_Planck). The Universe began to expand approximately 13.7 billion years ago; then in the modern era L_exp = 10^-4.5 m, i.e. 10^1.5 times less than L_relic, the size of the Universe is R_univ = c·Δt = 10^26 m, and the number of layers is n = sqrt(R_univ/L_Planck) = 10^30.5. So the size of the Universe, starting from L_Planck·n, grew to L_Planck·n^2, and the stretching step, starting from L_Planck, grew to L_Planck·n. The energy, correspondingly, starting from E_Planck = 10^8 J, decreased to E_exp = 10^-22.5 J.
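The layer relations above can be sketched numerically with the stated values L_Planck = 10^-35 m and R_univ = 10^26 m (everything else follows from n = sqrt(R_univ/L_Planck)):

```python
import math

# Layer counting: R_univ = L_Planck * n^2, L_exp = L_Planck * n.
L_planck = 1e-35                  # m (value used in the text)
R_univ = 1e26                     # m, present size of the Universe in the text
n = math.sqrt(R_univ / L_planck)  # number of layers
L_exp = L_planck * n              # present stretching step

print(f"n     = 10^{math.log10(n):.1f}")       # 10^30.5
print(f"L_exp = 10^{math.log10(L_exp):.1f} m") # 10^-4.5 m
```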
Balance means the equality of gravity, g·M_exp^2/L_exp, with the inertia of expansion, M_exp·V_exp^2. Generalizing this condition over the entire arrow of time with M_exp = M_Planck/n, V_exp = c/n and L_exp = L_Planck·n, we get

g·M_Planck^2/(L_Planck·n^3) = M_Planck·c^2/n^3,

i.e. BV theory in its ideal version strictly, but locally, maintains the balance. Note that in building this model of the Universe, because R_univ = L_Planck·n^2 = c·Δt, only one observable parameter is used, Δt = 13.7 billion years; everything else is QED constants. The mass of the Universe is then determined by a simple relationship:

M_univ = M_Planck·Δt/t_Planck = 10^-8 · 10^18 / 10^-43 = 10^53 kg, therefore:

g·M_univ/R_univ = g·M_Planck/L_Planck = c^2.
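An order-of-magnitude sketch of this balance relation, with Δt rounded to 10^18 s as the text does:

```python
# Balance relation: g*M_univ/R_univ ~ g*M_Planck/L_Planck ~ c^2.
g = 6.67e-11      # gravitational constant, m^3/(kg*s^2)
c = 3.0e8         # speed of light, m/s
M_planck = 1e-8   # kg (value used in the text)
t_planck = 1e-43  # s
dt = 1e18         # arrow of time, ~13.7 billion years in seconds (rounded)

M_univ = M_planck * dt / t_planck  # mass of the Universe, kg
phi = g * M_univ / (c * dt)        # gravitational potential at R_univ = c*dt

print(f"M_univ = {M_univ:.0e} kg")          # 1e+53 kg
print(f"phi = {phi:.1e}, c^2 = {c**2:.1e}") # same order of magnitude
```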

And this means that the balance of the dynamics of the development of the Universe, based on the homogeneity and isotropy of space, requires the invariance of the gravitational potential (GP) at all points of space and along the entire arrow of time; the assumption is controversial and requires additional argumentation. Let us consider how the GP is formed at the expansion stage of the Universe, proceeding from the following considerations. The main contribution to the formation of the GP is made by distant masses, since their number grows with distance in proportion to n^2; in addition, the gravitational effect of distant masses obeys the law of cosmological expansion, so the mass of the ephemeral EPs can, with acceptable accuracy, be considered equal to M_exp at any point of space. Then the result of integrating the mass layers over the entire volume is a GP equal to:

Φ(t) = g·M_univ(t)/R_univ(t) = g·M_exp·n^3/(L_Planck·n^2) = g·M_Planck/L_Planck = c^2.

That is, we have proved that if the introduced ephemeral EPs obey QED, then in a balanced Universe the GP is a constant and equals c^2, at least in the expansion phase. Note that a consequence of the GP being equal to the constant c^2 is a constant scale factor R_univ(t) ~ t^(1/2) along the entire arrow of time; such a model of the Universe must be flat. Now let us see what the real Universe gives us, and consider how the GP behaves in terms of the mass of all ECs in the modern era:

Φ(t) = g·M_univ(t)/R_univ(t) = g·M_nuc·n^3/(c·Δt) (where M_nuc is the nucleon mass and n = 10^26.5) = 10^15, i.e. less than c^2.

For the analysis we choose one more period of time, the recombination era, Δt = 10^13 s: Φ(10^13 s) = g·M_nuc·n^3/(c·Δt) (where n = 10^24) = 10^13.

We see that even without taking into account the change of the scale factor, the mass of the Universe plays practically no role in the balance. Let us consider the GP for the cosmic microwave background radiation in the recombination epoch:

Φ(10^13 s) = g·M_rel·n^3/(c·Δt) = 10^17, where M_rel = 10^-35 kg and n = 10^27.

The potential is stable and almost equal to c^2; at the present stage, owing to the change of the scale factor from R_univ(t) ~ t^(1/2) to ~t^(2/3), the relic plays practically no role in the balance. Where does this lead? The theory of the development of the Universe is based on the idea of the strictest balance, but the modern theory of gravity provides no mechanism for maintaining it: with different ratios of matter and radiation we obtain different scenarios of the development of the Universe, and this is already alarming. We still need to figure out what ideal ephemeral ECs correspond to an ideal balanced Universe, and whether they actually exist. The general picture of the development of the Universe says one thing: everything is interconnected, while in some incomprehensible way gravity, globally and locally, is always and everywhere exactly equal to the inertia of expansion. In addition, calculations of the masses of galaxy clusters and gravitational lensing lead to an unambiguous conclusion: the mass of the real Universe must be 4-5 times greater; it is present, but we do not see it. This is the generally recognized real dark matter, dead to all interactions except gravity. Interestingly, when this matter is taken into account, the theoretical and experimental estimates of the average density of matter in the Universe coincide completely and correspond to the balance (critical) density ρ_crit = 10^-29 g/cm^3. Let us analyse this variant of the origin of the Universe and also state the key prerequisites, i.e. the foundation, for the emergence of the TEPV.

Arguments and Facts

Inflation solved the problem of balance, but dragged along a trail of new problems. In effect we have the emergence of the Universe from nothing, and in order not to violate the law of conservation of energy, the concept of the total energy of the Universe being equal to zero is introduced: as the negative energy grows, the positive energy must grow in the same measure, yet in inflation these two processes are separated in time. Is that correct? Further, during the period of inflation the inhomogeneities necessary for the formation of galaxies should be laid down, which is done by "freezing in" vacuum fluctuations. Countless vacuum bubbles can form in the FV, and each has its own Universe with its own physics. Does it make sense to consider a diversity of Universes with their own laws that have no influence on one another? The end result of inflation was to be either the theory of superstrings or the theory of supergravity, i.e. the fundamental constants must somehow be interconnected, must follow from something; this problem has remained open in inflation.

Let us touch more specifically on the problem of causality. The emergence of a causally connected vacuum bubble is a spontaneous process which in the end, entirely causally, breaks up into 10^91.5 causally unconnected regions; is there not a conflict here? Can this conflict be resolved in the following way? Inflation allows the appearance and immediate collapse of unripe vacuum bubbles, but is the complete reverse process possible: for example, the collapse of our Universe, then reverse inflation and, as a result, the collapse of a vacuum bubble? In principle it is not forbidden. Can this event be considered the cause of inflation, i.e. can we, as it were, loop the process? Inflation is an elegant theory, and such an assumption makes it purer and more complete. We finally obtain a closed cyclic system reproducing itself according to the laws of precisely our physics. But here we run into one significant cosmological problem that is not compatible with the version of a cyclic Universe. It turns out that, closer to the modern era, the Universe is slowing down not as prescribed by the Hubble law. To explain this behaviour the concept of dark energy was introduced, whose negative pressure remains unchanged as the Universe expands. Approximately 7 billion years ago the negative pressure became equal to the gravity of space, and it dominates in the modern era: the Universe began to expand with acceleration. Dark energy has no physical explanation and upsets the balance; it practically puts an end to the purity of the theory of inflation; nature has not yet presented us with a discovery more absurd in its harmfulness. The Universe is developing somehow strangely: first it was necessary to introduce dark matter, then dark energy, and at the present stage, having reached its maximum, dark energy does not manifest itself at all on small scales.
Nature demanded the introduction of two completely opposite concepts, separated in time; something is wrong here. The best solution to the problem that has arisen is not to build theories about the nature of the origin of dark matter and dark energy, but simply to get rid of them. The inconsistency of the radiation intensity of supernovae with the spectra of galaxies, and the absence of large clusters of galaxies in the modern era, are perhaps a disguise of "something as something else" that does not require an accelerated expansion of the Universe at all. The mechanism for controlling the cycle of the Universe proposed below yields one interesting consequence directly related to the effects interpreted as dark matter and dark energy. To understand the essence here, it is necessary to follow the staged character of the theory presented, so the version of the cyclic Universe with an inflationary beginning is taken as the starting position for constructing the TEPV.

Gravity

The absence of causality in the emergence of the Universe and in the processes of microworld physics has one common feature from the philosophical point of view. The accuracy of the applied laws is absolute, but their manifestation is probabilistic in character, leading to a scatter of the measured parameters (the uncertainty principle). Putting it very cautiously: the more accurately we try to measure one law (parameter), the greater the scatter of another law (parameter) we obtain. Translating into philosophical language, we state: the cause of the accuracy of one law at a given moment, in a given region, is the inaccuracy of the operation of another law. A kind of "principle of inconsistency"; the uncertainty principle is not denied here, for it is the basis of QED; the point is different: we obtain a real causal connection from chains of causeless events, and perhaps the essence here is completely different. Let us assume that an unmeasurable process lies behind all these scatters, i.e. there is a cause, but it is impossible to detect (measure) it. Such unmeasurable effects are unexpectedly presented to us by Einstein's theory. Let us consider the most important consequences of Einstein's SRT and GR.


Einstein's General Relativity says that gravity is not a force, it is a curvature of space, the body, as it were, automatically chooses the shortest path of movement (the principle of laziness), i.e. the source of gravity (mass) changes the geometry of space. Gravity has no screens, it has a cumulative character, it equally affects both mass and radiation. Let us consider in more detail the assertion of the equivalence of the gravitational field and accelerated mechanical motion, for example, in a rapidly moving closed system, we will feel gravity and it is impossible to prove by any experiments that it was created artificially. Being inside this non-inertial system, we get all the signs of gravity, i.e. accelerated motion creates a gravitational field. And vice versa, gravitation, having created an accelerated motion of an object, removes all the inertial features of the object. It turns out the following picture: the body is moving rapidly in some kind of medium, then the reaction of the medium to this process is the creation of a gravitational field and vice versa, the medium cancels all signs of inertia, while creating movement in the gravitational field. Conclusion, the action of the gravitational field and inertia on space is identical and has a local character. And what place SRT takes in gravity, the principle of relativity says: it is impossible to determine the absoluteness of motion, while it is impossible to deal with the effects of SRT, for example, over time, if it is impossible to determine what is moving. And here the judge in this dispute is acceleration, which accelerates (slows down) and SRT acts on that. But accelerated motion creates a gravitational field. Having ceased to accelerate, we simply moved into a uniform gravitational field with our GP in accordance with the achieved speed. As a matter of fact, SRT is the theory of a homogeneous gravitational field, then the effects of SRT and gravity are indistinguishable. 
The conversation here is not about equivalence but about a unified nature of the effects, i.e. a reaction of the medium. Physically, what is the primary source of all the effects, for example time dilation: the gravitational potential (GP) or the speed? Consider a simple example. Let a body rest on the Earth; under the influence of gravity its proper time is slowed (there is no motion). Now place the body at the center of the Earth. Note an important point: there is gravitational potential, but no gravitational force; calculations show that the GP has changed by a factor of 2, and the time dilation has changed accordingly (there is still no motion). Now let the body move above the surface of the Earth with the first cosmic (orbital) velocity. There is no weight, yet the calculations give a greater time dilation than for the body resting on Earth, i.e. on top of the Earth's GP is superimposed the GP formed by the motion. We see that the slowing of time is connected not with motion as such but with the process of creating a GP, i.e. space (the physical vacuum, PV) reacts to a change of motion by changing its own GP. Let us summarize.

1. According to Einstein's GR, gravity is a curvature of space. Since there is an action (gravity) and a reaction to this action (curvature), space (the PV) must have a definite structure with specific parameters, including mass. This sounds absurd, but the action and the reaction are explicit; it is not an abstraction.

2. Any accelerated motion is identical to a gravitational field; the reaction of the medium (space) to any motion of an object (inertia) is its contraction, even though there are no sources of gravity at all. The action of gravity and of inertia on space is identical and local in character.

3. Uniform motion must correspond to a homogeneous gravitational field.

4. Gravity, considered as a homogeneous gravitational field, cannot be detected (measured) under any circumstances; the absolute value of the GP is not measurable.

5. Gravity in its pure form cannot be detected (measured); the effect of its manifestation arises only in opposition to other types of forces. For example, weight on Earth arises in opposition to forces of electromagnetic origin.

6. Gravity, acting on a body in its pure form, removes all the inertial features of the object. Imagine a variable gravitational field: dig a tunnel through the center of the Earth and evacuate it; the field will cause the body to oscillate with an amplitude equal to the diameter of the Earth, with a complete absence of inertial reaction, i.e. the body will not feel these oscillations at all.

7. Within Einstein's theory, talk of the fundamental nature of the conservation laws can be conducted only for closed systems.
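The tunnel thought experiment in point 6 can be given a small numeric sketch. Assuming a uniform-density Earth and standard Newtonian values (none of these figures are from the text itself), the motion through the tunnel is simple harmonic and its period is easy to compute:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
R = 6.371e6     # radius of the Earth, m

# Inside a uniform-density sphere the gravitational acceleration is
# proportional to the distance from the center, so a body dropped into
# an evacuated tunnel oscillates harmonically with omega = sqrt(G*M/R^3).
omega = math.sqrt(G * M / R**3)
period_minutes = 2 * math.pi / omega / 60
print(f"oscillation period through the Earth = {period_minutes:.0f} minutes")
```

The period comes out near 84 minutes, the same as a grazing circular orbit, and the amplitude is indeed the full diameter of the Earth, as point 6 states.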

Why is such a special place given to gravity? One of the key points of inflation theory is the zero-energy condition: the gravitational potential energy of the Universe is exactly balanced by the total energy of all matter, -g*M^2_univ/R_univ + M_univ*c^2 = 0, which is essentially correct. Then we are simply obliged somehow to connect the total inertial energy of any body with cosmic gravity. The keys to this connection are not obvious, but they can be seen in the consequences of SR and GR in relation to Mach's principle.
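The zero-energy condition above reduces to g*M_univ/R_univ = c^2. A quick numeric check with the round values used later in this text (M_univ = 10^53 kg, R_univ = 10^26 m are the document's order-of-magnitude figures, not precision data):

```python
g = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_univ = 1e53       # mass of the Universe, kg (order of magnitude from the text)
R_univ = 1e26       # radius of the Universe, m (order of magnitude from the text)

gp = g * M_univ / R_univ    # gravitational potential of the Universe
print(f"g*M/R = {gp:.2e} m^2/s^2, c^2 = {c*c:.2e} m^2/s^2")
# The two agree to within an order of magnitude, as the balance condition requires.
```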

Mach, proceeding from the complete similarity of inertial and gravitational forces, argued that the nature of inertia lies in the influence of the entire mass of the Universe on a particular body. This means nothing less than: if we removed all the matter of the Universe except one body, that body would have no inertia. The assumption is very controversial and is not accepted by modern science, but on the other hand it would be very tempting to tie together the gravity of the infinitely large (the Universe) with the inertia of the infinitely small, for example an elementary particle (EP). How could the gravitation of the cosmos create the inertia of bodies? The difficulty is that, according to SR, the speed of propagation of gravitation cannot exceed the speed of light, yet the Universe is enormous and the effect, i.e. inertia, arises instantly; the quantitative side is not solved at all. We state that Einstein's theory, while recognizing Mach's principle, is unable to describe the mechanism of this influence. Let us note the following facts: 1. The GP of the Universe corresponding to balance is always and everywhere equal to c^2, an amazing coincidence with the formula for the total energy of any inertial body. 2. The balance of the dynamics of the Universe's development means the equality, always and everywhere, of the inertial forces with cosmic gravity. 3. The action of gravity and of inertia on space is identical. 4. Gravity in its pure form removes all the inertial features of an object. All four facts are different forms of expressing the very essence of Mach's principle: gravity does not exist without inertia, and vice versa.
Perhaps this is the key to unraveling the nature of inertia. If we find how Mach's principle is implemented, we will thereby uncover a single mechanism governing the cycle of the Universe; therefore, to understand the infinitely large (the Universe), one must deal with the infinitely small (the physical vacuum).

Physical Vacuum

The PV is the carrier of all types of interactions, and these processes are of an exchange nature (the quantization principle), but there are nuances. The following problems are connected with the PV. In QED it is completely unclear what EPs arise from and what they turn into, or where the indivisible electric charges go. In Big Bang (BB) theory, what exactly exploded? Space is assumed, but for a physical description of this phenomenon the void must at least be endowed with some structure with definite parameters. As a result the question arises: what is the real mechanism of the curvature of space under the influence of gravity? There is only one way, the materialization of space, and one of the keys to approaching the PV is the following. What is annihilation? We understand that the pair (particle-antiparticle) does not go anywhere and does not decay; it simply passes into a special bound state, i.e. into a PV structure with the lowest background energy, and we will try to model this bound structure physically. First of all, let us introduce the concept of a gravitational charge (GC). All modern theories work only with charges and exchange quanta, and we have no grounds to separate gravity from this fundamental principle. What, then, is the GC equal to? We return to the BB: in the Planck era all EPs had the Planck mass, so we shall assume that every EP has a GC equal to the Planck mass, and that this charge is indivisible, like the electric charge. But no such charges exist in nature today, so how can this be? In the Planck era the total energy of an EP, M_Planck*c^2, was equal to the gravitational energy g*M^2_Planck/L_Planck between them, and these are exactly the conditions for the formation of a classical gravitational collapse.
So we shall assume that the beginning of the BB was marked by the gravitational collapse of each trio of leptoquarks; this can be interpreted as the separation of gravity (all gravitons) from matter (the first stage in the theory of supergravity), and then into particle-antiparticle pairs (relic radiation). The GC must close up according to a linear law; this requirement follows from the principle of correspondence between quantum electrodynamics and the law of expansion of the Universe. Knowing the physical essence of Planck's constant, we derive the GC formula in a purely logical way: M_VC = M_Planck*L_Planck/L_ext. The PV is then a special medium of collapsed states; let us call them vacuum cells (VC). The VC mass corresponds to the formula M_VC = M_Planck*L_Planck/L_ext; these are precisely the ideal EPs responsible for maintaining the balance of the Universe, endowing the PV with mass. This is the background positive energy, i.e. we have materialized the PV. Then what is the mass of a particle? It is a residual phenomenon of the asymmetry of the gravitational collapse, i.e. an imbalance of the work of gravitational forces against the other types of interaction, and it too closes up according to a linear law. Then what about classical reality? The fact is that an EP cannot be considered in "bare" form; it is always surrounded by a cloud of VCs with an expanding spatial step, and since the VC has mass, we obtain a classical transition to Newton's theory of gravity (considered below). The introduction of the GC is a necessary measure, and we shall try to justify it.
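The Planck-era balance and the VC mass formula above can be checked numerically. A minimal sketch with standard Planck values, taking L_ext = 10^-4.5 m from the text:

```python
c   = 2.998e8       # speed of light, m/s
g   = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
m_p = 2.176e-8      # Planck mass, kg
l_p = 1.616e-35     # Planck length, m
L_ext = 10**-4.5    # current stretching step between vacuum cells (text's value), m

rest_energy = m_p * c**2          # total energy of a Planck-mass particle
grav_energy = g * m_p**2 / l_p    # gravitational energy at Planck separation
print(f"rest energy = {rest_energy:.3e} J")
print(f"grav energy = {grav_energy:.3e} J")   # equal: the collapse condition

m_vc = m_p * l_p / L_ext          # vacuum-cell mass, M_VC = M_Planck*L_Planck/L_ext
print(f"M_VC = {m_vc:.1e} kg")    # ~1e-38 kg, the value quoted later in the text
```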

1. Cosmology at the present stage has unexpectedly run into the problem of dark matter. Since the VCs have mass and, as established above, are collectively responsible for the balance of the Universe, the role of the PV as dark matter is quite visible.

2. All truly elementary particles are, according to QED, point objects, so infinities appear in calculations of their parameters. In QED this problem is solved with the help of an artificial mathematical trick, renormalization. Perhaps true elementarity does not exist (there is nothing left to collapse: the GC covers exactly three leptoquarks; why exactly three is a separate question). Then each EP should have three faces, for example electron - muon - tau-lepton, and likewise for the quarks (b, d, s); it is possible that an EP is a spatial quantum rotation in the direction of motion, i.e. an asymmetry in three directions of a composite object. A GC with a stable internal balance (discussed later) removes the infinities, i.e. the divergences have a limit, based on the balance of gravitational forces with the other types of interaction.

By endowing every EP with a collapsed state and materializing the PV, we thereby slightly open the way to understanding the mechanism of QED: there is now something to transform into and out of.

Dark matter and energy

Before the recombination era the Universe was a strictly balanced system: the energy of the relic radiation together with matter was strictly equal to the energy of the PV, i.e. one VC per relic quantum. If dark matter in the form presented by modern science, making up 23% of the total energy, were also introduced into this balanced system, we would get catastrophic consequences: the Universe should have collapsed even then. Something is wrong here. All the troubles began with the era of the separation of radiation from matter, i.e. the change of the scale factor from R(t) ~ t^1/2 to R(t) ~ t^2/3, and this leads to an ever-growing imbalance and, as a result, to an ever-growing manifestation of dark energy. We concluded that the materialized PV is globally responsible for the balance of the dynamics of the Universe's development, which corresponds to the stability of GP = c^2 along the entire arrow of time. All the matter of the Universe, being in balance, plays practically no role; the entire expansion function is taken over by the PV, and this radically changes the picture. The PV is a special, practically unexplored form of matter; to some extent it is a graviton plasma with GP = c^2. Then we have a real argument not to change the scale factor during the recombination period from R(t) ~ t^1/2 to R(t) ~ t^2/3, but to leave it unchanged. In this simple solution of the problem the main stumbling block is the relic radiation: the observed relic temperature is 3 K, whereas under the t^1/2 scenario it should be 7-8 times higher, and this is a powerful fact in favor of the generally accepted model of the Universe. The relic energy can be reduced to 3 K by assuming that the Universe continues to expand up to L_ext = 10^-3 m under the t^1/2 scenario, but then its age would have to be about 200 billion years, which is completely unacceptable.
Everything seemed finished; attempts to tame dark energy were a complete fiasco, and yet there is one clue. Matter, having separated from the relic radiation, follows the Friedmann model of a dusty expanding Universe, according to which space expands with scale factor R(t) ~ t^2/3, and here the conflict arises. The relic and matter, having become free, began to dictate the law of expansion of the Universe, i.e. the gravity of space. But the PV is a strictly balanced material medium with local oscillations of the VCs. Would it not be better to consider that the relic expands according to the laws of thermodynamics, while the Universe expands according to the law of balance? The question then arises: where does the energy of the cooling relic go, and into what does the relic expand if there is no "free space"? The relic has expanded to L_relic = 10^-3.3 m, space to L_ext = 10^-4.5 m. Let us approach this problem from the inside, i.e. locally. For any EP, balance locally means the concentration of VCs around the EP up to equality, both in GC and in energy. Very figuratively: the total energy of the chain of VCs, smeared into the background, is always equal to the energy of the EP, and the same holds for the GC. In the era of the separation of the relic, because of the equality of energies, one quantum corresponded to one VC, or the wavelength of the relic corresponded to the stretching step between VCs. What does this lead to? For the relic to expand, we need an asymmetry of VCs and radiation in the proportion of 10^3.3 VCs per quantum; the cooling of the relic would then just fill these vacancies. Returning to the BB, we have one blank spot, the stage in units of length: L_Planck, where the theory of supergravity operates, and L_Planck*sqrt(137), where the Grand Unified Theory (GUT) acts (the value L_Planck*sqrt(137) follows from the condition g*M^2_Planck*L^2_Planck/L^3_ext = e^2/L_ext).
At this stage gravity separates from the GUT interactions, global deceleration begins, and the VCs are formed; this is a non-quantum process. Further, the same process begins to interfere with the ever-increasing speed of the GUT processes, and on a segment of length L_Planck*sqrt(137) the velocities equalize, but this process leads to the formation of Higgs particles rather than VCs. The material was exhausted: all the VCs and all the primary matter were formed, and we obtained an acceptable asymmetry, which simultaneously solved the problem of dark matter and dark energy; everything fell into place. If the Universe develops according to the scenario with parameter t^1/2, while all free radiation (relic, luminosity, redshift of the spectrum) expands according to the laws of thermodynamics with parameter t^2/3, then we naturally get discrepancies, whose compensation requires the introduction of dark energy and dark matter. The growing distortions began to manifest themselves in the period of complete recombination, when the age of the Universe was approximately 0.5 billion years. On the other hand, we look at the Universe as if through a magnifying glass, i.e. the distortions grow in proportion to distance; summing these two components, we get a maximum distortion of 3-4 times at a distance of 7-8 billion light-years, which is consistent with observations.

Pioneer anomaly

Here it is appropriate to consider a version of the solution of the Pioneer anomaly. What is its essence? Having left the solar system, both spacecraft began to experience a deceleration equal to 10^-10 m/s²; the nature of this phenomenon is unknown, and interestingly, the same deceleration is given by the law of expansion of the Universe: c*H_Hubble = 10^8 * 10^-18 = 10^-10 m/s². What exactly happened? Two spacecraft simply left the solar system; physically this means that the gravitational influence of the entire solar system on them is practically zero, i.e. they are no longer a bound system. In the theory stated here it is shown that in an expanding (contracting) Universe the consequence of maintaining balance is the invariance of the stretching step (tie) between neighboring VCs, always and everywhere equal to L_Planck. If we take into account that L_Planck is the minimal fundamental length, then the stretching process at the micro level takes on a quantum character. Let us calculate this acceleration from the following considerations. According to QED, each VC must have an energy equal to E_VC = hc/L_ext = M_VC*c^2, so a VC at rest should oscillate with acceleration c^2/L_ext; over a cycle time equal to L_ext/c the step changes to L_ext - L_Planck, so the difference is da_VC = c^2/L_ext - c^2/(L_ext - L_Planck), of magnitude c^2*L_Planck/L^2_ext = 10^16 * 10^-35 / 10^-9 = 10^-10 m/s², and by the foregoing this value is discrete. Three coincidences is something global; it means nothing less than that the Universe at the present stage has begun to contract. Then why not assume that the Pioneers experience the effect of cosmological braking? We emphasize that such an effect applies only to unbound systems.
True, the value 10^-10 m/s² is very large, 10^30.5 times larger than the classical one; here the modern theory of gravity does not work. The value can be interpreted as follows: it is the local value for a particular VC, and this discreteness can change both upward and downward, L_ext -/+ L_Planck, so the generalized mean statistical acceleration can take arbitrarily small values, but most likely negative discreteness is becoming widespread in the modern era. It is possible that contraction first occurs in massive objects such as galaxies, while intergalactic space is not yet covered by this process; in any case this version does not contradict physics. But consideration of this version has an entirely different purpose; everything is aimed at dark energy. Dark energy began to manifest itself about 7-8 billion years ago and dominates at the present stage. Rough calculations show that, because of accelerated expansion, we see only 1/7-1/8 of the Universe instead of the theoretical 1/2; using the proportionality in distance and time, we obtain a cosmological acceleration at the distance of the Pioneers of the order of 10^-16 m/s², which is quite measurable. But then the Pioneers should, on the contrary, accelerate, which is not the case. The conclusion: dark energy does not exist.
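The coincidence claimed above, between the Pioneer deceleration, c*H, and c^2*L_Planck/L^2_ext, can be checked with round numbers. H = 70 km/s/Mpc is the standard value, the measured Pioneer deceleration of about 8.7*10^-10 m/s² comes from the published analyses, and L_ext is this text's figure:

```python
c = 2.998e8                  # speed of light, m/s
H = 70e3 / 3.086e22          # Hubble constant, s^-1 (70 km/s per Mpc)
l_p = 1.616e-35              # Planck length, m
L_ext = 10**-4.5             # stretching step from the text, m

a_hubble = c * H                 # cosmological acceleration scale
a_cell = c**2 * l_p / L_ext**2   # per-cell discreteness, c^2*L_Planck/L_ext^2
a_pioneer = 8.7e-10              # measured Pioneer deceleration, m/s^2

print(f"c*H             = {a_hubble:.1e} m/s^2")
print(f"c^2*l_P/L_ext^2 = {a_cell:.1e} m/s^2")
print(f"Pioneer anomaly = {a_pioneer:.1e} m/s^2")
# All three agree to within roughly an order of magnitude.
```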

Consider another interesting problem, the coincidence of large numbers. First let us write out the formulas:

M_univ/M_nucleon = 10^80; R_univ/L_nucleon = 10^41; hc/(g*M^2_nucleon) = 10^39.

The inaccuracies in these equalities are connected with the discrepancy between the total baryon mass and the balance mass, within a factor of 1/20, so there is reason to replace M_nucleon by the balance mass M_VC:

M_univ/M_VC = 10^53/10^-38 = 10^91; R_univ/L_ext = 10^26/10^-4.5 = 10^30.5;

hc/(g*M^2_VC) = 10^-26/(10^-11 * 10^-76) = 10^61; or (M_univ/M_VC)^2/3 = (R_univ/L_ext)^2 = hc/(g*M^2_VC). We shall prove these equalities from the consequences of the balance of the Universe:

(M_VC*n^3/M_VC)^2/3 = (L_ext*n/L_ext)^2 = g*M^2_Planck*n^2/(g*M^2_Planck)

n^2 = n^2 = n^2
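The large-number equalities above are statements about exponents: (M_univ/M_VC)^(2/3) = (R_univ/L_ext)^2 = hc/(g*M^2_VC). In base-10 logarithms this is 91*(2/3), 30.5*2 and 61, which is trivial to verify:

```python
# Exponents (powers of ten) quoted in the text for the three big ratios.
log_mass_ratio = 91.0    # log10(M_universe / M_VC)
log_size_ratio = 30.5    # log10(R_universe / L_ext)
log_hc_ratio   = 61.0    # log10(hc / (g * M_VC^2))

lhs = log_mass_ratio * 2 / 3     # exponent of (M_univ/M_VC)^(2/3)
mid = log_size_ratio * 2         # exponent of (R_univ/L_ext)^2
print(lhs, mid, log_hc_ratio)    # 60.67, 61.0, 61.0: equal within rounding
```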

To understand the physical meaning of these equalities, we consider them in pairs.

M_univ/M_VC = (R_univ/L_ext)^3; g*M_univ/R^2_univ = g*M_VC*R_univ/L^3_ext = g*M_Planck/L^2_ext; numerically 10^-11 * 10^53 / 10^52 = 10^-11 * 10^-8 / 10^-9, i.e. 10^-10 = 10^-10 m/s².

Consider the second pair: (M_univ/M_VC)^2/3 = g*M^2_Planck/(g*M^2_VC); M_univ = M^3_Planck/M^2_VC; g*M_univ/R^2_univ = g*M_Planck*L^2_ext/(R^2_univ*L^2_Planck);

g*M_univ/R^2_univ = c^2/R_univ; 10^-11 * 10^53 / 10^52 = 10^16/10^26; 10^-10 = 10^-10 m/s².

And again, from two independent equalities, we get the same notorious acceleration of the same order. What does this mean? These formulas show the state of the Universe in the modern era, and their equality says one thing: the Universe is at the point of transition from expansion to contraction; along the arrow of time toward the past and the future, the ratios in the equalities decrease and become equal in the Planck era. We see (gravitationally) exactly half of the Universe. The dynamics of the development of the Universe is given by the generalized formula c^2/R(t)_univ = g*M(t)_univ/R(t)^2_univ; it follows that R(t)_univ is the growth (coverage) of causally connected regions of space, and because c^2 = g*M(t)_univ/R(t)_univ the GP must keep a constant absolute value and is not measurable; the GP of the Earth or of the Sun at any point is then also equal to c^2. In principle the GP, as a scalar, is a convenient mathematical tool; by gravity we should understand a change of tension (acceleration), i.e. a change of gravity is gravity. Nature, in the example of the Pioneers, has unexpectedly presented us with a hint at a completely new type of quantization through a measure of length; in relation to gravity this is the graviton, but with one serious problem: such gravity should be 10^30.5 times greater than the classical one. Yet there is a plus in this problem: this quantity is completely unmeasurable. Why it is unmeasurable, given that we assume it to be a quantum quantity, is suggestive. Is there a connection here: inertia + gravity = zero, i.e. the zero version of the total energy in inflation theory, but at the micro level, separated in time through quantum uncertainty? In essence it is a quantum, unmeasurable "strong" gravity with the mathematical apparatus of QED. Logically, if this condition is satisfied for the entire Universe, it must also be satisfied locally. Let us begin this analysis with the classical principle of quantization.

One-dimensionality in three-dimensional space

Perhaps we do not fully understand the physical essence of the quantization principle, because there are no analogues; we have nothing to compare with in order to picture quantum phenomena. For example, how can one imagine the absorption of a volumetric three-dimensional electromagnetic quantum, absolutely completely, by a point object, say an electron? Why a quantum of any wavelength does not scatter has no physical explanation in QED and is accepted as a postulate. The question lies much deeper: since all energy and matter are quantized, then, using the terminology of quantum gravity, we are obliged to quantize both space and time. First of all we must clearly understand what the exchange process (interaction) means. An EP cannot emit (absorb) quanta all the time; in order to radiate, it must first absorb, and vice versa. It follows that an EP can exchange with only one object at a time: the interaction goes in one direction, with one object, for a certain period of time, and at that moment there is no interaction with other objects; the EP "does not see" them. Mathematically, all this in sum means that at that moment the dimensionality equals one. In principle this is a mathematical game, but physically, at the quantum level, it is of fundamental importance. Quantization brings us to the seemingly absurd idea of a one-dimensional interaction, like a string (superstring theory). In physics, scattering is completely absent only in one-dimensional processes; the whole process goes as if along a line. By attributing one-dimensionality to every quantum exchange process, we thereby mathematically justify the integrity of any quantum behavior. Any EP is then a point whose probability parameters are determined by QED, while a quantum is also a point but with a time parameter of action, i.e. a line.
And what is very important: the lines (quanta) in a closed three-dimensional space, obeying the volumetric step of the distribution, nowhere intersect, so the quanta do not collide and do not scatter. One-dimensionality is the basis for maintaining order in the chaos of the PV. For example, a massive body moves at a speed close to c, and we state the fact that, according to SR, all processes slow down with exactly the same synchronicity; if this were not so, we would have a mechanism for measuring absolute speed. It would seem absurd to move through this chaos (the PV) and observe such incredible synchronism. Does this not mean that the PV is absolute order? From the world of quantum chaos we obtain absolute order (television, cellular communications, etc.). Three-dimensional space is the only way to form the basic laws of nature, which are integrated from simpler one-dimensional exchange processes.

Here another problem arises, philosophically the most confusing, because it has no reasoned physical explanation. Its essence is as follows. What is a space closed gravitationally? It is when the gravitational exchange particles (gravitons) left a specific point in all directions in a certain time sequence and, in the same sequence, returned to the same point from all directions, i.e. space is finite. Einstein's SR and GR showed the interrelation of space, time and matter; this single whole (the Universe) does not exist without each of them. Gravity is engaged in the contraction of space and is cumulative; in the closed model of the Universe we then get the effect of a source of gravity acting on itself, i.e. the return to the source of the gravity emitted in all directions, having gone around the entire Universe. This is a physical absurdity; in a closed Universe it can be called a violation of cause-effect relationships. This problem already imposes a restriction on the speed of propagation of gravity, no more than the speed of light; therefore, when modeling a closed Universe, we are simply obliged to consider it. Note that in a closed cyclic Universe, out of infinitely many mathematical constructions there is a unique variant of the solution, neither overtaking nor lagging, namely the coincidence of cause with effect. It is then theoretically possible to model, taking SR into account, a Universe in which the beginning of the Universe (the BB) and its collapse, i.e. a full cycle, is equal in time to the passage of a graviton (quantum) at the speed of light from a specific point back to that same point. This is a physically justified, causally connected, closed infinity. Interestingly, it does not need to be modeled: it is one of the solutions of BB theory for the case of ideal balance of the dynamics of the Universe's development at the expansion stage.
We have already solved it: the law of expansion of the Universe should proceed according to the scenario with scale factor R(t) ~ t^1/2, i.e. all points began to recede from each other at the speed of light, and as the layers were covered, the expansion rate fell in proportion to this coverage, as c/n. If we model the reverse process, the contraction stage, over the same period of time, we obtain a complete closed cycle of the Universe. The BB divided the simultaneity of events, in the sense of SR, by the time of the complete cycle of the Universe. Such a model of the Universe gives an unexpected interpretation to the philosophical problem of cause and effect. An event happening at a given moment and the information about this event that has passed through the entire cycle of the Universe (the previous cycle) should, in theory, correspond to each other. And if we prove that absolute order with respect to gravitons is preserved always and everywhere, then this meeting of the current event with the event of the previous cycle applies to any point of the Universe at any moment of time. We are, as it were, synchronizing the cause from the previous cycle with the effect of a real event of the present time. At all times we must "see gravitationally" the BB of the next, n-th layer. For example, at the present moment the gravitons of the (10^30.5 - 1)-th BB layer have reached us, and at the moment of collapse the gravitons of the last layer will arrive, i.e. those that left the same point 2 * 13.7 billion years ago, and they will produce the BB (the next cycle of the Universe). The cause of the BB is then the collapse of the Universe of the previous cycle. The Universe repeats itself in each cycle, and moreover in exactly the same way. To some extent this is an anthropic principle: the control of the course of history is information from the previous cycle. It looks like super-fiction, but mathematically the problem is solvable.
In a closed Universe the fundamental conservation laws work absolutely; energy, matter and "information" do not go anywhere, and the course of history cannot be changed. It seems that nature has polished itself. These are the initial data for the construction of a cyclic Universe.

Building a cyclic universe

An analysis of the current state of the Universe and all the theoretical calculations say one thing: the Universe is on the verge between expansion and contraction, and the criterion of this is the GP. The intensity of the gravitational interaction is, classically, unusually small, but owing to the superposition of the potentials of all sources of gravity (masses) we obtain a total GP = c^2 throughout space and along the entire arrow of time. We consider gravity as the interaction of gravitons with the EPs of the PV, i.e. we must quantize a homogeneous gravitational field with GP = c^2. At the moment of the BB we have two starting parameters of the gravitational field, which can be considered parameters of the graviton: GP = c^2 = g*M_Planck/L_Planck, which remains stable along the entire arrow of time, and the acceleration c^2/L_Planck = g*M_Planck/L^2_Planck. The graviton, as an exchange particle, must obey all the laws of the development of the Universe, in particular the law of cosmological expansion; for example, a graviton that has come down to us from the n-th layer has an action n times smaller, so in the modern era the action of the graviton is c^2/L_ext = g*M_Planck/(L_Planck*L_ext) = 10^21 m/s²! According to the classics this formula has the form g*M_VC/L^2_ext = 10^-40 m/s², which does not fit at all with the GP of the Universe equal to c^2. We come to a striking result: the background unmeasured energy of the graviton is comparable in interaction to electromagnetic quanta. We are, as it were, turning the graviton from a faceless entity into an unmeasured monster. Now it becomes clear what force, in the sense of QED, causes the VC to oscillate with acceleration c^2/L_ext: the graviton. The question then arises whether gravity, as a flux of gravitons of different energies, is the primary source, i.e. the cause, of all quantum phenomena (virtuality, fluctuations).
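The two accelerations contrasted above, c^2/L_ext and the classical g*M_VC/L^2_ext, can be checked with the values this text carries over from earlier sections (L_ext and M_VC are the document's figures, not standard data):

```python
c = 2.998e8          # speed of light, m/s
g = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
L_ext = 10**-4.5     # stretching step, m (text's value)
m_vc = 1.1e-38       # vacuum-cell mass, kg (text's value)

a_graviton = c**2 / L_ext            # "graviton action" in the modern era
a_classic  = g * m_vc / L_ext**2     # classical acceleration at the same scale

print(f"c^2/L_ext      = {a_graviton:.1e} m/s^2")   # ~1e21, as in the text
print(f"g*M_VC/L_ext^2 = {a_classic:.1e} m/s^2")    # ~1e-40, as in the text
```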
And most importantly, we have a real tool for the physical description of the consequences of SR, GR and Mach's principle. How to reconcile this incredibly large value with actually observed gravity, and what to do about the classics, we shall see later when the correspondence principle is constructed; but first let us consider where the very mechanism of the cyclicity of the Universe is laid down.

Let us ask: what does the balance of the dynamics of the development of the Universe mean at the micro level? It is the equality of the gravitational parameters of the graviton with the inertial properties of the VC. Now let us combine these actions and counteractions into a single process. What we get is an oscillation at the level of the VC, but a special one, with unequal arms owing to the expansion. Let us calculate this difference; we have already performed this operation, but from other positions:

V_ext = c/n = 10^-23 m/s; t_ext = L_ext/c = 10^-12 s; then L_asym = t_ext * V_ext = 10^-35 m = L_ext/n = L_Planck, a constant, in full agreement with Hubble's law: V_ext/L_ext = 10^-23/10^-4.5 = 10^-18.5 s^-1 = H_Hubble.

F_asym = g*M_VC/L_ext = 10^-45 m²/s², which corresponds to V^2_ext.

Then the graviton, passing through each VC, changes the structure of space: an asymmetry arises in the oscillation arms, always and everywhere equal to L_Planck, which corresponds on the one hand to the dynamics of the expansion and on the other to the gravitational balance between the VCs. In other words, the graviton slows the expansion dynamics, compressing space one-dimensionally. One may say that the graviton sustains (strengthens) itself by reducing the rate of expansion of space: the kinetic energy of expansion passes into the potential energy of the graviton. Then what causes the phase of smooth transition? The process of ever-slowing balanced expansion would be infinite were it not for the mass of all the EPs. For the reversal to work, the graviton, strengthened by the masses, must over the expansion stage of 13.7 billion years change the oscillation difference from positive to negative, by just L_Planck = 10^-35 m. At an early stage the main contribution came from the relic radiation and neutrinos; closer to the modern era all the other EPs joined them, i.e. the EP masses play the role of a "soft damper" in the transition phase. Thus the mass of all the EPs is responsible for the balance of the dynamics of the Universe's development, and also for the time interval of the cycle. Over the entire cycle of the Universe each graviton, interacting 10^30.5 times, first expands the VC oscillation in its direction up to L_ext = 10^-4.5 m (expansion stage) and then compresses it to L_Planck = 10^-35 m (contraction stage). And since there are at least 10^30.5 of them in the ring, over the full cycle the expansion and contraction of the entire ring amount to 10^26 m and 10^-4.5 m respectively. It is interesting how the law of universal gravitation is built from these positions. According to the theory, any EP over the cycle time L_ext/c = 10^-12 s produces a contraction of space proportional to its mass; for the nucleon we get:

M_nucl/M_VC = 10^11.5; V_nucl = L_Planck·(M_nucl/M_VC)/t_cycle = 10^-35 · 10^11.5 / 10^-12 = 10^-11.5 m/s, then:

a_nucl = V^2_nucl/L_nucl = 10^-23/10^-15 = 10^-8 m/s^2, which corresponds to the classical value:

G·M_nucl/L^2_nucl = 10^-11 · 10^-27 / 10^-30 = 10^-8 m/s^2.
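The two estimates of the nucleon's acceleration can be verified with the text's own powers of ten. A minimal sketch (the values are the paper's round orders of magnitude, not CODATA figures):

```python
import math

# The text's order-of-magnitude inputs for the nucleon.
L_planck = 10**-35      # m
mass_ratio = 10**11.5   # M_nucl / M_VC
t_cycle = 10**-12       # s, per-interaction step quoted in the text
L_nucl = 10**-15        # m, nucleon size
G = 10**-11             # m^3 kg^-1 s^-2 (order of magnitude)
M_nucl = 10**-27        # kg (order of magnitude)

# Speed of the "space tightening" attributed to the nucleon.
V_nucl = L_planck * mass_ratio / t_cycle    # 10^-11.5 m/s

a_kinematic = V_nucl**2 / L_nucl            # V^2 / L
a_classical = G * M_nucl / L_nucl**2        # Newtonian estimate

assert math.isclose(V_nucl, 10**-11.5, rel_tol=1e-9)
assert math.isclose(a_kinematic, 1e-8, rel_tol=1e-9)
assert math.isclose(a_classical, 1e-8, rel_tol=1e-9)
```

Both routes give 10^-8 m/s^2, which is the coincidence the text points to.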

As for our planet: about 10^17 nucleons fit across the Earth's diameter, so their combined effect creates an acceleration equal to:

a_Earth1 = a_nucl·N_nucl = 10^-8 · 10^17 = 10^9 m/s^2. This acceleration corresponds to a "neutron Earth" (with distances between nucleons equal to L_nucl = 10^-15 m). Now push the nucleons apart to the spacing corresponding to the Earth's average density, L_avg = 10^-11 m, i.e. by four orders of magnitude. The strength of the gravitons does not change; only the intensity changes, falling in proportion to the square of the separation, then:

a_Earth2 = a_Earth1/N^2_sep = 10^9 / (10^4)^2 = 10^1 m/s^2, which coincides with the classical value.
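The Earth estimate above chains together as follows. A minimal sketch of the arithmetic, again using only the text's round orders of magnitude:

```python
import math

# The text's scaling argument: 10^17 nucleons across the Earth's diameter
# at nuclear spacing give a "neutron Earth"; dilating the spacing from
# 10^-15 m to the average-density spacing 10^-11 m weakens the intensity
# by the square of the dilation factor.
a_nucl = 1e-8          # m/s^2, per-nucleon acceleration from above
N_nucl = 1e17          # nucleons across the Earth's diameter
L_nucl = 1e-15         # m, nucleon spacing in the "neutron Earth"
L_avg = 1e-11          # m, spacing at the Earth's average density

a_neutron_earth = a_nucl * N_nucl          # 10^9 m/s^2
dilation = L_avg / L_nucl                  # 10^4, four orders of magnitude
a_earth = a_neutron_earth / dilation**2    # 10 m/s^2

assert math.isclose(a_neutron_earth, 1e9, rel_tol=1e-9)
assert math.isclose(a_earth, 10.0, rel_tol=1e-9)
```

The result, 10 m/s^2, is indeed the order of the measured surface gravity g ≈ 9.8 m/s^2.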


Only one constant, L_Planck, is involved in this construction; no field forces are applied, and only one-dimensional operations have been performed. One thing is already clear: the gravitational force (the force of a single graviton) does not depend on distance and is cumulative; only the intensity changes. Note at once that the meaning of gravity and gravitation changes fundamentally here: although they share a single origin, they are still different things. It is not possible to measure the parameters of individual gravitons (only gravitation in total); in effect this is a theory of unmeasured quantities. What is the fundamental difference between the classical picture and the proposed version of gravity? The classical picture understands gravity as the action (superposition) of all sources of gravity simultaneously on each point of space. According to this theory, it is instead a kind of scanning of each point of space by gravitons, where the amplified gravitons correspond to the masses of the sources and the intensity corresponds to the distances to them. In sum this is the same thing, but the physical meaning is completely different. It is precisely this mechanism of interaction of the graviton with the VCs and EPs that explains the meaning of the geometrization of gravitation: gravity is the integration, over the whole volume, of all the one-dimensional tightenings of space by gravitons. Implementing the proposed cyclicity of the Universe requires a new approach to the physics of inertia, as an absolute equality of the inertial properties of all VCs and EPs with gravity, both locally and globally; otherwise the entire system loses stability. The stability of such behavior of the physical vacuum (PV) must actually be proved, and such a mechanism was found: symmetry in gravity and the quantum principle of motion.

Symmetry in gravity

Having materialized space, it becomes clear what, specifically, exploded, but it remains a mystery what caused the Big Bang (BB) and what produced and then maintained the balance. We would have to introduce a new, ephemeral type of force with incredible parameters; having realized the BB, this force would then have to balance strictly against the gravitation of space, both locally and on the scale of the entire Universe, i.e. somehow adjust itself to the dynamics of expansion. This is where the mechanism behind Mach's principle helps. The action of gravity and of inertia on space is identical; this very equality suggests that the force of inertia is an integral part of gravity, which is what it looks like. Action and reaction, gravity and inertia, and in total the equality of gravitational and inertial mass: gravity and inertia are integral components of the gravitational interaction, and therefore gravity is symmetric. Here are four more arguments in favor of symmetry. 1. Gravity in this form clearly satisfies the zero-total-energy condition, both locally and globally; roughly speaking, without the graviton as the carrier of inertia and gravity, the VCs and EPs are left with nothing. 2. Gravity is not measurable because it is symmetric; the primary source of Planck's constant, as a carrier of inertia, should then be gravity. 3. If we take the picture of the expansion of the Universe and scroll it backwards to the BB, we obtain the purest mechanism for the formation of the compression phase of the Universe and its collapse, i.e. the BB and the collapse are symmetric. Then we can answer the question without introducing any new force: what locally implemented the BB — the graviton; what implemented the local collapse — the graviton. There are 10^91.5 such regions and the same number of gravitons; together they make up the whole Universe. 4. The VC is a stable structure and at the same time the source of the birth of every form of EP, i.e.
the gravitational collapse is somehow overcome, which contradicts the physical essence of collapse itself. This is where symmetry in gravity helps, allowing the collapse to be divided into two parts. The scientific literature argues that only three-dimensional space can really exist (meaning open dimensions); how many closed dimensions there are is a matter of theoretical variation. Three generations of fundamental fermions (three quark + lepton pairs), three dimensions of space: is there a connection here? The geometry of graviton motion can be represented as a ring of chains of VCs with the dimensions of the Universe, in which at least 10^30.5 gravitons move. In the Universe as a whole there is a strict number of gravitational rings, at least n^2 = 10^61; these rings are evenly distributed in the volume of the Universe with a certain volume step equal to 10^-4.5 m. The rings must not intersect; this requirement is necessary to preserve the order of the PV structure together with the gravitons. The simplest figure (mathematically) in which such rings do not intersect is a three-dimensional ball. In four-dimensional space there would have to be n^3 such rings; if we assume that three types of fundamental fermions must correspond to three dimensions (recall that each EP has three faces), then the VC must be a three-dimensional object. A fourth dimension would require a fourth pair of fermions, but since the Universe is inoperable in that situation, there can be no fourth pair. It remains to model the VC for three-dimensional space as the main building block in the construction of the PV. The VC, consisting of two bricks with three elements each, then represents a structure of the type:
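The ring counting above can be cross-checked against the graviton total quoted earlier (10^91.5 gravitons in the Universe). A minimal sketch using the text's numbers, where the rings are taken to tile the Universe's cross-section with the stated step:

```python
import math

# The text's inputs: Universe size ~10^26 m, inter-ring step 10^-4.5 m,
# and at least 10^30.5 gravitons per ring.
R_universe = 10**26        # m
step = 10**-4.5            # m, spacing between rings
gravitons_per_ring = 10**30.5

# Non-intersecting rings tile a 2-D cross-section of the volume.
n_rings = (R_universe / step)**2           # (10^30.5)^2 = 10^61
total_gravitons = n_rings * gravitons_per_ring

assert math.isclose(n_rings, 10**61, rel_tol=1e-6)
assert math.isclose(total_gravitons, 10**91.5, rel_tol=1e-6)
```

The product reproduces the 10^91.5 figure given in the symmetry argument, so the two counts in the text are at least mutually consistent.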

Let's analyze this structure in more detail.

We assumed earlier that the VC is a collapsed (gravitationally closed) state governed by the simple law M_VC = M_Planck·L_Planck/L_VC. Now the question of the stability of this state arises. There really are three directions; in each direction there are elements of the VC (leptoquarks) with gravitational charges totalling M_Planck and a total electric charge equal to e, six of them in all. The balance of this system leads to the following theoretical conclusions: there should be two types of gravitational charge, "+" and "−", but unlike electric charges, like ones attract and unlike ones repel. For example: all EPs are endowed with gravitational charge "+" and, accordingly, all anti-EPs with "−". Three leptoquarks are held in collapse by their like gravitational charges, and the compensating balance is formed by the electromagnetic repulsion of like electric charges; it occurs at L = L_Planck/√137 (according to Grand Unified Theory, at these distances the electroweak and strong interactions are unified). The other three anti-leptoquarks are in balance for the same reason. Then, taking into account the closure of the gravitational charge and the symmetry in gravity, the mechanism of annihilation and creation of EPs becomes clear. Symmetry in gravity clearly explains the meaning of inertia and provides the return mechanism in the oscillation. The graviton is the carrier of both inertia and gravity and physically substantiates the entire cyclic process of the Universe. We may no longer even need an inflationary stage in the development of the Universe. The point is that when the Universe collapses, the velocities between neighboring layers approach the speed of light, which leads, as it were, to the merging of the graviton with the VC and, accordingly, to a weakening of the gravitational forces between the VCs. Gravity, having generated the collapse, buried itself; the BB scenario began, and this is very similar to the phase transition of a false vacuum into a true one.
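The equilibrium spacing quoted in this paragraph is easy to evaluate. A minimal sketch (137 here stands in for the inverse fine-structure constant 1/α ≈ 137; the interpretation of the spacing is the text's, not standard physics):

```python
import math

# L = L_Planck / sqrt(137), the balance distance quoted in the text.
L_planck = 1e-35                   # m
L_balance = L_planck / math.sqrt(137)

# sqrt(137) ~ 11.7, so the spacing is ~8.5e-37 m, just below L_Planck.
assert 8e-37 < L_balance < 9e-37
```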
In addition, the inhomogeneities necessary for the formation of galaxies are automatically laid down by the collapsing Universe itself. The solution of another problem is also greatly simplified here. In theories unifying all interactions and matter, in particular supergravity, eight new particles such as the gravitino, photino, and gluino are introduced to compensate, with negative infinities, the positive infinities that arise during renormalization from graviton loops. At the head of this eight stands the graviton with spin 2. Symmetry in gravity automatically creates the compensation mechanism, and the services of these exotic particles can be dispensed with.

Quantum principle of motion

The PV is the foundation on which all of QED is built, and at the same time it is unacceptable for the construction of SRT. How can these mutually contradictory positions on the PV be reconciled? The effects of SRT and GR, quantum effects, and the problem of the aether force us to rethink the concepts of space, time, and the very essence of motion. The fact is that the aether is an undeniable reality (the supporters of the aether are right), yet all experiments within the framework of SRT indicate the opposite: there is no aether (the opponents are right). The problem to be solved jointly is the principle of motion in a medium and without a medium. What if we remove the source of the dispute: the aether is not the cause but a consequence of the very essence of motion, thereby satisfying both the supporters and the opponents of the aether. Let us assume that there is no motion as such in the PV; there is only a transfer of state. Let us use one of the properties of the PV, virtuality. Suppose the EP is a vacancy in the PV, i.e. an incomplete VC always tending to be filled with PV elements (virtual annihilation), while creating a similar vacancy at another point; an effect of motion is created, somewhat analogous to holes in semiconductors. In fact nothing new is being invented here: this principle of motion is not explicit, but it is visible in QED. The motion of an EP is identical to its being in a uniform gravitational field, which is equivalent to an exchange of gravitons between the EP and the VCs, with energy corresponding to the achieved speed. Dimension and time then arise only during exchange processes, whether real or virtual: where there is interaction in a given direction, there is also a mechanism for measuring dimension (orientation) and time. These requirements follow from the correspondence principle between SRT and this concept of the physical essence of time.
Moving at the speed of light, the EP "keeps contact" with only one graviton, the one it moves with; but since the gravitons do not intersect, all exchange processes, and with them time, are suspended in accordance with SRT, and the EP passes into the absolute order of the PV. The EP becomes a dead object whose state always corresponds to its last interaction; this fact manifests itself indirectly in Aspect's experiment. Two EPs that were in a bound state and then fly apart in different directions at speed c retain the memory of the bound state until they are "legalized" by measurement, i.e. the measurements made on the EPs do not depend on the length of their flight, and the correlation corresponding to the start of the flight is carried over to the moment of measurement. The graviton is the carrier of gravity and inertia; combining this innovation with the quantum principle of motion, we can state with more justification: the true cause of all causeless events is the graviton, and this is a purely quantum effect.

Gravity Laser

The material presented above may give rise to various judgments; without an experiment (confirmation), any theory can be generated, and an idea for such an experiment was found. One might call it a gravitational laser. Take an extra-long, ultra-thin massive rod and place an EP with special measuring equipment along its direction. This creates a local region in which the amplified gravitons emerging from the rod act on the EP, and the equipment records the EP's fluctuations. Now excite a mechanical wave in the rod, i.e. modulate the local region of amplified gravitons in time with the wave in the rod, which is registered by the equipment. If the theory is true, we have for the first time a real mechanism for measuring the speed of propagation of gravity.
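Whatever the underlying physics, the data analysis for such a setup is conventional: the propagation speed is recovered from the time lag between the wave excited in the rod and the fluctuation recorded at the detector. A toy sketch of that lag estimation (the rod, detector distance, and signal here are all hypothetical, and the "true" speed is set to c purely for illustration):

```python
import math

# Hypothetical setup: detector at distance d records a delayed copy of the
# wave packet excited in the rod; the lag maximizing the cross-correlation
# gives the propagation delay, hence the speed.
dt = 1e-6                # s, sampling step
d = 300.0                # m, rod-to-detector distance (assumed)
true_speed = 3e8         # m/s, assumed propagation speed (c)
delay_samples = round(d / true_speed / dt)   # = 1 sample here

n = 2000
# A Gaussian-windowed 5 kHz wave packet in the rod.
rod = [math.sin(2 * math.pi * 5e3 * i * dt) * math.exp(-((i - 400) / 120)**2)
       for i in range(n)]
# Detector sees the same packet delayed by delay_samples.
detector = [rod[i - delay_samples] if i >= delay_samples else 0.0
            for i in range(n)]

def xcorr(lag):
    """Cross-correlation of detector against rod at a given lag."""
    return sum(detector[i] * rod[i - lag] for i in range(lag, n))

best_lag = max(range(0, 50), key=xcorr)
measured_speed = d / (best_lag * dt)

assert best_lag == delay_samples
assert math.isclose(measured_speed, true_speed, rel_tol=1e-9)
```

With real data, the same cross-correlation peak search would be applied to the noisy detector record rather than an exact delayed copy.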

Literature

1. P. Davies. Superforce. Moscow: Mir, 1989.

2. V.L. Yanchilin. The Secrets of Gravity. Moscow: New Center, 2004.

3. A.D. Chernin. Cosmology: The Big Bang. Vek-2, 2005.

4. "The Strange Acceleration of the Pioneers." Earth and Universe, 2002, No. 5.

5. V.A. Rubakov. Lecture: Dark Matter and Dark Energy in the Universe.

I invite cooperation on a joint project within the framework of physical realism.