Ray tracing in real time. DirectX Raytracing: real-time ray tracing

Not too long ago, 4A Games, the creator of the amazingly realistic Metro games, released a video showcasing Nvidia's RTX technology using Metro: Exodus as an example. This graphical innovation is a big and confident step forward for ray tracing. But what does it all mean?

Behind the veil of secrecy

Let's start from the very beginning. Ray-traced rendering is one of the fundamental visualization techniques used in film and in many kinds of design, from industrial to architectural. Much of what you see on websites as "photographs" of products is in fact a 3D render.

The essence of the technology is that the computer models the physical behavior of light by calculating the trajectories of what are, conventionally, individual photons: if a ray hits an object, it is either refracted inside it or reflected from it at one angle or another. The result is a traced path for each ray of light, hence the name "ray tracing".

This computer generated image by Enrico Cerica using OctaneRender shows rays, shadows and reflections on a complex floor surface.

The only problem is that a great many rays are needed, and for each of them every collision with an obstacle has to be recalculated. Mathematically it is a simple problem: first you calculate where the ray hits an object, that is, find the collision, and then, based on the surface's specified properties, you carry out further transformations.


Simplified ray tracing scheme

For example, take a matte surface with some given, conditional roughness, but not an absolutely rough one: a ray bounces off it, with some probability, at an angle that differs from the angle of incidence. If the surface were perfectly smooth, the angle of incidence would equal the angle of reflection; if its properties describe a matte finish, this is modeled mathematically as a deviation of the reflection angle from the incidence angle.

In real life this is indeed the case: surfaces are almost never perfectly smooth. So when light hits a particular point, it is reflected according to the local orientation of the surface, which may be tilted relative to the plane that looks flat to us, and a neighboring ray is reflected in a completely different direction. It therefore becomes clear that there is no point in making object models excessively polygonal; the irregularities are instead specified as surface properties. The result is similar to the actual scattering of light from matte surfaces.

Right now games tend to render surfaces that look as if they were coated with something like glaze. Everyone has surely seen strange walls and floors in games that seem smeared with some kind of slime. With ray tracing you don't have to do this: surfaces can scatter light quite naturally. This is shown perfectly in the demo, which contains a range of areas from fully mirrored to completely matte.

It is especially noticeable that on matte areas the reflection strongly depends on how close the object is to the surface: the farther the object is from the surface, the more blurred its reflection becomes. This is an important property that we hardly even notice in life, although it is always there.

But the most important thing is the shadows. There is nothing worse than shadows in any game.

These are usually just projections of objects with sharp, unnatural edges. There are also better shadows, by game standards: soft ones with a transition zone, that is, an umbra and a penumbra.


To create soft shadows or diffuse reflections (like the ones you see in brushed metal, for example), more advanced ray tracing techniques are needed.

The only problem is that it doesn't work that way in real life. If the light source is not a point and not infinitely distant, then the width of the penumbra depends on the ratio between the distance from the light source to the object and the distance from the object to its shadow. That is, the penumbra's border can be wider or narrower in different places, especially with large light sources.

For example, the light from a window in cloudy weather gives such soft shadows that if you place a much smaller object two meters from the window, you can see a distinct shadow and penumbra at its base, while its upper part may cast no distinct shadow at all and the penumbra will have no clear boundaries. In games with traditional rasterization, this does not occur.

Ray tracing itself is not difficult, but there are a lot of rays, and for now it is not possible to do all of it in real time. The fact is that in real life rays scatter in all kinds of directions. Ideally, each incident ray would split into an infinite number of rays whose total brightness depended on the properties of the reflecting object and the initial brightness of the incident ray.

To reduce the load you can limit the number of rays and the number of bounces, but these restrictions leave the picture with patchy scraps of shadow and unnaturally bright spots around light sources. In other words, insufficient data produces noise. The real difficulty is that you cannot compute the scene once and then update only what changes from frame to frame, since any moving object changes the paths of all the rays. Every movement of the camera or of an object requires "rebuilding" the entire scene, which is why films are produced by render farms: numerous servers that render graphics continuously for months. In real time, in games, this is still impossible.

So the question arises: how did Nvidia and its partners get around this, and what did they sacrifice to achieve real-time tracing?

Two halves of one whole

If you carefully watch how the picture is rendered step by step, you can see that somewhere after the fifth iteration the shadows and the lighting barely change any more; it is already clear where everything will be and how it will look in the final image. This, in principle, is what real-time software is needed for: so that you can move the light sources around, see where the highlights will land, and only then start the final render. The remaining question is how the final frame can be recovered from the muddy picture. In fact, the computer first renders the initial noisy image, then analyzes it and, based on that data, draws another one, the final image. As practice has shown, this approach is cheaper in terms of resources.

In fact, the real leap forward in real-time tracing is the creation of algorithms that can take a noisy image, evaluate it, and reconstruct a clean one. That is the key innovation; everything else was in wide use before. Many renderers already offer GPU rendering and OpenGL plugins supported by any OpenGL-compatible graphics card.

Today it is claimed that this noise-reduction technique only works on the Tensor cores of Nvidia's upcoming cards. In fact, the technology was only shown at scale now, but it apparently appeared last year: back in October, at one of its events, Unity demonstrated this very denoising technique for real-time tracing.

The trick is that not all objects participate in ray tracing as they should. Here we touch on global illumination, the most resource-intensive tracing mechanism. In real life, any object that light falls on reflects part of that light. For example, if you point a light source at a green wall, the surrounding lighting takes on a green tint, because the wall reflects the green component of the light.


Reflection of light rays from a surface

There was no such effect in the demo. Instead of light being reflected and the frame's lighting changing depending on the light source, the space was simply filled with bright imagery that makes the lighting look dynamic. In fact, the developers did not apply ray tracing to details such as haze and flames.

The fact is that rays of light must be calculated regardless of whether they hit objects or not, so adding a large number of lights makes the tracing calculations considerably harder. In addition, none of the tech demos contain objects imitating transparent materials.

When will we see the benefit?

Based on the criteria described above (the noisiness of the picture and the fact that not all objects are traced), we can say that graphics in games cannot yet look as detailed and realistic as in films.

However, there is certainly progress. Firstly, reflections are now easier to do: there is no need to create shadow and light maps, all of that is handled by tracing. Secondly, at least some semblance of diffuse reflections has appeared. Thirdly, the lighting and shadows of objects have improved significantly. Taken together, it can be argued that this is a key technology for games, one that will not give up its position over the next ten years.

Text: Alexey Kharitonov, QA, Bytex

The reverse ray tracing method

The reverse ray tracing method significantly reduces the number of light rays that have to be considered. The method was developed in the 1980s; the works of Whitted and Kay are considered fundamental. According to this method, rays are traced not from the light sources but in the opposite direction, from the observation point. In this way, only the rays that contribute to forming the image are taken into account.

Consider how a bitmap image of some three-dimensional scene can be obtained using reverse tracing. Suppose the projection plane is divided into many squares, the pixels. Choose a central projection with the center of projection at some distance from the projection plane. Draw a straight line from the center of projection through the middle of a square (pixel) of the projection plane. This is the primary reverse tracing ray. If this ray hits one or more scene objects, select the nearest intersection point. To determine the color of the image pixel, the properties of the object must be taken into account, as well as what light radiation falls on the corresponding point of the object.

If the object is specular (at least partially), then we build a secondary ray, the incidence ray, treating the previously traced primary ray as the reflection ray. Above, we considered specular reflection and obtained formulas for the reflected ray vector given the normal and the incident ray vectors. Here, however, we know the reflected ray vector; how do we find the incident one? We can use the same specular reflection formula, simply treating the required incident ray as the reflection of the known ray: reflection works the same way in the opposite direction.

For an ideal mirror, it is then sufficient to trace only the next intersection point of the secondary beam with some object. A non-ideal mirror greatly complicates tracing - it is necessary to trace not one, but many incident rays, and take into account the contribution of radiation from other objects visible from a given point.

If the object is transparent, then it is necessary to build a new ray, one that, when refracted, would give the previous traced ray. Here you can also use reversibility, which is also valid for refraction.

If an object has the properties of diffuse reflection and refraction, then, in the general case, as for a non-ideal mirror, it is necessary to trace the rays coming from all available objects. For diffuse reflection, the intensity of the reflected light is known to be proportional to the cosine of the angle between the ray vector from the light source and the normal. Here, the light source can be any object visible from a given point, capable of transmitting light energy.

When it turns out that the current reverse tracing ray does not intersect any object but goes off into free space, the tracing of that ray ends.

Fig. 14.1. An example of reverse ray tracing.

Ray tracing in the form we have considered here, although it reduces the search, does not eliminate the infinite number of rays to analyze. True, the method yields exactly one primary reverse tracing ray per image point, but there can already be an infinite number of secondary reflection rays. For example, if an object can reflect light from any other object, and these other objects are reasonably large, which points of the emitting objects should be used to construct the corresponding rays, say for diffuse reflection? Obviously, all of them.

How ray tracing works:

1. An imaginary ray is emitted from the observer's eye through some screen pixel and its path is tracked until it intersects an object.

2. From the first point of intersection of the ray with the sphere, a reflected ray is emitted. Let the surface be opaque; then we do not cast refracted rays. We cast a shadow ray from the intersection point to the light source. Since this ray does not cross any other opaque surface, the light source directly contributes to the illumination intensity at this point.

3. Let the reflected beam intersect another object, this time a translucent sphere that reflects and transmits light. Reflected and refracted rays are emitted along with a shadow ray going towards the light source. The transmitted beam changes its direction before and after entering the sphere in accordance with the effect of refraction.

4. Let the point at which this ray crosses the sphere not be directly illuminated by the source, because the path of the shadow ray is blocked by an opaque surface. If the scene contained multiple light sources, a shadow ray would have to be cast toward each of them.

5. The influence of all rays generated explicitly or implicitly using the initial ray is summed up and the result determines the RGB value of this point.
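A minimal, self-contained C++ sketch of these five steps is given below. It is not the code of any engine or demo mentioned in this article: the scene is reduced to a single sphere and a single point light, refraction is omitted, and all names (Vec3, Sphere, traceRay and so on) are illustrative assumptions.

    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };
    Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3 operator*(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
    double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec3 normalize(Vec3 a) { double l = std::sqrt(dot(a, a)); return a * (1.0 / l); }

    struct Sphere { Vec3 center; double radius; double diffuse; double reflect; };

    // Step 1-2: nearest intersection of the ray (origin, dir) with the sphere.
    // Returns false on a miss; otherwise t is the distance to the hit point.
    bool intersect(const Sphere& s, Vec3 origin, Vec3 dir, double& t) {
        Vec3 oc = origin - s.center;
        double b = 2.0 * dot(oc, dir);
        double c = dot(oc, oc) - s.radius * s.radius;
        double disc = b * b - 4.0 * c;          // dir is assumed normalized, so a = 1
        if (disc < 0.0) return false;
        double t0 = (-b - std::sqrt(disc)) / 2.0;
        double t1 = (-b + std::sqrt(disc)) / 2.0;
        t = (t0 > 1e-6) ? t0 : t1;              // nearest hit in front of the origin
        return t > 1e-6;
    }

    // Steps 2-5: shade the hit point with a shadow ray and one level of reflection.
    double traceRay(const Sphere& s, Vec3 lightPos, Vec3 origin, Vec3 dir, int depth) {
        double t;
        if (!intersect(s, origin, dir, t)) return 0.05;     // background intensity
        Vec3 p = origin + dir * t;
        Vec3 n = normalize(p - s.center);
        // Shadow test: with a single sphere the only possible occluder is the sphere
        // itself, so the point is lit whenever the light lies on the normal's side.
        Vec3 toLight = normalize(lightPos - p);
        double local = s.diffuse * std::fmax(0.0, dot(n, toLight));
        // Spawn a reflected ray and recurse (refraction is omitted in this sketch);
        // the small offset avoids re-hitting the surface we just left.
        double reflected = 0.0;
        if (depth > 0 && s.reflect > 0.0) {
            Vec3 r = n * (2.0 * dot(n, dir * -1.0)) - dir * -1.0;   // mirror direction
            reflected = traceRay(s, lightPos, p + r * 1e-4, r, depth - 1);
        }
        // Step 5: sum the contributions of all generated rays.
        return local + s.reflect * reflected;
    }

    int main() {
        Sphere s{{0, 0, 5}, 1.0, 0.8, 0.2};
        Vec3 eye{0, 0, 0}, light{5, 5, 0};
        // Step 1: a primary ray through (roughly) the middle of the screen.
        Vec3 dir = normalize(Vec3{0, 0, 1});
        std::printf("intensity = %f\n", traceRay(s, light, eye, dir, 2));
        return 0;
    }

Even in this toy form the structure is the same as in the steps above: find the nearest hit, cast a shadow ray, spawn a reflected ray, and sum the contributions.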

At Gamescom 2018, Nvidia announced a series of Nvidia GeForce RTX graphics cards that will support Nvidia RTX real-time ray tracing technology. Our editors figured out how this technology will work and why it is needed.

What is Nvidia RTX?

Nvidia RTX is a platform that gives developers a number of useful tools opening access to a new level of computer graphics. Nvidia RTX is only available on the new generation of Nvidia GeForce RTX graphics cards based on the Turing architecture. The main feature of the platform is the ability to perform real-time ray tracing.

What is ray tracing?

Ray tracing is a technique that simulates the behavior of light, creating believable lighting. Today's games do not trace rays in real time, which is why the picture, although often beautiful, is still not realistic enough: the technologies currently in use would require enormous resources for ray tracing.

This is addressed by the new Nvidia GeForce RTX graphics card series, which has enough power to calculate the paths of the rays.

How does it work?

RTX casts rays of light from the player's (camera's) point of view into the surrounding space and in this way determines what color each pixel should be. When the rays hit something, they can:

  • Reflect - this produces a reflection on the surface;
  • Stop - this creates a shadow on the side of the object that the light did not reach;
  • Refract - this changes the direction of the ray or affects its color.
These capabilities allow more believable lighting and more realistic graphics. The process is very resource-intensive and has long been used in creating film effects. The only difference is that when rendering a movie frame, the authors have access to huge resources and, effectively, unlimited time. In a game, the device has fractions of a second to form the picture, and usually a single graphics card is used, not several as in film production.

This has prompted Nvidia to introduce additional cores in GeForce RTX graphics cards that will take on most of the workload, improving performance. They are also equipped with artificial intelligence, whose task is to calculate possible errors during the tracing process, which will help to avoid them in advance. This, as the developers say, will also increase the speed of work.

And how does ray tracing affect quality?

During the presentation of video cards, Nvidia showed a number of examples of how ray tracing works: in particular, it became known that some upcoming games, including Shadow of the Tomb Raider and Battlefield 5, will work on the RTX platform. This function, however, will be optional in the game, since one of the new video cards is needed for tracing. Trailers shown by the company during the presentation can be viewed below:

Shadow of the Tomb Raider, which will be released on September 14 this year:

Battlefield 5, which will be released on October 19:

Metro Exodus, which is scheduled for release on February 19, 2019:

Control, which has no release date yet:

Along with that, Nvidia announced which other games will get ray tracing.

How to enable RTX?

Due to the technical features of this technology, only video cards with Turing architecture will support ray tracing - the devices currently available cannot cope with the amount of work that tracing requires. At the moment, the only video cards with this architecture are the Nvidia GeForce RTX series, models of which are available for pre-order from 48,000 to 96,000 rubles.

Are there analogues for AMD?

AMD has its own variant of real-time ray tracing technology, which is part of its Radeon ProRender engine. The company announced the development back at GDC 2018, which took place in March. The main difference between AMD's approach and Nvidia's is that AMD gives access not only to tracing but also to rasterization, the technology used in all games today. This lets you both use tracing where it gives better lighting and save resources where tracing would put unnecessary load on the video card.

The technology that will run on the Vulkan API is still in development.

As Nvidia stated during its presentation, the development of RTX technology will significantly improve the graphics in games by expanding the set of tools available to developers. Nevertheless, it is too early to talk about a general graphics revolution: not all games will support the technology, and the cost of video cards that support it is quite high. Still, the introduction of the new video cards shows that graphics are progressing, and over time that progress will only grow.

INTRODUCTION

There are several methods for generating realistic images, such as forward ray tracing (photon tracing) and back ray tracing.

Ray tracing methods are considered to be the most powerful and versatile methods for creating realistic images today. There are many examples of the implementation of tracing algorithms for high-quality display of the most complex three-dimensional scenes. It can be noted that the universality of tracing methods is largely due to the fact that they are based on simple and clear concepts that reflect our experience of perceiving the world around us.

The objects around us have the following properties in relation to light:

radiate;

reflect and absorb;

transmit light through themselves.

Each of these properties can be described by some set of characteristics.

Radiation can be characterized by intensity and spectrum.

The property of reflection (absorption) can be described by the characteristics of diffuse scattering and specular reflection. Transparency can be described by intensity attenuation and refraction.

Rays of light emanate from points on the surface (volume) of radiating objects. Such rays can be called primary - they illuminate everything else. Countless primary rays emanate from radiation sources in various directions. Some rays go into free space, and some fall on other objects.

As a result of the action of primary rays on objects, secondary rays arise. Some of them end up on other objects. So, repeatedly reflected and refracted, individual light rays come to the point of observation. Thus, the image of the scene is formed by some set of light rays.

The color of the individual points of the image is determined by the spectrum and intensity of the primary rays of the radiation sources, as well as the absorption of light energy in objects encountered in the path of the corresponding rays.

Direct implementation of this ray model of image formation is difficult. One could try to build an image construction algorithm this way: enumerate all primary rays and keep those that hit objects and the camera; then iterate over all secondary rays, again keeping only those that hit objects and the camera; and so on. This algorithm is called forward (direct) ray tracing. Its main disadvantage is the large number of wasted operations spent computing rays that are never used.

1. REVERSE RAY TRACING

This work is devoted to this method of generating realistic images.

The method of reverse ray tracing can significantly reduce the search for light rays. The method was developed in the 80s, the works of Whitted and Kay are considered fundamental. According to this method, ray tracing is performed not from light sources, but in the opposite direction - from the point of observation. In this way, only those rays that contribute to the formation of the image are taken into account.

The projection plane is divided into many pixels. Choose a central projection with the center of projection at some distance from the projection plane. Draw a straight line from the center of projection through the middle of a pixel of the projection plane. This is the primary reverse tracing ray. If this ray hits one or more scene objects, select the nearest intersection point. To determine the color of the image pixel, the properties of the object must be taken into account, as well as what light radiation falls on the corresponding point of the object.
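A small sketch of how such a primary ray might be built is shown below, assuming a camera placed at the center of projection and looking along +Z; the function and parameter names are illustrative assumptions, not part of the coursework's code.

    #include <cmath>
    #include <cstdio>

    // Primary reverse tracing ray: it starts at the center of projection and passes
    // through the middle of pixel (i, j) of the projection plane.
    struct Ray { double ox, oy, oz;    // origin: the center of projection
                 double dx, dy, dz; }; // normalized direction

    Ray primaryRay(int i, int j, int width, int height,
                   double planeWidth, double planeHeight, double planeDist) {
        // Middle of the pixel in the projection-plane coordinate system.
        double px = ((i + 0.5) / width - 0.5) * planeWidth;
        double py = (0.5 - (j + 0.5) / height) * planeHeight;   // j grows downward
        double len = std::sqrt(px * px + py * py + planeDist * planeDist);
        return { 0.0, 0.0, 0.0, px / len, py / len, planeDist / len };
    }

    int main() {
        // Center pixel of a 400x400 image on a 2x2 plane one unit away.
        Ray r = primaryRay(200, 200, 400, 400, 2.0, 2.0, 1.0);
        std::printf("dir = (%f, %f, %f)\n", r.dx, r.dy, r.dz);
        return 0;
    }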

If the object is specular (at least partially), then we build a secondary ray - an incidence ray, considering the previous, primary traced ray as a reflection ray.

For an ideal mirror, it is then sufficient to trace only the next point of intersection of the secondary beam with some object. An ideal mirror has a perfectly smooth polished surface, so one reflected beam corresponds to only one incident beam. The mirror can be darkened, that is, absorb part of the light energy, but the rule still remains: one beam falls - one is reflected.

If the object is transparent, then it is necessary to build a new ray, one that, when refracted, would give the previous traced ray.

For diffuse reflection, the intensity of the reflected light is known to be proportional to the cosine of the angle between the ray vector from the light source and the normal.

When it turns out that the current back tracing ray does not intersect any object, but goes into free space, then the tracing for this ray ends.

In the practical implementation of the back tracing method, restrictions are introduced. Some of them are necessary in order to be able to solve the problem of image synthesis in principle, and some restrictions can significantly increase the speed of tracing.

Restrictions when implementing tracing

Among all types of objects, we will single out some, which we will call light sources. Light sources can only emit light, they cannot reflect or refract it. We will consider only point light sources.

The properties of reflective surfaces are described by the sum of two components - diffuse and specular.

In turn, specularity is also described by two components. The first (reflection) accounts for reflection from other objects that are not light sources; only one specularly reflected ray r is built for further tracing. The second component (specular) covers highlights from light sources: rays are cast toward all light sources, and the angles formed by these rays with the specularly reflected reverse tracing ray (r) are determined. In specular reflection, the color of a surface point is determined by the color of what is being reflected.

With diffuse reflection, only rays from light sources are taken into account. Rays from specularly reflective surfaces are IGNORED. If a beam directed at a given light source is covered by another object, then this point of the object is in shadow. In diffuse reflection, the color of an illuminated point on a surface is determined by the intrinsic color of the surface and the color of the light sources.

For transparent objects, the dependence of the refractive index on the wavelength is not taken into account. (Sometimes transparency is modeled without refraction at all, that is, the direction of the refracted ray t coincides with the direction of the incident ray.)

To take into account the illumination of objects by light scattered by other objects, the background component (ambient) is introduced.

To complete the trace, a limitation on the number of iterations (recursion depth) is introduced.

Conclusions on the back trace method

Advantages:

The universality of the method and its applicability to synthesizing images of rather complex spatial scenes. It embodies many laws of geometric optics, and various projections are implemented simply.

Even truncated versions of this method allow quite realistic images to be obtained. For example, limiting the tracing to primary rays only (from the projection point) amounts to removing hidden points. Tracing just one or two levels of secondary rays already gives shadows, specularity and transparency.

All coordinate transformations are linear, so it's easy to work with textures.

Disadvantages:

Problems with modeling diffuse reflection and refraction.

For each point of the image, many computational operations must be performed. Tracing is one of the slowest image synthesis algorithms.

2. DESIGN PART

Algorithms.

Reverse ray tracing.

Fig. 1 - Flowchart of the recursive reverse ray tracing algorithm


In this program, the reverse tracing algorithm is implemented recursively. The function that calculates the intensity of the primary ray calls itself recursively to find the intensities of the reflected and refracted rays.

Algorithm:

To calculate the color of each pixel in the framebuffer, the following steps are performed:

1. Find the coordinates of the pixel in the world coordinate system.

2. Find the coordinates of the primary ray.

3. Start the function that calculates the intensity of the primary ray.

4. Find the intersections of the ray with all the primitives in the scene and select the nearest one.

5. If no intersection is found, the ray has gone off into free space. To calculate the color, take the total intensity equal to the background intensity and go to step 12. If an intersection is found, go to step 6.

6. Calculate the "local" color intensity of the object to which the intersection point belongs. By "local" intensity is meant the intensity taking into account the diffusely reflected light and the highlights.

7. If the material reflects light only diffusely, take the intensities of the reflected and refracted light to be zero and go to step 12. Otherwise go to step 8.

8. If the maximum recursion depth has been reached, take the intensities of the reflected and refracted light to be zero and go to step 12. Otherwise go to step 9.

9. Calculate the vector of the reflected ray and run the recursion to find the intensity of the reflected ray.

10. Calculate the vector of the refracted ray and run the recursion to find the intensity of the refracted ray.

11. Calculate the total color intensity. The total intensity includes the scattered (background) light intensity, the local intensity, and the intensities of the reflected and refracted rays.

12. Return to the point where the ray intensity calculation function was called.

13. If the primary ray was being calculated, a pixel of the calculated color is placed in the frame buffer, and we proceed to the next pixel of the frame buffer. If a reflected (refracted) ray was being calculated, the calculated intensity is taken as the intensity of the reflected (refracted) ray at the previous recursion step.
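The outer, per-pixel part of this algorithm (steps 1-3 and 13) might look roughly as follows in C++. This is only a sketch: traceIntensity is stubbed out with the background value, a real implementation would perform steps 4-12 recursively as described above, and all names are assumptions, not the coursework's actual code.

    #include <vector>
    #include <cstdio>

    const int W = 400, H = 400;

    struct RGB { unsigned char r, g, b; };

    // Stub for steps 4-12: a real version would trace the primary ray recursively.
    float traceIntensity(float px, float py) {
        return 0.1f;                      // background intensity only
    }

    int main() {
        std::vector<RGB> framebuffer(W * H);
        for (int j = 0; j < H; ++j) {
            for (int i = 0; i < W; ++i) {
                // Steps 1-2: pixel position in the image plane (world mapping omitted).
                float px = (i + 0.5f) / W - 0.5f;
                float py = (j + 0.5f) / H - 0.5f;
                // Step 3: compute the intensity of the primary ray.
                float I = traceIntensity(px, py);
                if (I > 1.0f) I = 1.0f;   // overexposed points are clamped to one
                // Step 13: store the calculated color in the frame buffer.
                unsigned char v = static_cast<unsigned char>(I * 255.0f);
                framebuffer[j * W + i] = { v, v, v };
            }
        }
        std::printf("rendered %d pixels\n", W * H);
        return 0;
    }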

Building shadows.

Solid shadows.

To construct solid shadows in the tracing algorithm, at the stage of calculating the "local" color intensity at the point of the object, the "visibility" of each light source from this point is checked.

The principle of the algorithm.

A ray directed to the light source is constructed from the tested point.

A search is made for intersections of this ray with the scene primitives between the point being checked and the source.

If at least one intersection is found, the point being checked is in shadow, and when its color is calculated the light source being tested is not taken into account.

This method of finding shadows gives an acceptable result as long as there are no transparent objects in the scene. However, the solid black shadow of the glass does not look realistic. Glass partially transmits light, so the intensity of the obscured source must be taken into account when calculating the intensity of light at the point of the object, but it must be attenuated when light passes through the glass.
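A sketch of this check for a scene made entirely of spheres is shown below; the names and the sphere-only scene are assumptions made for the example, not the coursework's code.

    #include <cmath>
    #include <vector>

    struct Vec3 { double x, y, z; };
    static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    struct Sphere { Vec3 center; double radius; };

    // Distance along the normalized ray (origin, dir) to the nearest intersection
    // with the sphere, or a negative value if there is none.
    static double hit(const Sphere& s, Vec3 origin, Vec3 dir) {
        Vec3 oc = sub(origin, s.center);
        double b = 2.0 * dot(oc, dir);
        double c = dot(oc, oc) - s.radius * s.radius;
        double disc = b * b - 4.0 * c;
        if (disc < 0.0) return -1.0;
        double t0 = (-b - std::sqrt(disc)) / 2.0;
        double t1 = (-b + std::sqrt(disc)) / 2.0;
        return (t0 > 1e-6) ? t0 : t1;
    }

    // Solid-shadow test: point p is in shadow with respect to lightPos if any
    // primitive lies on the segment between p and the light source.
    bool inShadow(Vec3 p, Vec3 lightPos, const std::vector<Sphere>& scene) {
        Vec3 toLight = sub(lightPos, p);
        double distToLight = std::sqrt(dot(toLight, toLight));
        Vec3 dir = { toLight.x / distToLight, toLight.y / distToLight, toLight.z / distToLight };
        for (const Sphere& s : scene) {
            double t = hit(s, p, dir);
            if (t > 1e-6 && t < distToLight) return true;   // occluder between point and light
        }
        return false;
    }

    int main() {
        std::vector<Sphere> scene = { {{0, 0, 5}, 1.0} };
        bool s = inShadow({0, 0, 7}, {0, 0, -5}, scene);    // the sphere blocks the light
        return s ? 0 : 1;
    }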

Mathematical and physical background of the back ray tracing algorithm.

Lighting.

The light intensity at a point is the sum of the ambient (background) intensity of the scene, the intensity of diffusely reflected light from the sources, the intensity of highlights from the sources (the "local" illumination components), the intensity of the specularly reflected ray, and the intensity of the refracted ray, if any.

The ambient (background) intensity (IA) is given by some constant.

The intensity of diffusely reflected light (ID) is calculated according to the classical "cosine law".

ID = IL · cos α, (2.2.1.6)

where IL is the intensity of the light source, α is the angle between the normal to the surface and the direction to the source.

If the scene is lit by several sources, ID is calculated for each of them and the results are summed:

ID = Σ ILi · cos αi. (2.2.1.7)

The highlight intensity from a source (IS) is calculated according to the Phong model:

IS = IL · cos^p β, (2.2.1.8)

where IL is the intensity of the light source (0 ≤ IL ≤ 1), β is the angle between the ray reflected from the light source and the direction to the point where the camera (the center of projection) is located, and p is an exponent from 1 to 200 that controls how blurred the highlight is. At low values of p, the highlight is more blurred.

As with the calculation of ID, in the case of illumination of the scene by several sources, IS is calculated separately for each source, and then the results are summed.

IS = Σ ILi · cos^p βi. (2.2.1.9)

The intensities of the specularly reflected (IR) and refracted (IT) light are calculated for the reflected and refracted rays at the next step of the recursion. If the recursion depth limit has been reached, these intensities are taken to be zero. A fraction r of IR is taken, and a fraction t = 1 - r of IT (see the previous section).

In addition, the following coefficients are introduced: KD, the diffuse reflection coefficient of the surface, and KS, the highlight coefficient. KD characterizes the roughness of the reflecting surface: the rougher the surface, the less light it reflects specularly and the less light it transmits, and accordingly the more light it reflects diffusely. 0 ≤ KD ≤ 1.

At KD = 0 all light falling on the surface is reflected specularly or refracted; at KD = 1 all light is reflected diffusely. The intensities of the diffusely reflected light and of the ambient light are multiplied by this coefficient, while the intensities of the specularly reflected and refracted light are multiplied by (1 - KD). The coefficient KS is responsible for the brightness of the highlight from the source, 0 ≤ KS ≤ 1.

At KS = 0 the highlight is not visible; at KS = 1 the brightness of the highlight is maximal.

Thus, the final formula for calculating the intensity of an object at some point will look like this:

I = IA·KD + Σ(ILi·KD·cos αi + ILi·KS·cos^p βi) + (1 - KD)·(IR·r + IT·(1 - r)). (2.2.1.10)

It should be noted that the final intensity must not exceed one. If it does, that point of the image is overexposed, and its intensity must be clamped to one.
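A sketch of formula (2.2.1.10) for a single color component is given below. The cosines are assumed to be precomputed from the geometry (normal, light direction, reflected ray, camera direction) and clamped at zero, and IR and IT would come from the recursive calls described earlier; apart from the symbols taken from the formula, all names are assumptions.

    #include <cmath>
    #include <vector>
    #include <cstdio>

    // One color component of formula (2.2.1.10):
    //   I = IA*KD + sum(ILi*KD*cos(alpha_i) + ILi*KS*cos^p(beta_i)) + (1-KD)*(IR*r + IT*(1-r))
    struct LightTerm { double IL, cosAlpha, cosBeta; };

    double shade(double IA, double KD, double KS, double p,
                 const std::vector<LightTerm>& lights,
                 double IR, double IT, double r) {
        double I = IA * KD;                                   // ambient (background) term
        for (const LightTerm& L : lights)
            I += L.IL * KD * L.cosAlpha + L.IL * KS * std::pow(L.cosBeta, p);
        I += (1.0 - KD) * (IR * r + IT * (1.0 - r));          // reflected and refracted terms
        return I > 1.0 ? 1.0 : I;                             // overexposure is clamped to one
    }

    int main() {
        std::vector<LightTerm> lights = { {1.0, 0.7, 0.3} };
        double I = shade(0.2, 0.6, 0.8, 30.0, lights, 0.05, 0.0, 1.0);
        std::printf("I = %f\n", I);
        return 0;
    }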

To obtain a color image, it is necessary to carry out calculations separately for the red, green and blue light components. The color of an image pixel will be calculated by multiplying each intensity component by a number that determines the maximum number of image intensity gradations. For a 32-bit image, it is equal to 255 for each of the colors (R, G, B).

R = 255·IR, G = 255·IG, B = 255·IB.

Here IR (not to be confused with the intensity of specularly reflected light), IG, IB are the intensities of the three light components at a point, obtained by the formula above.

The coefficients KD, KS, p are the individual characteristics of the object, reflecting its properties. In addition, there is one more coefficient - the absolute refractive index n. n = c / v, where c is the speed of light in vacuum, v is the speed of light in the medium (inside the object). For absolutely opaque bodies, this coefficient is equal to ∞ (because the speed of light inside the body is zero). In the program, to specify an absolutely opaque body, you must set this coefficient >> 1 (about 10,000). In this case, the proportion of specularly reflected light r will tend to unity, and refracted, respectively, to zero.

Calculation of normals.

In the tracing algorithm, normals to objects are needed to calculate the reflected and refracted rays, as well as to determine the illumination according to the Phong model.

In this program, there are three types of primitives from which the scene is built. These are a polygon (triangle), an ellipsoid and a paraboloid. The last two are introduced for a more realistic imitation of a glass (it could also be built from polygons, but the model would have turned out to be more rough).

Calculation of the normal to a polygon (triangle).

The calculation of the normal to a triangle is reduced to the operation of vector multiplication. Let triangle ABC be given by the coordinates of its three vertices:

XA, YA, ZA, XB, YB, ZB, XC, YC, ZC.

Let's calculate the coordinates of two vectors, for example AB and AC:

XAB = XB - XA, YAB = YB - YA, ZAB = ZB - ZA, (2.2.2.1)

XAC = XC - XA, YAC = YC - YA, ZAC = ZC - ZA.

The normal vector coordinates will be calculated using the formulas:

Xn = YAB·ZAC - YAC·ZAB,
Yn = ZAB·XAC - ZAC·XAB, (2.2.2.2)
Zn = XAB·YAC - XAC·YAB.

There is no need to calculate the coordinates of the normal vector to the triangle every time in the trace body, since the normals are the same at any point of the triangle. It is enough to count them once in the initializing part of the program and save them. When you rotate a triangle, you must also rotate its normal.
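For illustration, here is a sketch of this calculation in C++ (formulas 2.2.2.1-2.2.2.2); the struct and function names are assumptions, not taken from the coursework.

    #include <cstdio>

    struct Vec3 { double x, y, z; };

    // Normal of triangle ABC as the cross product AB x AC.
    // The result is not normalized here; that can be done once at scene load.
    Vec3 triangleNormal(Vec3 A, Vec3 B, Vec3 C) {
        Vec3 AB = { B.x - A.x, B.y - A.y, B.z - A.z };
        Vec3 AC = { C.x - A.x, C.y - A.y, C.z - A.z };
        return { AB.y * AC.z - AC.y * AB.z,
                 AB.z * AC.x - AC.z * AB.x,
                 AB.x * AC.y - AC.x * AB.y };
    }

    int main() {
        Vec3 n = triangleNormal({0, 0, 0}, {1, 0, 0}, {0, 1, 0});
        std::printf("n = (%f, %f, %f)\n", n.x, n.y, n.z);   // expected (0, 0, 1)
        return 0;
    }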

Calculation of the normal to the surface of the second order.

The surface of the second order is given in the general case by an equation of the form:

Q(x, y, z) = a1·x² + a2·y² + a3·z² + b1·yz + b2·xz + b3·xy + c1·x + c2·y + c3·z + d = 0.

But we will use a different form of notation. So the ellipsoid equation will look like this:

(x - x0)²/A² + (y - y0)²/B² + (z - z0)²/C² = 1, (2.2.2.3)

where x0, y0, z0 are the coordinates of the center of the ellipsoid, A, B, C are the lengths of the semiaxes of the ellipsoid.

Paraboloid equation:

(x - x0)²/A² + (y - y0)²/B² - (z - z0)²/C² = 1, (2.2.2.4)

where x0, y0, z0 are the coordinates of the center of the paraboloid, A, B, C are the lengths of the semi-axes of the paraboloid. The axis of the paraboloid is located along the Oz axis of the world coordinate system. To calculate the coordinates of the normal vector, it is necessary to calculate the partial derivatives with respect to x, y, z.

Ellipsoid normal vector coordinates:

Xn = 2(x - x0)/A², Yn = 2(y - y0)/B², Zn = 2(z - z0)/C².

The direction of a vector will not change if all its coordinates are divided by 2:

Xn = (x - x0)/A², Yn = (y - y0)/B², (2.2.2.5)

Zn = (z - z0)/C².

The coordinates of the normal vector of a paraboloid are calculated similarly:

Xn = (x - x0)/A², Yn = (y - y0)/B², (2.2.2.6)

Zn = -(z - z0)/C².

The normal for the second-order surface will have to be calculated directly in the trace body, since the normals are different at different points of the figure.
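A sketch of these two normals in C++ follows (formulas 2.2.2.5 and 2.2.2.6); the names are assumptions, and the vectors are returned unnormalized, exactly as in the formulas.

    #include <cstdio>

    struct Vec3 { double x, y, z; };

    // Normal to the ellipsoid (x-x0)^2/A^2 + (y-y0)^2/B^2 + (z-z0)^2/C^2 = 1
    // at a surface point p, formula (2.2.2.5); not normalized.
    Vec3 ellipsoidNormal(Vec3 p, Vec3 c, double A, double B, double C) {
        return { (p.x - c.x) / (A * A),
                 (p.y - c.y) / (B * B),
                 (p.z - c.z) / (C * C) };
    }

    // Normal to the second quadric of the text, formula (2.2.2.6): the z term enters
    // equation (2.2.2.4) with a minus sign, so the z component of the gradient is negated.
    Vec3 quadricNormal(Vec3 p, Vec3 c, double A, double B, double C) {
        return { (p.x - c.x) / (A * A),
                 (p.y - c.y) / (B * B),
                -(p.z - c.z) / (C * C) };
    }

    int main() {
        Vec3 n = ellipsoidNormal({1, 0, 0}, {0, 0, 0}, 1, 2, 3);
        std::printf("n = (%f, %f, %f)\n", n.x, n.y, n.z);   // expected (1, 0, 0)
        return 0;
    }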

Calculation of the reflected beam.

Let the vector of the incident ray S be given, and the normal vector N be known. It is required to find the vector of the reflected ray R.

Consider unit vectors R1, S1 and N1. Since the normal, the incident ray and the reflected ray lie in the same plane, we can write R1 + S1 = N', where N' is the vector corresponding to the diagonal of the rhombus and coinciding in direction with the normal. The length of the vector N' is 2cos θ. Since N' coincides in direction with N1,

N' = N1 · 2cos θ.

From here we find the unit vector of the reflected beam:

R1 = N1·2cos θ - S1 = (N/|N|)·2cos θ - S/|S|.

Let us find cos θ. This can be done using the scalar product of the vectors N and S:

cos θ = (N·S)/(|N|·|S|).
Assuming that the required vector of the reflected beam will have the same length as the vector of the incident beam, that is, R = |S| R1, we get

R = N·2(N·S)/|N|² - S.

This solution is in vector form. Let's write the coordinates of the vector:

xR = 2xN·(xN·xS + yN·yS + zN·zS)/(xN² + yN² + zN²) - xS,
yR = 2yN·(xN·xS + yN·yS + zN·zS)/(xN² + yN² + zN²) - yS, (2.2.3.1)
zR = 2zN·(xN·xS + yN·yS + zN·zS)/(xN² + yN² + zN²) - zS.
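A direct transcription of formula (2.2.3.1) into C++ might look like this; following the derivation above, S points from the hit point back along the incident ray (away from the surface), and all names are assumptions.

    #include <cstdio>

    struct Vec3 { double x, y, z; };

    // Reflected-ray vector by formula (2.2.3.1): R = 2N(N.S)/|N|^2 - S.
    // S points away from the surface, back toward the origin of the incident ray.
    Vec3 reflect(Vec3 S, Vec3 N) {
        double k = 2.0 * (N.x * S.x + N.y * S.y + N.z * S.z)
                       / (N.x * N.x + N.y * N.y + N.z * N.z);
        return { k * N.x - S.x, k * N.y - S.y, k * N.z - S.z };
    }

    int main() {
        // Ray arriving at 45 degrees in the xz plane; the normal points along +z.
        Vec3 R = reflect({1, 0, 1}, {0, 0, 1});
        std::printf("R = (%f, %f, %f)\n", R.x, R.y, R.z);   // expected (-1, 0, 1)
        return 0;
    }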

Calculation of the refracted beam.

Let two unit vectors be given: S1 is the vector of the incident beam, and N1 is the vector of the normal to the interface between two media. Two refractive indices for these media must also be known - n1 and n2 (or their ratio).

It is required to find the unit vector of the refracted ray T1. For the solution, we perform some geometric constructions.

The desired vector T1 is equal to the sum of two vectors: T1 = NT + B.

Let us first find the vector NT. It is opposite in direction to the normal vector, and its length is equal to |T1| cos α2 = cos α2 (because T1 is unit). Thus, NT = -N1 cos α2. It is necessary to determine cos α2. Let us write the law of refraction n1 sin α1 = n2 sin α2 in the form:

sinα2 = n sinα1,

where n = n1 / n2.

Let us use the identity cos²α + sin²α = 1. Then

cos α2 = √(1 - sin²α2) = √(1 - n²·sin²α1),

cos α2 = √(1 + n²·(cos²α1 - 1)).

The value of cos α1 can be expressed in terms of the scalar product of the unit vectors S1 and N1, that is, cos α1 = S1N1. Then we can write the following expression for the NT vector:

NT = -N1·√(1 + n²·((S1·N1)² - 1)).

It remains to find an expression for the vector B. It is located on the same line with the vector A, and A = S1 - NS. Considering that NS is equal to N1 cos α1, then A = S1 - N1 cos α1. Since cos α1 = S1N1, then A = S1 - N1 (S1N1).

Since the length of vector A is equal to sin α1, and the length of vector B is equal to sin α2, then

|B|/|A| = sin α2 / sin α1 = n1/n2 = n,

whence |B| = n |A|. Taking into account the relative position of vectors A and B, we obtain

B = n·(N1·(S1·N1) - S1).

Now we can write the desired expression for the unit vector of the refraction ray T1:

T1 = n·N1·(S1·N1) - n·S1 - N1·√(1 + n²·((S1·N1)² - 1)). (2.2.4.1)
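Formula (2.2.4.1) can be transcribed into C++ as sketched below. S1 and N1 are unit vectors, S1 points away from the surface as in the derivation, n = n1/n2, and a negative value under the root corresponds to total internal reflection; the function and variable names are assumptions.

    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };
    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Refracted-ray vector by formula (2.2.4.1). Returns false on total internal reflection.
    bool refract(Vec3 S1, Vec3 N1, double n, Vec3& T1) {
        double c1 = dot(S1, N1);                       // cos(alpha1)
        double under = 1.0 + n * n * (c1 * c1 - 1.0);  // cos^2(alpha2)
        if (under < 0.0) return false;                 // total internal reflection
        double c2 = std::sqrt(under);
        T1 = { n * N1.x * c1 - n * S1.x - N1.x * c2,
               n * N1.y * c1 - n * S1.y - N1.y * c2,
               n * N1.z * c1 - n * S1.z - N1.z * c2 };
        return true;
    }

    int main() {
        Vec3 T;
        // Normal incidence, air into glass (n = 1.0 / 1.5): the ray goes straight through.
        if (refract({0, 0, 1}, {0, 0, 1}, 1.0 / 1.5, T))
            std::printf("T = (%f, %f, %f)\n", T.x, T.y, T.z);   // expected (0, 0, -1)
        return 0;
    }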

Calculation of the point of intersection with primitives.

In the tracing algorithm, to build an image, it is necessary to calculate the intersection points of the rays with the scene primitives. The beam is given by a parametric equation of a straight line. Any point of the beam satisfies the equation

R = A + V·t, (2.2.5.1)

where R is the radius vector of an arbitrary point belonging to the ray, A is the radius vector of the initial point of the ray, V is the direction vector of the ray, t is a parameter.

If the direction vector V is normalized, then the parameter t will be numerically equal to the distance from the starting point of the beam A to the point R.

You can write this equation in coordinate form:

x = x1 + a·t, y = y1 + b·t, z = z1 + c·t. (2.2.5.2)

Here x1, y1, z1 are the coordinates of the initial point of the beam in the rectangular Cartesian world coordinate system, a,b,c are the coordinates of the direction vector of the beam.

Calculation of the point of intersection of a ray with a surface of the second order.

To find the intersection point of a ray given by equations (2.2.5.2) with a second-order surface given by equation (2.2.2.3) or (2.2.2.4):

(x - x0)²/A² + (y - y0)²/B² + (z - z0)²/C² = 1 (ellipsoid),

(x - x0)²/A² + (y - y0)²/B² - (z - z0)²/C² = 1 (paraboloid),

the corresponding ray equations must be substituted for x, y and z in the second-order surface equation. After expanding all the brackets and collecting like terms, we obtain a quadratic equation in the parameter t. If the discriminant of the quadratic equation is less than zero, the ray and the second-order surface have no common points. Otherwise two values of the parameter t can be calculated. The discriminant can also be equal to zero; this corresponds to the limiting case of a ray touching the surface, and we obtain two coinciding values of the parameter t.

To find the coordinates of the points of intersection of the ray and the surface, it is sufficient to substitute the found values of the parameter t into the ray equations (2.2.5.2).

In the program, when two intersections are found, the nearer one is selected for visualization. The nearest intersection is determined by comparing the found parameters t: the intersection corresponding to the smaller t is closer to the observation point. It should be noted that as a result of solving the quadratic equation, one or both values of the parameter t may turn out to be negative. This means that the intersection point lies "behind" the ray's origin, on the half of the line that is on our side of the picture plane. Such points are discarded when searching for an intersection.

In addition, in the program for each figure, the upper and lower cutting planes are introduced. Only the part of the figure lying between them is displayed.

To do this, after finding the intersection point, its z-coordinate is analyzed.
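A sketch of this substitution for the ellipsoid (2.2.2.3), with the nearest positive root selected, is shown below; the cutting-plane check is omitted, and the function name and parameter layout are assumptions.

    #include <cmath>

    // Intersection of the ray x = x1 + a*t, y = y1 + b*t, z = z1 + c*t (2.2.5.2)
    // with the ellipsoid (x-x0)^2/A^2 + (y-y0)^2/B^2 + (z-z0)^2/C^2 = 1 (2.2.2.3).
    // Returns the smallest positive t, or a negative value if there is no intersection.
    double intersectEllipsoid(double x1, double y1, double z1,     // ray origin
                              double a, double b, double c,        // ray direction
                              double x0, double y0, double z0,     // ellipsoid center
                              double A, double B, double C) {
        double dx = x1 - x0, dy = y1 - y0, dz = z1 - z0;
        // Coefficients of qa*t^2 + qb*t + qc = 0 after substitution and collecting terms.
        double qa = (a * a) / (A * A) + (b * b) / (B * B) + (c * c) / (C * C);
        double qb = 2.0 * (a * dx / (A * A) + b * dy / (B * B) + c * dz / (C * C));
        double qc = (dx * dx) / (A * A) + (dy * dy) / (B * B) + (dz * dz) / (C * C) - 1.0;
        double disc = qb * qb - 4.0 * qa * qc;
        if (disc < 0.0) return -1.0;                  // the ray misses the surface
        double s = std::sqrt(disc);
        double t0 = (-qb - s) / (2.0 * qa);
        double t1 = (-qb + s) / (2.0 * qa);
        if (t0 > 1e-6) return t0;                     // nearest intersection in front of the origin
        if (t1 > 1e-6) return t1;
        return -1.0;                                  // both points lie "behind" the ray origin
    }

    int main() {
        double t = intersectEllipsoid(0, 0, 0,  0, 0, 1,  0, 0, 5,  1, 1, 1);
        return t > 0.0 ? 0 : 1;                       // expects t = 4 for this unit sphere
    }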

Calculation of the intersection point of a ray with a polygon (triangle).

To calculate the point of intersection of the ray given by equations (2.2.5.2) with a polygon (triangle), you must first determine the point of intersection of this ray with the plane containing the triangle.

The plane equation looks like this:

Q(x, y, z) = Ax + By + Cz + D = 0. (2.2.5.3)

Here the coefficients A, B, C coincide with the coordinates of the normal to this plane. The coordinates of the normal of the plane coincide with the coordinates of the normal of the triangle, which we calculated at the stage of loading the scene.

To find the free term D, it is necessary to substitute the coordinates of any point of the triangle, for example, one of the vertices.

D = -Ax - By - Cz. (2.2.5.4)

The D value will not change during the program execution, so it is advisable to calculate it when the scene is initialized and store it, like the normal coordinates. It is necessary to recalculate it only when the position of the triangle changes.

Now, to find the intersection point, we substitute the ray equations (2.2.5.2) into the plane equation:

A(x1 + a·t) + B(y1 + b·t) + C(z1 + c·t) + D = 0,

whence

t = -(A·x1 + B·y1 + C·z1 + D) / (A·a + B·b + C·c). (2.2.5.5)

If the denominator of this fraction is zero, then the ray is parallel to the plane in which the triangle lies. There is no intersection point.

To find the coordinates of the intersection point, it is necessary to substitute the found value of the parameter t into the ray equations (2.2.5.2). Let's call the intersection point D. We obtain the coordinates xD, yD, zD.

Now we need to determine if point D is inside the triangle. Find the coordinates of the vectors AB, BC, CA (A, B, C are the vertices of the triangle) and the coordinates of the vectors AD, BD, CD. Then we find three cross products:

nA = AB × AD,
nB = BC × BD, (2.2.5.6)
nC = CA × CD.

These vectors will be collinear. If all three vectors are codirectional, then point D lies inside the triangle. The co-direction is determined by the equality of the signs of the corresponding coordinates of all three vectors.

The check whether point D belongs to triangle ABC can be accelerated. If we orthogonally project triangle ABC and point D onto one of the planes xOy, yOz or xOz, then the projection of the point falling inside the projection of the triangle means that the point itself lies inside the triangle (provided, of course, that point D is already known to lie in the plane containing triangle ABC). The number of operations is significantly reduced: to find the coordinates of all the vectors, only two coordinates per vector are needed, and when computing the vector products only one coordinate has to be found (the rest are equal to zero).

To check the co-direction of the vectors obtained when calculating the cross product, you need to check the signs of this single coordinate for all three vectors. If all signs are greater than zero, or less than zero, then the vectors are codirectional. The equality to zero of one of the vector products corresponds to the case when the point D falls on a straight line containing one of the sides of the triangle.

In addition, a simple dimensional test can be performed before calculating vectors and cross products. If the projection of the point D lies to the right, to the left, above or below each of the projections of the vertices of the triangle, then it cannot lie inside.

It remains to add that for the projection it is better to choose the coordinate plane onto which the triangle's projected area is largest. Under this condition, the case of the triangle projecting into a segment is excluded (provided the triangle itself is not degenerate into a segment), and as the projection area increases, the probability of error decreases. To determine such a projection plane, it is enough to examine the three coordinates of the triangle's normal: if the z-coordinate of the normal is greater in absolute value than x and y, project onto the xOy plane; if y is greater than x and z, project onto xOz; otherwise, onto yOz.
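Putting the plane intersection (2.2.5.5) and the cross-product inside test (2.2.5.6) together gives the following C++ sketch. The projection shortcut described above is omitted for clarity, and the names are assumptions (the intersection point is called P here to avoid clashing with the plane coefficient D).

    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };
    static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3 cross(Vec3 a, Vec3 b) {
        return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    }

    // Ray/triangle intersection: first the plane (2.2.5.3)-(2.2.5.5), then the
    // "same side" test with cross products (2.2.5.6). Returns true and t on a hit.
    bool intersectTriangle(Vec3 O, Vec3 V,                 // ray origin and direction
                           Vec3 A, Vec3 B, Vec3 C, double& t) {
        Vec3 n = cross(sub(B, A), sub(C, A));              // plane normal = triangle normal
        double denom = dot(n, V);
        if (std::fabs(denom) < 1e-12) return false;        // ray parallel to the plane
        double D = -dot(n, A);                             // free term of the plane equation
        t = -(dot(n, O) + D) / denom;                      // formula (2.2.5.5)
        if (t < 1e-6) return false;                        // intersection behind the ray origin
        Vec3 P = { O.x + V.x * t, O.y + V.y * t, O.z + V.z * t };
        Vec3 nA = cross(sub(B, A), sub(P, A));
        Vec3 nB = cross(sub(C, B), sub(P, B));
        Vec3 nC = cross(sub(A, C), sub(P, C));
        // P is inside if all three cross products point the same way as the normal.
        return dot(nA, n) >= 0.0 && dot(nB, n) >= 0.0 && dot(nC, n) >= 0.0;
    }

    int main() {
        double t;
        bool hit = intersectTriangle({0.2, 0.2, -1}, {0, 0, 1},
                                     {0, 0, 0}, {1, 0, 0}, {0, 1, 0}, t);
        std::printf("hit = %d, t = %f\n", hit, t);          // expected: hit = 1, t = 1
        return 0;
    }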

Description of data types. Program structure.

Description of program modules

List of modules (header files; each describes the corresponding structure or class):

description of the TTex structure;
description of the TPlaneTex and TEllipsoidTex structures;
description of the TPoint2d and TPoint3d structures;
description of the TRGBColor structure;
description of the TLamp class;
description of the TCam class;
description of the TPrimitive class;
description of the TFrstSurface class;
description of the TScndSurface class;
description of the TTriangle class;
description of the TEllipsoid class;
description of the TCylinder class;
description of the THyperboloidVert class;
description of the THyperboloidHor class;
description of the TScene class;
description of the TTracer class.

Implementing modules, program interface:

Options.h - module of the "Options" form

ExtraCamOptions.h - module of the "Camera properties" form

MainUnit.h - module of the main form of the program

A brief description of the structures and classes of the program:

TPoint3d - a structure describing a point in the world coordinate system.
TPoint2d - a structure describing a point on a plane (in a texture) with integer coordinates.
TRGBColor - a structure describing a color by its three components (RGB).
TTex - a structure describing a texture; contains the address of the pixel array and its dimensions.
TPlaneTex - a structure describing the binding of a texture to a plane; contains the three points to which the texture is attached.
TLamp - a class describing a light source; contains a TPoint3d coord object with the source coordinates and three float variables (Ir, Ig, Ib) storing the intensities of the three light components.
TCam - a class describing the camera; contains two angles (a, b) specifying the camera's viewing direction, the point the camera is aimed at (viewP), and the distance from the camera to that point (r).
TPrimitive - an abstract primitive class; first- and second-order surfaces inherit from it.
TFrstSurface - an abstract first-order surface class; the triangle class inherits from it.
TScndSurface - an abstract second-order surface class; the ellipsoid and paraboloid classes inherit from it.
TTriangle - the triangle class; contains the triangle's three vertices and its normal.
TCylinder - the cylinder class.
THyperboloidVert - the class of a one-sheeted hyperboloid lying along the oZ axis.
THyperboloidHor - the class of a one-sheeted hyperboloid lying along the oX axis.
TEllipsoid - the ellipsoid class.
TScene - the scene class; contains information about all primitives, light sources, and the camera.
TTracer - the class responsible for rendering the image; contains a 400x400-pixel buffer in which the scene image is formed. Before generation, the corresponding function must be called with a pointer to the scene to be rendered; to generate the image, the render function is called.

All classes descended from TPrimitive provide the following functions: getT(TPoint3d p0, TPoint3d viewDir) - returns the distance from the ray's starting point (p0) along viewDir to the nearest point of intersection with the primitive.

void getTArr(float* arr, int& n, TPoint3d p0, TPoint3d viewDir) - fills the arr array with the distances from the ray's starting point (p0) along viewDir to the points of intersection with the primitive.

void getNormal(TPoint3d& n, const TPoint3d& p) - returns the coordinates of the normal vector to the primitive at point p.

void getColor(TRGBColor& c, const TPoint3d& p) - returns the color of the primitive point p (including texture).
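For illustration, such an interface might be expressed in modern C++ roughly as follows. Only the method names (getT, getNormal, getColor) follow the coursework; everything else, including the sphere implementation, is an assumption made for the example.

    #include <cmath>
    #include <memory>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Color { float r, g, b; };

    // A sketch of the primitive interface described in the text.
    class Primitive {
    public:
        virtual ~Primitive() = default;
        // Distance from p0 along viewDir to the nearest intersection (negative if none).
        virtual float getT(const Vec3& p0, const Vec3& viewDir) const = 0;
        virtual void getNormal(Vec3& n, const Vec3& p) const = 0;
        virtual void getColor(Color& c, const Vec3& p) const = 0;
    };

    class Sphere : public Primitive {
    public:
        Sphere(Vec3 c, float r, Color col) : center(c), radius(r), color(col) {}
        float getT(const Vec3& p0, const Vec3& viewDir) const override {
            // viewDir is assumed normalized, so the quadratic coefficient is 1.
            Vec3 oc { p0.x - center.x, p0.y - center.y, p0.z - center.z };
            float b = 2.0f * (oc.x * viewDir.x + oc.y * viewDir.y + oc.z * viewDir.z);
            float cc = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - radius * radius;
            float disc = b * b - 4.0f * cc;
            if (disc < 0.0f) return -1.0f;
            float t0 = (-b - std::sqrt(disc)) * 0.5f;
            return t0 > 1e-4f ? t0 : (-b + std::sqrt(disc)) * 0.5f;
        }
        void getNormal(Vec3& n, const Vec3& p) const override {
            n = { (p.x - center.x) / radius, (p.y - center.y) / radius, (p.z - center.z) / radius };
        }
        void getColor(Color& c, const Vec3&) const override { c = color; }
    private:
        Vec3 center; float radius; Color color;
    };

    int main() {
        std::vector<std::unique_ptr<Primitive>> scene;
        scene.push_back(std::make_unique<Sphere>(Vec3{0, 0, 5}, 1.0f, Color{1, 0, 0}));
        float t = scene[0]->getT({0, 0, 0}, {0, 0, 1});   // expects t = 4
        return t > 0.0f ? 0 : 1;
    }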

3. TECHNOLOGICAL PART

Choice of programming language.

When developing the program, the high-level programming language C++ was used within the CodeGear RAD Studio for Windows visual programming environment.

This language was chosen because it provides convenient means for working with memory and allows algorithms to be implemented more efficiently than many other high-level languages. Programs written in C++ run fast and take up little disk space.

In addition, the CodeGear RAD Studio for Windows visual programming environment provides a large number of standard visual components for building the interface, as well as a number of libraries with commonly used functions. The author of this work also has the most programming experience in this environment.

Options form. Lighting tab.

This tab contains tools for setting up scene lighting.

Source coordinates - coordinates in the world coordinate system of the light source selected in the drop-down list.

Source intensity - the values ​​of the three components of the intensity of the light source selected in the drop-down list.

Background intensity - the values ​​of the three components of the background intensity.

Button “+” (next to the drop-down list) - adding a new light source.

Button “-” (next to the drop-down list) - deletes the light source selected in the drop-down list.

Options form. Camera tab.

This tab contains tools for setting camera options.

Preview - here you can see an approximate view of the image before it is generated.

Navigation - camera position settings.

Advanced - when you click on this button, the Camera properties form with advanced camera options appears.

Camera properties form.

Radius - the distance from the camera to the point at which it is directed.

Radius change step - increment of the camera radius by pressing the “-” button once on the “Camera” tab of the “Options” form (or decreasing it by pressing the “+” button once).

Options form. Materials tab.

This tab displays the parameters of the material of the table on which the scene stands.

Color - the color of the table material.

Coef. diffuse reflection - coefficient Kd of the table material (see section 2.2.1).

Texture - if the checkbox is checked, then the texture will be displayed on the table

Select texture - select an image file (*.bmp) that will be used as the table texture.

Advanced - when this button is clicked, the Table properties form appears with additional parameters of the table material.

Table properties form.

The glare coefficient is the KS coefficient of the table material (see section 2.2.1).

Glare blur is an exponent p of the table material.

Texture repeats - how many times the table texture will repeat along the OX and OY axes.

Options form. System tab.

On this tab, you can configure the algorithms implemented in the program.

Recursion depth - this parameter sets the recursion depth in the tracing algorithm. Larger values ​​of this parameter improve the quality of the generated image.

ATTENTION!

The depth of recursion has a STRONG effect on the speed of image generation. It is not recommended to set this parameter to more than 10.

Antialiasing - enables the image anti-aliasing (smoothing) algorithm.

Shadow type - selection of the shadow construction algorithm.

4. RESEARCH PART

The studies were carried out on a computer with the following configuration:

CPU: Intel Core 2 Duo T5850; RAM: 2048 MB DDR2; GPU: Nvidia GeForce 9300M 256 MB; OS: Windows 7.

4.1 Dependence of generation time on recursion depth

In this test, the dependence of the image generation time on the recursion depth was investigated. The measurements were carried out for a scene illuminated by a single light source; the recorded values were the generation time without shadows (in seconds) and the generation time with solid shadows (in seconds) for each recursion depth.


4.2 Dependence of the generation time on the number of sources


4.3 Analysis of research results

From the first study, it can be seen that the generation time grows strongly with the number of levels of recursion. This is in good agreement with the theory, since the number of rays grows with the recursion depth.

It should be noted that for scenes with a small number of polygons, there is no need to set large values ​​for the maximum recursion depth, because the difference in the quality of the generated image will be insignificant.

The second study shows that the dependence of the generation time on the number of light sources is linear. From the obtained values, you can calculate the time required to calculate one source. On the machine on which the research was conducted, with a recursion depth of 5, this time is approximately 0.5 seconds.

CONCLUSION

This program demonstrates the results of reverse ray tracing, an algorithm for generating realistic images.

This implementation demonstrates the algorithm's ability to build images close to photorealistic. Tracing is one of the most advanced algorithms for generating realistic images; the quality of the resulting image is incomparably better than that obtained with algorithms such as the Z-buffer. However, the computing power required to generate a single frame is much higher than with the same Z-buffer. To date, real-time reverse ray tracing is used only for research purposes on extremely powerful computers inaccessible to the ordinary user. There are, of course, enthusiasts who create 3D games and other real-time graphics applications based on reverse ray tracing, but as a rule they run at an extremely low FPS, or every object in the scene is based on a sphere, the easiest surface to trace rays against. For this algorithm to become viable in mass-market projects such as 3D games, a significant breakthrough in desktop computer hardware is required.

Even using computer games as an example, it is easy to see how excessive the reverse ray tracing algorithm is. A player absorbed in the gameplay is unlikely to stop and admire geometrically correct shadows and reflections of game objects. In this respect, approximate rendering with polygons still wins by a wide margin today: it does not require a powerful computer, and the results are close to reality.

It is also considered that the ray tracing algorithm is ideal for images of artificial objects with geometrically simple shapes, such as cars, planes and buildings. Generating objects such as a human face, animal fur or a forest is an extremely difficult task for the algorithm and raises the hardware requirements considerably.

However, even today you can see research on the implementation of the real-time reverse ray tracing algorithm. As a rule, in such projects, a car is used as a scene. But the absolute photorealism of the image has already been achieved, and besides, it takes very little time to generate a single frame. Of course, these projects are implemented on super-powerful computers, but the day is not far off when such 3D applications will become available to the average user.
