Ray tracing vs rasterization: the next generation of graphics quality? What is ray tracing and do we need it in games?

Forward tracing. In the forward tracing method, a bundle of rays is generated that leaves the light source in every possible direction.

Most of the rays emitted by the source never reach the receiver and therefore do not contribute to the image it forms. Only a very small fraction of the rays, after all their reflections and refractions, eventually reach the receiver and create the image of the scene in its receptors. Rough surfaces spawn many diffusely reflected rays, all of which have to be generated and tracked programmatically, which greatly complicates the tracing task.

The passage of a ray through a non-ideal medium is accompanied by scattering and absorption of light energy by its microparticles. These physical processes are extremely difficult to model adequately on a computer with its finite resources. In practice, one settles for an attenuation coefficient describing the loss of ray energy per unit of distance traveled. Similar coefficients are introduced for the loss of energy when a ray is reflected or refracted at the boundary between media. With these coefficients, the decrease in energy of every primary and secondary ray is tracked as it wanders through the scene. As soon as a ray's energy falls below a specified absolute level, or has decreased by a specified factor, tracing of that ray stops.
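
To make this bookkeeping concrete, here is a minimal C++ sketch (our own illustration, not code from any real tracer; the coefficient values are invented):

```cpp
#include <cmath>

// Hypothetical illustration of the bookkeeping described above: a ray's energy
// is attenuated per unit of distance travelled in the medium and on every
// reflection or refraction; tracing stops once the energy falls below an
// absolute threshold.
constexpr float kAttenuationPerUnit = 0.05f;  // attenuation per unit distance
constexpr float kReflectionLoss     = 0.8f;   // fraction of energy kept on reflection
constexpr float kRefractionLoss     = 0.9f;   // fraction of energy kept on refraction
constexpr float kMinEnergy          = 0.01f;  // absolute termination threshold

// Remaining energy after the ray travels `distance` through the medium and
// undergoes one interface event (reflection or refraction).
float attenuate(float energy, float distance, bool reflected) {
    energy *= std::exp(-kAttenuationPerUnit * distance);   // absorption and scattering
    energy *= reflected ? kReflectionLoss : kRefractionLoss;
    return energy;
}

// Once this returns false, the ray is no longer traced.
bool keepTracing(float energy) { return energy >= kMinEnergy; }
```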

Thus, the main disadvantages of the forward tracing method are its high computational cost and low efficiency: when the method is implemented, most of the work spent computing ray-object intersections is wasted.

Backward tracing. The backward tracing method was developed in the 1980s; the works of Whitted and Kay are considered fundamental.

To cut off the rays that never reach the receiver, it is enough to treat the observer as the source of backward rays. The primary ray is the ray V from the observer to a point on the surface of an object.

Using the methods discussed above, the secondary, tertiary, and further rays are computed. As a result, a tracing tree is built for each primary ray, whose branches are the secondary rays. A branch of the tree is terminated if:

● the ray leaves the scene,

● the ray hits an opaque body that absorbs the light,

● the ray hits a light source,

● the ray's intensity drops below the sensitivity threshold,

● the number of splits of the primary ray becomes too large for the available machine resources.

The resulting light energy (color and intensity) entering the receiver from direction V is composed of the energies of the terminal vertices of this tree, taking into account the losses incurred while propagating through the optical media.
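
To make the structure of this tree concrete, here is a rough C++ sketch (our own illustration, not code prescribed by the method; the scene-query helpers intersectScene, reflectedRay and refractedRay are assumed to exist elsewhere). Each call corresponds to one branch of the tree, and each termination condition from the list above appears as an early return:

```cpp
struct Color { float r = 0, g = 0, b = 0; };
struct Vec3  { float x = 0, y = 0, z = 0; };
struct Ray   { Vec3 origin, dir; };

struct Hit {
    Vec3  point, normal;
    bool  isLight = false;       // the ray reached a light source
    float reflectivity = 0.0f;   // share of energy sent into the mirror branch
    float transparency = 0.0f;   // share of energy sent into the refracted branch
    Color emission;              // light emitted if the hit object is a source
    Color surfaceColor;          // the surface's own color
};

constexpr int   kMaxDepth  = 5;      // limit on the number of splits
constexpr float kMinWeight = 0.01f;  // intensity (sensitivity) threshold

// Assumed helpers (hypothetical signatures).
bool  intersectScene(const Ray& ray, Hit* hit);
Ray   reflectedRay(const Ray& ray, const Hit& hit);
Ray   refractedRay(const Ray& ray, const Hit& hit);
Color shadeLocal(const Hit& hit, const Ray& ray);   // see the shading sketch further below

Color operator+(Color a, Color b) { return {a.r + b.r, a.g + b.g, a.b + b.b}; }
Color operator*(float k, Color c) { return {k * c.r, k * c.g, k * c.b}; }

Color trace(const Ray& ray, float weight, int depth) {
    if (depth > kMaxDepth || weight < kMinWeight) return {};  // resources / threshold
    Hit hit;
    if (!intersectScene(ray, &hit)) return {};                // the ray left the scene
    if (hit.isLight) return hit.emission;                     // the ray hit a source

    Color c = shadeLocal(hit, ray);                           // direct (local) contribution
    if (hit.reflectivity > 0.0f)                              // secondary mirror ray
        c = c + hit.reflectivity *
                trace(reflectedRay(ray, hit), weight * hit.reflectivity, depth + 1);
    if (hit.transparency > 0.0f)                              // secondary refracted ray
        c = c + hit.transparency *
                trace(refractedRay(ray, hit), weight * hit.transparency, depth + 1);
    return c;  // the received energy is the weighted sum over the tree's leaves
}
```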


The backward tracing method thus accumulates all the rays that actually arrive at the receiver from a given direction, regardless of their origin. This makes it possible to see and display on the screen:

● opaque objects that absorb the backward rays;

● transparent objects through which other objects are visible to the observer due to refraction;

● reflections of objects on mirror surfaces, including highlights produced when backward rays hit a light source;

● shadows formed at surface points hidden from a source by other objects;

● various other optical effects.

The number of "probing" backward rays that must be traced is limited by the number of surface points of scene objects visible from the observer's position, sampled with a finite step that depends on the screen resolution. As a result, the computational cost of backward tracing is significantly lower than that of forward tracing. The two methods can also be combined to optimize the algorithms and reduce their complexity.

A tracing algorithm is a recursive procedure that calls itself whenever a secondary ray appears (when the analyzed ray is reflected or refracted). Most of the computation in tracing methods goes into finding the intersections of rays with surfaces, which is why such methods have mainly been used to depict optical effects in scenes with a small number of objects.

When the backward tracing method is implemented in practice, the following restrictions are introduced. Some of them are necessary to make the image synthesis problem solvable at all, while others can significantly improve the performance of tracing.

Limitations of the backward tracing method:

1. Among all object types, light sources are singled out: they can only emit light, not reflect or refract it. Typically, point sources are considered.

2. The properties of reflective surfaces are described by the sum of two components: diffuse and specular.

3. Specularity, in turn, is also described by two components. The first (reflection) accounts for reflection from other objects that are not light sources; only one specularly reflected ray r is built for further tracing. The second (specular) component covers highlights from light sources: rays are cast toward every source, and the angles they form with the specularly reflected backward-tracing ray r are determined. In specular reflection, the color of a surface point is determined by the color of what is being reflected; in the simplest case, a mirror has no surface color of its own. (A code sketch after this list illustrates these components.)

4. With diffuse reflection, only rays from light sources are taken into account; rays arriving from specularly reflective surfaces are ignored. If the ray directed at a given light source is blocked by another object, that point of the object is in shadow. With diffuse reflection, the color of an illuminated surface point is determined by the surface's own color and the color of the light sources.

5. For transparent objects, the dependence of the refractive index on wavelength is usually ignored. Sometimes transparency is modeled without refraction at all, i.e. the direction of the refracted ray t coincides with the direction of the incident ray.

6. To take into account the illumination of objects by light scattered by other objects, a background component (ambient) is introduced.

7. To terminate the tracing, either a threshold illumination level is introduced below which a ray no longer contributes to the resulting color, or the number of iterations (the recursion depth) is limited.
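
Continuing the earlier trace() sketch (again our own illustration rather than anything mandated by the method), a Whitted-style shadeLocal() that follows the restrictions above might look like this; the vector helpers and the occluded() shadow-ray query are assumed to exist:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct PointLight { Vec3 position; float intensity; };   // restriction 1: point sources

// Assumed helpers (hypothetical signatures).
Vec3  normalize(const Vec3& v);
Vec3  sub(const Vec3& a, const Vec3& b);
float dot(const Vec3& a, const Vec3& b);
Vec3  mirror(const Vec3& incident, const Vec3& normal);   // specular direction r
bool  occluded(const Vec3& from, const Vec3& to);          // is the shadow ray blocked?

constexpr float kAmbient   = 0.1f;   // restriction 6: background (ambient) term
constexpr float kShininess = 32.0f;  // sharpness of the highlight

extern std::vector<PointLight> gLights;

Color shadeLocal(const Hit& hit, const Ray& ray) {
    Color result = kAmbient * hit.surfaceColor;              // ambient component
    Vec3  r = mirror(ray.dir, hit.normal);                   // single mirror ray r
    for (const PointLight& light : gLights) {
        if (occluded(hit.point, light.position)) continue;   // restriction 4: in shadow
        Vec3  l    = normalize(sub(light.position, hit.point));
        float diff = std::max(0.0f, dot(l, hit.normal));      // diffuse: own surface color
        float spec = std::pow(std::max(0.0f, dot(l, r)), kShininess); // glare: angle with r
        result = result + (diff * light.intensity) * hit.surfaceColor
                        + (spec * light.intensity) * Color{1.0f, 1.0f, 1.0f};
    }
    return result;
}
```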

Positive features of the backward tracing method:

● versatility: it is applicable to synthesizing images of fairly complex spatial scenes, embodies many laws of optics, and easily accommodates various projections;

● even truncated versions of the method produce fairly realistic images. For example, limiting it to primary rays only (from the projection point) amounts to hidden-surface removal, while tracing just one or two levels of secondary rays adds shadows, specular reflections, and transparency;

● all coordinate transformations (if any) are linear, so it’s quite easy to work with textures;

● for each pixel of a raster image, several closely spaced rays can be traced and their colors averaged to suppress aliasing (see the sketch after this list);

● since each image point is computed independently of the others, the method maps well onto parallel computing systems, where many rays can be traced simultaneously.
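
As a sketch of the last two points (our own illustration, reusing the trace() function from the earlier sketch and assuming a hypothetical generateCameraRay() that maps normalized screen coordinates to a primary ray):

```cpp
#include <random>

Ray generateCameraRay(float u, float v);   // hypothetical pinhole camera, u/v in [0,1)

// Several jittered rays per pixel, averaged to suppress aliasing. Every pixel
// is independent, so a loop over pixels parallelizes trivially across cores.
Color renderPixel(int x, int y, int width, int height, int samples) {
    static thread_local std::mt19937 rng{12345};
    std::uniform_real_distribution<float> jitter(0.0f, 1.0f);
    Color sum;
    for (int s = 0; s < samples; ++s) {
        float u = (x + jitter(rng)) / float(width);    // jittered position inside the pixel
        float v = (y + jitter(rng)) / float(height);
        sum = sum + trace(generateCameraRay(u, v), /*weight=*/1.0f, /*depth=*/0);
    }
    return (1.0f / samples) * sum;                     // averaged color of the pixel
}
```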

Disadvantages of the backward tracing method:

● problems with modeling diffuse reflection and refraction;

● for each point in the image it is necessary to perform many computational operations. Ray tracing is one of the slowest image synthesis algorithms.

Ray tracing and rasterization - what's the difference?

We are not sure that all our readers know or remember what ray tracing is, how the different rendering methods differ in principle, and what their advantages and disadvantages are. So first of all, let's try to explain it very briefly, without complex mathematics and more or less clearly. Before moving on to ray tracing, we need to recall the basics of the classic rasterization algorithm with a Z-buffer.

In the rasterization method, now universal in real-time graphics, each object is drawn by projecting the geometric primitives that make it up (polygons, most often triangles) onto the screen plane. The triangles are drawn pixel by pixel using a depth buffer, which stores the distance to the screen plane and ensures that the triangles closest to the camera cover those farther away during rendering.

In addition to the vertices and the polygons connecting them, the geometry carries color, texture coordinates, and the normals needed to determine which side of each surface is front-facing. The pixel color is then determined by complex calculations in vertex and pixel shaders, and effects such as shadows are rendered in additional passes, but still using rasterization.
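
The heart of this process is the per-pixel depth test. A simplified C++ illustration (ours, not real driver or GPU code) looks like this:

```cpp
#include <cstdint>
#include <limits>
#include <vector>

// A simplified illustration of the Z-buffer test at the core of rasterization:
// a fragment produced for pixel (x, y) at depth z is written only if it is
// closer to the camera than what is already stored for that pixel.
struct Framebuffer {
    int width, height;
    std::vector<float>    depth;  // distance to the screen plane per pixel
    std::vector<uint32_t> color;  // packed RGBA per pixel

    Framebuffer(int w, int h)
        : width(w), height(h),
          depth(size_t(w) * h, std::numeric_limits<float>::infinity()),
          color(size_t(w) * h, 0) {}

    // Called for every pixel covered by every rasterized triangle.
    void writeFragment(int x, int y, float z, uint32_t rgba) {
        size_t i = size_t(y) * width + x;
        if (z < depth[i]) {       // nearer triangles overwrite farther ones
            depth[i] = z;
            color[i] = rgba;
        }
    }
};
```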

The shading process consists of calculating the amount of light reaching a pixel, taking into account the one or more textures applied to it, which together determine its final color. All of this requires a great deal of computation: scenes in modern games contain several million polygons, high-resolution screens contain several million pixels, and the image must be updated at a rate of at least 30 frames per second, preferably 60 FPS. Not to mention virtual reality headsets, where images for both eyes must be drawn simultaneously at 90 FPS.

But since GPUs operate at very high clock speeds and have a large number of hardware units specialized in certain calculations, and rasterization lends itself very well to parallelization, there are no particular problems with rendering performance, and the vast majority of 3D computer games use rasterization. In reality, things are a little more complicated, since many additional optimizations are used in order to avoid drawing a lot of invisible triangles, but this is the essence of rasterization in general.

Throughout the development of GPUs, an enormous amount of effort has gone into discarding invisible geometry and reducing the computational load: first objects outside the view frustum were culled, then objects occluded by others closer to the camera, and so on. These rasterization optimizations are quite effective; in modern games invisible objects consume almost no computing resources, significantly reducing the amount of work required to rasterize the scene. You will see later why we bring up invisible objects.

To calculate global illumination, draw shadows and other effects, you have to use cunning hacks based on the same rasterization. As a result, over the years, GPUs have become very sophisticated, learning to speed up geometry processing in vertex shaders, render pixels efficiently using pixel shaders, and even use universal compute shaders to calculate physics, post-effects, and many other calculations. But the basis of the GPU operation remained the same all the time.

Ray tracing is built on a completely different idea, one that is in theory almost simpler. Tracing simulates the propagation of light rays through a 3D scene. It can be performed in two directions: from the light sources, or backward from each pixel; in either case, several bounces off scene objects are usually followed toward the camera or toward the light source, respectively. Computing rays from each pixel is less computationally demanding, while casting rays from the light sources gives higher rendering quality.

Backward tracing was first described in 1968 by IBM's Arthur Appel in the paper "Some Techniques for Shading Machine Renderings of Solids"; the technique computes the path of a light ray for every pixel on the screen, depending on the 3D models in the scene. About ten years later came another breakthrough, when researcher Turner Whitted (now at Nvidia Research, by the way) published the paper "An Improved Illumination Model for Shaded Display", which showed how to compute shadows, reflection, and refraction during tracing.

A couple of other papers in the 1980s further laid out the fundamentals of ray tracing for computer graphics and led to a revolution in synthetic imagery in the film industry. In 1984, several Lucasfilm employees described how to use ray tracing to create effects such as motion blur, depth of field, soft shadows, and blurred reflections and refractions. A couple of years later, Caltech professor Jim Kajiya, in his paper "The Rendering Equation", described a more precise way of modeling how light scatters in a scene. Since then, ray tracing has been used practically everywhere in the film industry.

So, in the common method of backward (inverse) ray tracing, an imaginary ray is cast for each screen pixel from the camera toward an object in the scene. This simulates a ray of light arriving at the camera from a light source along that direction, and the first intersection with an object is used to determine the pixel's color. Primary rays establish the visibility of objects (playing the role of the Z-buffer in rasterization); to determine the color, secondary rays are then cast from the intersection point toward the light sources (if such a ray is blocked by another object, that source does not contribute to the pixel's illumination), and together these secondary rays determine the light falling on the pixel.


But the most interesting part comes next: to approach photorealism, you have to take the properties of materials into account, namely how much light they reflect and refract, and to compute the pixel's color you need to cast further reflection and refraction rays. They are not shown in the figure above, but you can picture them as rays reflected off the surface of the ball and refracted by it. This improved ray tracing algorithm was invented several decades ago, and these additions were a big step toward more realistic synthetic images. By now the method has acquired many modifications, but they are always based on finding the intersections of light rays with scene objects.

The first practical experiments with real-time ray tracing began quite a long time ago; similar work appeared regularly at the well-known SIGGRAPH conference. Demonstrations of real-time tracing date back to the late 1980s and reached a few frames per second using highly optimized techniques and shared-memory multiprocessor systems. Since then, many projects have appeared aimed at speeding up tracing, including on a single PC.

Not to mention the numerous 3D engine enthusiasts of the demoscene in the late 90s and beyond, who were inspired by the possibilities and fundamental simplicity of the method and contributed many useful optimizations to ray tracing. At one time we published a whole series of articles on our website devoted to one such software ray tracing engine, a very specialized one with many serious limitations that prevented serious game projects from being built on it.

Hardware manufacturers did not lag behind either, and for a long time showed experimental prototypes of tracing accelerators and demo programs optimized for them at trade shows. In June 2008, Intel showed a special version of the game Enemy Territory: Quake Wars (Quake Wars: Ray Traced) that used ray tracing when rendering at a resolution of 1280x720 at 15-30 frames per second, which already counts as real time. That demonstration did not use hardware accelerators; it ran on 16 Xeon cores at 3 GHz.

The Intel project demonstrated the benefits of ray-traced rendering, showing realistic water, shadows of objects seen through transparent surfaces, and reflections. It later evolved into the Wolfenstein: Ray Traced project, and enthusiasts regularly take engines from the Quake series to add tracing: thanks to modders, Quake 2 gained realistic reflections, albeit marred by very strong noise and extremely high system requirements.

Imagination Technologies also showed prototypes of hardware tracing accelerators for several years (from 2012 to 2016) and even offers an open ray tracing API, OpenRL. It was claimed that the company's hardware accelerator could run Autodesk Maya and provide real-time ray tracing. However, the company had neither the funds to successfully promote hardware acceleration of ray tracing nor enough weight in the graphics market to be its locomotive. And the demo programs were, frankly, not the most impressive, although they did show some of the advantages of tracing.

Nvidia did much better, announcing its OptiX technology for real-time ray tracing on its GPUs back at SIGGRAPH 2009. The new API opened up access to ray tracing in professional applications with the necessary flexibility, in particular bidirectional path tracing and other algorithms.

Renderers based on OptiX already exist for a lot of professional software, such as Adobe After Effects, Bunkspeed Shot, Autodesk Maya, 3ds Max and other applications, and are used by professionals in their work. Calling this real-time rendering requires certain reservations, because at high frame rates the result was a very noisy picture. Only a few years later did the industry come close to hardware-accelerated ray tracing in games.

Pros and cons of ray tracing

The ray tracing rendering technique is highly realistic compared to rasterization, as it simulates the propagation of light rays very similar to how they occur in reality (of course, still not 100% accurate). Tracing can produce highly realistic shadows, reflections and refractions of light, and as such has long been valued in architectural applications and industrial design. The technology helps specialists in this field, long before physical implementation, understand how materials will look under different lighting conditions in the real world.

The obvious advantages of tracing can also include the fact that the computational complexity of the method depends little on the geometric complexity of the scene, and the calculations are perfectly parallelized - you can easily and independently trace several rays at the same time, dividing the screen surface into zones for tracing them on different computing cores. It is also very useful that cutting off invisible surfaces is a logical consequence of the algorithm.

More important still, the method simulates the real propagation of light rays and produces a final image of higher quality than rasterization. Rasterization has obvious shortcomings: for example, an object not included in the rendered scene is not processed on the GPU at all, yet it may cast a visible shadow or should appear in a reflective surface (a mirror), and the rasterization optimizations have discarded it. Not to mention that this invisible object can strongly affect the global illumination of the scene by reflecting light onto visible surfaces. These problems are partially solved; in particular, shadow maps allow shadows to be drawn from objects not visible in the frame, but the result is still far from ideal. The problem is fundamental, because rasterization works completely differently from human vision.

Effects such as reflections, refractions and shadows, which are quite difficult to implement well with rasterization, emerge naturally from the ray tracing algorithm. Take reflections: this is just one of the areas where ray tracing is noticeably better than rasterization. In modern games, reflections are usually simulated with environment maps (static or dynamic) or screen-space reflections, which give a good approximation in most cases but still have very serious limitations; in particular, they are unsuitable for closely located objects.

Screen-space reflections make it possible to obtain reflections that are more or less plausible, subject to certain restrictions, with hardware acceleration on the GPU via rasterization. With ray tracing, reflections are simply rendered correctly, without additional complex algorithms. Another important advantage of tracing is the ability to reflect parts of an object onto itself (so that, for example, the handle or spout of a teapot is reflected on its body), which is much harder to do with rasterization.

Another example of a clear advantage of ray tracing is the rendering of transparent objects. Using rasterization, it is very difficult to simulate transparency effects, since its calculation depends on the rendering order and for this you have to pre-sort transparent polygons, and even then visual artifacts may appear. Several hacks have been invented to bypass polygon sorting, but this all results in complications of the method and additional difficulties. But the ray tracing algorithm itself allows you to draw any transparency effects with perfect quality.

Finally, the last example (for starters) is shadow rendering. Rasterization mostly relies on shadow maps (shadow mapping), which are themselves built by rasterization, just rendered from a different point in the scene and with different parameters. The silhouettes of objects are rendered from the light source into a separate buffer, and the contents of that buffer are filtered and applied to the surfaces where the shadow should fall. These methods have several problems, including the jagged edges everyone has seen in games, as well as increased video memory consumption. Ray tracing solves the shadow problem automatically, without extra algorithms and memory; and where a rasterization hack in any case produces a physically incorrect shadow, a soft shadow drawn by ray tracing will be realistic.
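
For reference, the core of a shadow-map test is just a depth comparison. A conceptual C++ sketch (our own illustration, not shader code from any engine):

```cpp
#include <vector>

// The scene depth as seen from the light was rendered earlier into `depth`;
// a surface point is lit only if it is not farther from the light than the
// stored occluder depth (plus a small bias that hides self-shadowing acne).
struct ShadowMap {
    int width, height;
    std::vector<float> depth;     // depth from the light's point of view
};

bool isLit(const ShadowMap& map, float lightSpaceX, float lightSpaceY,
           float distanceToLight, float bias = 0.005f) {
    int x = int(lightSpaceX * map.width);
    int y = int(lightSpaceY * map.height);
    if (x < 0 || y < 0 || x >= map.width || y >= map.height)
        return true;              // outside the map: assume lit
    float occluder = map.depth[size_t(y) * map.width + x];
    return distanceToLight - bias <= occluder;   // otherwise the point is in shadow
}
```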

But ray tracing also has a drawback; just one, but a very important one: rendering everything described above is several times harder computationally. Low performance on existing hardware is the main disadvantage of tracing, and for a long time it outweighed all of its advantages. Finding the intersections of rays with scene objects does not accelerate as easily as the relatively simple operations of triangle rasterization, for which specialized 3D accelerators have existed for many years. That is why real-time graphics still relies on rasterization, which draws a picture quickly and, while somewhat inferior in quality to full tracing, is still quite realistic.

When tracing, thousands of rays need to be computed for each light source, most of which barely affect the final image, so the ray tracing algorithm needs additional optimizations and new hardware capable of accelerating the corresponding operations. Moreover, tracing by itself does not guarantee photorealism: with simple algorithms the result will be good but still not realistic enough, and to fully simulate reality you need additional techniques such as photon mapping and path tracing, which model the propagation of light in the world more accurately.

On the other hand, since the ray tracing algorithm is well parallelized, it can be solved by the simplest technical method - increasing the number of computing cores of the (graphics) processor, the number of which increases every year. At the same time, a linear increase in performance during tracing is ensured. And given the apparent lack of optimization in both hardware and software for ray tracing on GPUs right now, hardware ray tracing capabilities can potentially grow rapidly.

But here smaller problems arise. Rendering only primary rays is not too demanding by itself, but it also gives no noticeable quality improvement over classic rasterization with its clever hacks. Secondary rays are much harder to compute because they lack coherence, a common direction: for each pixel, completely new data has to be fetched, which caches handle poorly, and caching is important for high speed. Computing secondary rays is therefore highly dependent on memory latency, which is barely decreasing, unlike memory bandwidth, which is growing at a rapid pace.

Although ray tracing seems to be a fairly simple and elegant method that can be implemented in just a few lines of code, a naive implementation is completely unoptimized, and high-performance ray tracing code is extremely difficult to write. If with rasterization the algorithm runs fast but clever tricks are needed for complex visual effects, ray tracing can draw them all out of the box but forces you to optimize the code very carefully so that it runs fast enough for real time.

There are many methods for speeding up tracing: the most productive ray tracing algorithms process rays not one at a time but in packets, which speeds up the handling of rays travelling in similar directions. Such optimizations run very well on modern SIMD CPUs and GPUs and are effective for primary rays and shadow rays, but they are still poorly suited to refraction and reflection rays. Therefore, the number of rays computed for each pixel of the scene has to be seriously limited, and the increased "noise" of the image has to be removed with special filtering.
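
A minimal sketch of the packet idea (our own illustration, with an invented packet size and a single sphere as the primitive; ray directions are assumed to be normalized):

```cpp
#include <cmath>

// A packet of 8 rays stored structure-of-arrays, so a SIMD unit can test all
// of them against the same primitive in lockstep.
constexpr int kPacketSize = 8;

struct RayPacket {
    float ox[kPacketSize], oy[kPacketSize], oz[kPacketSize]; // origins
    float dx[kPacketSize], dy[kPacketSize], dz[kPacketSize]; // normalized directions
    float tMax[kPacketSize];                                  // closest hit so far
    bool  active[kPacketSize];                                // lane mask
};

// Test the whole packet against one sphere. Every lane executes the same
// arithmetic, so an auto-vectorizer (or explicit intrinsics) can turn this
// loop into a handful of SIMD instructions.
void intersectSphere(RayPacket& p, float cx, float cy, float cz, float r) {
    for (int i = 0; i < kPacketSize; ++i) {
        if (!p.active[i]) continue;
        float lx = cx - p.ox[i], ly = cy - p.oy[i], lz = cz - p.oz[i];
        float tca = lx * p.dx[i] + ly * p.dy[i] + lz * p.dz[i];
        float d2  = lx * lx + ly * ly + lz * lz - tca * tca;
        if (d2 > r * r) continue;                 // the ray misses the sphere
        float thc = std::sqrt(r * r - d2);
        float t   = tca - thc;                    // nearest intersection distance
        if (t > 0.0f && t < p.tMax[i]) p.tMax[i] = t;
    }
}
```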

In addition, ray tracing requires a suitable data structure to store scene elements, and this can have a performance impact. Some structures are better suited for static data, others for dynamically changing ones. So ray tracing only at a superficial glance seems to be a simple and extremely elegant method, but to get the desired performance from it you will have to do a lot of optimization work - no less than simulating complex effects during rasterization. And this work has just begun, in fact.

Several issues need to be addressed before ray tracing becomes a viable alternative to rasterization for games. Right now it appears that the benefits of tracing do not outweigh the significant performance cost of using it. Yes, tracing has very important advantages in the form of realistic reflections, shadows and handling of transparent objects, which are hard to achieve with rasterization, but are there enough such objects in games for their lack of realism to become a serious problem? On the one hand, most objects in the world reflect light; on the other hand, games have proven that our eyes and brain are content with a mere approximation of realism. In most modern games, reflections on objects, although not completely photorealistic, are often enough to fool our brain.

Yes, ray tracing can deliver better quality than rasterization, but at what cost? If you strive for complete realism, then full tracing with many rays computed for lighting and reflections, combined with techniques such as radiosity and photon mapping, becomes extremely demanding of computing power. Often even offline renderers that do not work in real time resort to simplifications. Of course, at some point enough computing power will become available for tracing to gain an advantage over rasterization, including in performance, but for now we are still very far from that point.

Even with offline rendering for the film industry, as computing power increases, rendering times do not decrease over time, as the appetites of artists grow even faster! And even leading companies in the production of animated films, like Pixar, try to optimize the rendering process by using ray tracing only for some of the effects - precisely because of the significant impact on performance. So you need to understand that the days of full-fledged tracing for the entire scene in real-time games are still very far away. And for full-fledged real-time rendering using ray tracing in games, the computing power is definitely not enough yet. This is a long way to go even with the development of GPUs that is still ongoing.

In any case, ray tracing is the physically correct approach capable of solving many problems, large and small, of the existing one. The hacks and tricks currently used in rasterization can achieve good results, but they can hardly be called a universal and ideal method of rendering 3D graphics. Soon enough, in the pursuit of realism, real-time 3D developers will hit the limits of rasterization and will have to switch to a method with an advanced lighting model resembling what happens in reality. Most likely, that will be ray tracing. But since ray tracing is very expensive, and even the most powerful systems cannot yet handle it in full, we should initially count on hybrid rendering methods that combine rasterization performance with ray tracing quality.

Hybrid rendering for transition

Due to the demanding nature of ray tracing, even with a small number of rays to be calculated for each pixel, this method is unlikely to be used exclusively and will not yet replace rasterization. But there is an option to mix the two methods. For example, the underlying geometry can be rasterized with high performance, and then only soft shadows and reflections can be rendered using ray tracing. Although rasterization will continue to play a critical role in the coming years with the advent of hybrid rendering, the share of ray tracing algorithms in such engines will gradually increase based on the increasing computing capabilities of future GPUs.

This approach has long been used in Pixar's animated films, even though they would seem to have no strict limits on rendering time. It is nevertheless easier and faster to render geometry with the micropolygons of the Reyes rendering system and to use tracing only where specific effects are required. Almost all Pixar films previously used micropolygons and rasterization; ray tracing was added to the RenderMan rendering engine later, for "Cars", where it was used selectively, to compute ambient occlusion and render reflections.

But in reality, hybrid solutions are not so simple, because for effective ray tracing you need to organize the data structure in a special way to reduce the number of checks for the intersection of rays with scene objects. Therefore, even with hybrid rendering, you will have to create an optimized data structure. And on the performance side, a big issue is the memory access associated with the secondary rays that are needed for hybrid rendering. It turns out that when combining two rendering methods, many of their disadvantages are combined, in particular, the simplicity of the ray tracing method and high rasterization performance are lost.

But when the advantages still outweigh, such a hybrid approach makes sense. A combination of some rasterization and tracing capabilities is already available, including hardware-accelerated GPU lightmap preparation, rendering of dynamic lightmaps and partial shadows, rendering of reflections and translucent objects with refraction. This is already a great achievement, since this approach has been available only for offline rendering for many years. Back in the late 90s, hybrid rendering was used in animated films to improve efficiency, and now it is becoming available for real-time applications.


But this is just the beginning before the coming “Golden Era” of real-time rendering. In the future, this hybrid approach will evolve into something more, and instead of selective effects, it will be possible to use full-fledged techniques with advanced lighting, shading and various complex effects.

In much the same way, offline rendering went from "A Bug's Life" to far more complex animated films like "Coco", which already uses full path tracing with tens or even hundreds of rays computed per pixel. Unlike in previous years, there are no shadow maps or separate lighting passes, only full tracing. This is what game developers are striving for; their road will simply be a little longer, but the goal is the same.


And before the transition from rasterization to full tracing occurs, you need to use hybrid rendering and change your development approach in many ways. For example, outsource some of the work on preliminary preparation and “baking” some data in the GPU, remake your production pipeline and prepare rendering engines for the fact that an increasing part of the calculations will gradually switch to tracing. And partial benefits of tracing can be used now, albeit with an extremely small number of rays per pixel and with mandatory noise reduction.


Even with a gradual transition to tracing, there is no reason to abandon optimizations that are not specific to rasterization. High-level optimizations such as level of detail (LOD), occlusion culling, tiling, and streaming will work just as well with ray tracing. And until the industry moves to full tracing, effective screen-space techniques should continue to be used where high performance is required and quality is not critical.


Rendering with ray tracing also needs its own optimizations. For example, when rendering dynamic lightmaps with DXR, it is effective to cache the lighting in the lightmaps and then accumulate the data over time for the next frame. The process is relatively fast and worth using, since ray tracing in lightmap space gives a better result than screen-space ray tracing. True, noise suppression will be required, since not very many rays can be computed in real time.

Even ordinary noise suppression filters tuned to the specifics of ray tracing work well, and if denoising based on neural networks is applied, which Nvidia has already demonstrated, including hardware-accelerated on the tensor cores of Volta architecture GPUs, then the future of hybrid rendering looks fairly clear: at least some of the effects that are easy to add to existing rasterization engines (shadow calculation, or global illumination and shading) will appear in games quite soon.

So, the obvious way to use hybrid rendering is to rasterize the scene and use ray tracing for only part of its lighting calculations, as well as for calculations of reflections with refractions. This approach provides rasterization speed and tracing quality in the form of accurate simulation of lighting, including global lighting, reflections and refractions of light rays, and drawing optically correct shadows. Moreover, simulating these effects using rasterization hacks and making them more complex will someday lead to the point where it becomes so resource-intensive that it will be easier to replace the calculations with real ray tracing. And in general, this is the only correct way if we look into the future of graphics development.

DirectX Raytracing - standard ray tracing API

So, over time, they learned to make rasterization very impressive by adding various algorithms and hacks like parallax mapping, which adds volume to not too complex surfaces, as well as using shadow maps. To improve graphics, it was only necessary to increase the speed of GPUs and make them a little more universal, leaving the basis in the form of rasterization practically untouched (not counting optimization methods in the form of breaking the frame into tiles, etc.).

Modern techniques like screen space reflections and global illumination simulation have pushed rasterization to its practical limits, as these algorithms require clever processing hacks and complex calculations, sometimes performed asynchronously with rendering. And in the near future, the complexity and resource intensity of such algorithms will continue to grow. Ray tracing allows you to do complex effects in a simple way, also opening the door to completely new techniques not previously possible in real-time rendering. But how can this be achieved if GPUs can only rasterize?

The current version, DirectX 12, only seems recent; in fact this graphics API was announced back at GDC 2014 and released publicly as part of Windows 10 a year later. So far its adoption has fallen short of expectations, and for many reasons. Development cycles for games and engines are long, and the fact that DirectX 12 works only on the latest version of Windows and has limited support on current-generation consoles does not strengthen the case for using it on PC. Nevertheless, we have already seen low-level APIs used in several games. What next? Next, the DirectX line took another sharp turn and introduced tools to support ray tracing.

As part of the Game Developers Conference GDC 2018 Microsoft has introduced a new addition to the DirectX API, in which many partners involved in software and hardware development have participated in one way or another. The addition is called DirectX Raytracing and its name suggests that it is a standard API for software and hardware support for ray tracing in DirectX applications, allowing developers to use algorithms and effects using the mentioned technique. DirectX Raytracing (DXR for short) provides a standardized approach for implementing ray tracing that is accelerated by GPUs. This extension combines with the capabilities of the existing DirectX 12 API, allowing you to use both traditional rasterization and ray tracing, as well as mix them in desired proportions.

All DXR API work related to ray tracing is controlled using lists of commands sent by the application. Ray tracing is tightly integrated with rasterization and compute commands and can be run multi-threaded. Ray tracing shaders (five new types of shaders!) are controlled similarly to compute shaders, allowing them to be processed in parallel on the GPU, controlling their execution at a relatively low level. The application is fully responsible for synchronizing the work of the GPU and the use of its resources, both during rasterization and calculations, which gives developers control over optimizing the execution of all types of work: rasterization, ray tracing, calculations, data transfer.

The different rendering types share all resources, such as textures, buffers and constants, without conversion, copying or duplication being required to access them from the tracing shaders. Resources that store ray tracing-specific data, such as acceleration structures (data structures that speed up tracing by accelerating the search for intersections between rays and geometry) and shader tables (which describe the relationship between ray tracing shaders, resources and geometry), are managed entirely by the application; the DXR API does not move any data of its own accord. Shaders can be compiled individually or in batches; compilation is fully controlled by the application and can be parallelized across several CPU threads.

At the highest level, DXR adds four new concepts to the DirectX 12 API:

  1. The acceleration structure is an object that represents the 3D scene in a format optimal for ray traversal on the GPU. Represented as a two-level hierarchy, it provides both optimized ray processing on the GPU and efficient modification of dynamic data.
  2. A new command list method called DispatchRays is the entry point for tracing rays into the scene. This is how a game submits DXR workloads to the GPU.
  3. A set of new ray tracing shader types that define what DXR computes. A DispatchRays call launches the ray generation shader, which uses the new TraceRay intrinsic in HLSL to cast a ray into the scene. Depending on where the ray lands, one of several hit shaders is invoked at the intersection point, or a miss shader if nothing is hit, which lets each object have its own set of shaders and textures and thus a unique material.
  4. A ray tracing pipeline state, added alongside the existing graphics and compute pipeline states, which holds the ray tracing shaders and the other state relevant to tracing workloads.

Thus, DXR does not add a new GPU engine to the existing graphics and compute engine in DirectX 12. The DXR workload can be run on existing engines, since DXR is a computational task at its core. DXR tasks are presented as compute workloads because GPUs are becoming more versatile anyway and are capable of performing almost any task that is not necessarily graphics related, and in the future most of the fixed functions of the GPU will likely be replaced by shader code.

The first step in using DXR is building the acceleration structures, which have two levels. At the bottom level, the application defines the geometric data (vertex and index buffers) that make up the objects of the scene. At the top level, a list of instance descriptions is defined, containing references to the bottom-level geometry plus additional data such as transformation matrices, which are updated every frame in the same way games already handle dynamically changing objects. This enables efficient traversal of large amounts of complex geometry.
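
A condensed host-side sketch of this first step (our own illustration, not code from the article): building a bottom-level acceleration structure over one non-indexed triangle mesh. The command list, vertex buffer and pre-allocated result/scratch buffers are assumed to exist; prebuild-info queries, resource barriers and error handling are omitted. The structure and function names follow the public DXR headers, but treat this as a sketch rather than production code:

```cpp
#include <d3d12.h>   // requires a Windows SDK with DXR support

void buildBlas(ID3D12GraphicsCommandList4* cmdList,
               ID3D12Resource* vertexBuffer, UINT vertexCount, UINT vertexStride,
               ID3D12Resource* blasBuffer, ID3D12Resource* scratchBuffer)
{
    // Describe the geometry; IndexFormat stays DXGI_FORMAT_UNKNOWN (zero)
    // because this example uses non-indexed triangles.
    D3D12_RAYTRACING_GEOMETRY_DESC geometry = {};
    geometry.Type  = D3D12_RAYTRACING_GEOMETRY_TYPE_TRIANGLES;
    geometry.Flags = D3D12_RAYTRACING_GEOMETRY_FLAG_OPAQUE;
    geometry.Triangles.VertexBuffer.StartAddress  = vertexBuffer->GetGPUVirtualAddress();
    geometry.Triangles.VertexBuffer.StrideInBytes = vertexStride;
    geometry.Triangles.VertexFormat = DXGI_FORMAT_R32G32B32_FLOAT;
    geometry.Triangles.VertexCount  = vertexCount;

    // Bottom-level inputs: the raw geometry of one object.
    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_INPUTS inputs = {};
    inputs.Type           = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
    inputs.Flags          = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_PREFER_FAST_TRACE;
    inputs.DescsLayout    = D3D12_ELEMENTS_LAYOUT_ARRAY;
    inputs.NumDescs       = 1;
    inputs.pGeometryDescs = &geometry;

    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC build = {};
    build.Inputs                           = inputs;
    build.DestAccelerationStructureData    = blasBuffer->GetGPUVirtualAddress();
    build.ScratchAccelerationStructureData = scratchBuffer->GetGPUVirtualAddress();

    // The top-level structure (TLAS) is built the same way, with instance
    // descriptions that reference this BLAS plus per-instance transforms.
    cmdList->BuildRaytracingAccelerationStructure(&build, 0, nullptr);
}
```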

The second step in using DXR is creating the ray tracing pipeline state. Modern games batch draw calls to execute them more efficiently, for example drawing all metal objects in one batch and all plastic ones in another. But during tracing it is impossible to know in advance which material a particular ray will hit, so batches cannot be used. Instead, the ray tracing pipeline state allows multiple sets of ray tracing shaders and texture resources to be specified, so that you can declare, for example, that all ray intersections with one object should use a particular shader and texture, while intersections with another object use different ones. This lets the application use the right shader code with the right textures for whatever materials the rays happen to hit.

The last step in DXR is calling DispatchRays, which launches the ray generation shader. Inside it, the application calls the TraceRay function, which traverses the acceleration structure and executes the appropriate hit or miss shader (two of the new shader types). TraceRay can also be called from within those two shaders for ray recursion or multi-bounce effects.
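
And a condensed sketch of this last step (again our own illustration under the same assumptions): binding the ray tracing pipeline state and launching one ray generation shader invocation per screen pixel. The state object and the shader-table buffers are assumed to have been created earlier, and the size/stride arguments must match how those tables were laid out:

```cpp
#include <d3d12.h>   // requires a Windows SDK with DXR support

void dispatchRays(ID3D12GraphicsCommandList4* cmdList,
                  ID3D12StateObject* rtPipeline,
                  ID3D12Resource* rayGenTable,   UINT64 rayGenSize,
                  ID3D12Resource* missTable,     UINT64 missSize, UINT64 missStride,
                  ID3D12Resource* hitGroupTable, UINT64 hitSize,  UINT64 hitStride,
                  UINT width, UINT height)
{
    cmdList->SetPipelineState1(rtPipeline);     // the ray tracing pipeline state

    D3D12_DISPATCH_RAYS_DESC desc = {};
    desc.RayGenerationShaderRecord.StartAddress = rayGenTable->GetGPUVirtualAddress();
    desc.RayGenerationShaderRecord.SizeInBytes  = rayGenSize;
    desc.MissShaderTable.StartAddress  = missTable->GetGPUVirtualAddress();
    desc.MissShaderTable.SizeInBytes   = missSize;
    desc.MissShaderTable.StrideInBytes = missStride;
    desc.HitGroupTable.StartAddress    = hitGroupTable->GetGPUVirtualAddress();
    desc.HitGroupTable.SizeInBytes     = hitSize;
    desc.HitGroupTable.StrideInBytes   = hitStride;
    desc.Width  = width;    // one ray generation invocation per pixel
    desc.Height = height;
    desc.Depth  = 1;

    // Inside the ray generation shader, TraceRay() walks the acceleration
    // structure and invokes the appropriate hit or miss shader.
    cmdList->DispatchRays(&desc);
}
```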


Why not use computational shaders already known to us from DirectX for ray tracing? Firstly, DXR allows you to run separate shaders when rays hit and miss, and secondly, the rendering process can be accelerated on GPUs (using Nvidia RTX or analogues from competitors), and thirdly, the new API allows you to bind resources using shader tables.

Nvidia RTX is a set of software and hardware algorithms that accelerate tracing on Nvidia solutions based on the Volta graphics architecture. Why aren't previous architectures, which are not that different from Volta, supported? Perhaps it is partly a marketing ploy to attract buyers to new products, and perhaps Volta has hardware optimizations that can seriously speed up ray tracing on the GPU which we have not yet been told about. Yes, the only GPU of this architecture so far has tensor cores that accelerate artificial intelligence tasks, but if they can be used in ray-traced rendering at all, it is only for noise reduction, and even then, by available accounts, existing denoising algorithms do not yet make use of them.

DXR and RTX benefit from a powerful and flexible programming model, similar to Nvidia OptiX, that makes it relatively easy to write efficient ray tracing algorithms. To start developing applications with DXR ray tracing hardware-accelerated by RTX, you need a graphics card based on the Volta architecture (currently only the Titan V), driver version 396 or newer, the Windows 10 RS4 operating system, and the Microsoft DXR developer kit, which contains everything required. For debugging, Microsoft PIX or Nvidia NSight Graphics, which already support the DXR API, will also come in handy.

For ease of development and debugging, Microsoft immediately released a new version of the utility PIX for Windows with support for DXR capabilities. This tool allows you to capture and analyze frames built using DXR so that developers understand exactly how DXR works with hardware, catch any errors and optimize their code. With PIX, programmers can explore API calls, view the state of objects and resources associated with tracing work, and view acceleration structures. All this helps a lot when developing DXR applications.


Ultimately, the DirectX Raytracing API gives developers specialized shaders and structures convenient for ray tracing, the ability to work alongside the traditional graphics pipeline and compute shaders, and so on. Conceptually, this is not far from what Imagination Technologies offered several years ago with OpenRL and its hardware solutions. Alas, ImgTec was too far ahead of its time with its PowerVR Wizard chips, and you need enough funds not only for the initial development but also to promote your brainchild. DXR is an API from a company as large and universally recognized as Microsoft, and both gaming GPU makers (Nvidia and AMD, perhaps soon joined by Intel, who knows) are already working with Microsoft to optimize the new API for their hardware architectures.

Like any closed API, DXR has the drawback that developers simply do not know how certain things work inside it: what acceleration structures are used to ensure efficient parallel traversal on the GPU, and what their advantages, disadvantages, and characteristics are (memory consumption, latency, and so on); how the ray scheduler works and whether a balance is struck between memory usage, latency, and register pressure; and which part of the tracer's work is done in hardware on the GPU and which in the driver and the API. All such solutions suffer from their closed nature, and DXR is no exception.

By the way, there is an alternative to using the DXR API - Nvidia employees are working on expanding the multi-platform Vulkan API, designed for ray tracing - VK_NV_raytracing. The development team collaborates with colleagues from Khronos to create a multi-platform open standard, and one of the main goals is to try to make ray tracing in DirectX and Vulkan work as similar as possible.

Games that use rasterization often look very believable and realistic, as their developers spent a lot of time adding all the necessary effects and algorithms that simulate the propagation of light rays in reality. And in the early years, DXR's capabilities will also be used to complement existing rendering techniques, such as screen-space reflections - to fill in data about hidden geometry not visible within the screen, which will lead to an increase in the quality of these effects. But over the next few years, you can expect to see an increase in the use of DXR for techniques that are not used in rasterization, such as full global illumination. In the future, ray tracing may completely replace rasterization when rendering 3D scenes, although rasterization will remain the ideal balance between performance and quality for a long time.

At the moment, only Nvidia solutions of the Volta family (using RTX technology) have full hardware support for DirectX Raytracing, that is, today only the expensive Titan V, and on the previous GPUs of this company, as well as on AMD GPUs, ray tracing is fully performed using compute shaders - that is, only basic DXR support is available with lower performance. However, AMD has already stated that they are working together with Microsoft to implement hardware tracing acceleration and will soon provide a driver to support it, although for now it seems that existing AMD architectures are unlikely to be able to provide a high level of acceleration similar to Nvidia Volta. RTX hardware-accelerated ray tracing technology leverages the Volta architecture's yet-to-be-announced hardware ray tracing acceleration capabilities, and is expected to support gaming solutions later this fall.

Looking even further ahead, the emergence of APIs for accelerating ray tracing runs somewhat counter to the general trend of GPUs becoming universal, ever more like ordinary processors suited to any kind of computation. For many years there has been talk of removing all fixed-function blocks from the GPU, although so far this has not worked out well (remember the not-very-successful Intel Larrabee). But in general, greater GPU programmability will make it even easier to mix rasterization and tracing, and full tracing may eventually need no special API for hardware acceleration at all. That, however, is looking too far ahead; for now we are dealing with DXR.

DirectX Raytracing, and its support by software and hardware developers, provides a practical way to use ray tracing together with the familiar "rasterization" API. Why is this needed, if modern GPUs can already perform almost any computation in compute shaders and developers could implement ray tracing with those? The point is to standardize hardware acceleration of tracing on specialized units in the GPU, which will not happen if general-purpose compute shaders not intended for the job are used. Some new hardware capabilities of modern graphics architectures allow faster ray tracing, and this functionality cannot be exposed through the existing DirectX 12 API.

Microsoft remains true to itself - like the rasterization part of DirectX, the new API does not define exactly how the hardware should work, but allows GPU developers to accelerate only certain Microsoft standardized capabilities. Hardware developers are free to support executing DXR API commands the way they want, Microsoft does not tell them exactly how GPUs should do it. Microsoft introduces DXR as a compute task that can be run in parallel with the "rasterization" part, and DXR also brings several new types of shaders for ray processing, as well as an optimized structure for the 3D scene, convenient for ray tracing.

Since the new API is aimed at software developers, Microsoft is providing them with a base level of ray tracing support in DXR that can use all existing hardware that supports DirectX 12. And the first experiments with DXR can be started on existing GPUs, although it will not be fast enough for use in real applications. All hardware with support for DirectX 12 will support ray tracing and some simple effects can be done even with the existing base of video cards in the hands of players. We will see some effects using DXR in games this year, but definitely in 2019 - at least as an early demonstration of the capabilities of new technologies.

It is likely that the initial performance of tracing on different GPUs will vary greatly. Solutions without native support, using a basic level of support through compute shaders, will be very slow, and GPUs with hardware tracing support will immediately speed up the process several times - just like in the good old days of the initial development of hardware rasterization support. Over time, more and more calculations during tracing will be performed more optimally and significantly more efficiently, but this will require new graphics solutions. The first of which should appear in the coming months.

A side-by-side comparison of rasterization and tracing

Let's look at specific examples of what ray tracing can provide. In fact, it is already used in games today, just in different, more primitive forms: in particular, in screen-space algorithms and in voxel cone tracing for global illumination, including Nvidia's well-known Voxel Ambient Occlusion (VXAO) algorithm. But that is still not full ray tracing, rather hacks that use it in one form or another on top of rasterization; today we are talking about full ray tracing of the entire scene geometry.

Modern GPUs are already quite powerful and are capable of tracing light rays at high speed using software such as Arnold (Autodesk), V-Ray (Chaos Group) or Renderman (Pixar), and many architects and designers are already using hardware accelerated ray tracing to quickly create photorealistic renderings of their products, reducing costs in the overall development process. Nvidia has been involved in the development of hardware-accelerated ray tracing techniques in the professional world for over a decade, and now the time has come to bring these capabilities to games.

To help game developers adopt ray tracing, Nvidia announced upcoming additions to the GameWorks SDK: dedicated denoising algorithms, high-quality ambient occlusion, shadows from area lights, and an algorithm for rendering high-quality reflections.

The best ray-traced renders require a large number of samples (rays computed per pixel) to achieve high quality: from hundreds to thousands, depending on the complexity of the scene. Even a few dozen rays per pixel are too many for real time, since even near-future GPUs with hardware tracing support will only deliver acceptable performance with a handful of rays per pixel. Is it worth bothering at all?

Yes, if you further process the resulting image (we wanted to get away from rasterization hacks, but it looks like we’ll have to put up with others for now). In particular, performing tracing on a productive Volta architecture solution allows for real-time performance when calculating 1-2 samples per pixel with the mandatory use of noise reduction. There are already existing denoising algorithms that can significantly improve image quality after ray tracing, and these are just the first developments that are ongoing.

The requirements for real-time denoising algorithms are quite high: they must handle very noisy input images produced with an extremely low number of rays per pixel (down to one sample), deliver stable quality in motion by using information from previous frames, and run extremely fast, taking no more than about 1 ms of GPU time. Existing Nvidia algorithms achieve very good results for reflections, soft shadows and ambient occlusion; each effect uses its own specific algorithm, which also draws on information about the 3D scene.
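
The "information from previous frames" part is essentially temporal accumulation. A minimal sketch of the idea (our own illustration, not Nvidia's denoiser): the noisy current frame is blended into a per-pixel running history, which a real denoiser would additionally reproject with motion vectors and filter spatially:

```cpp
#include <vector>

// Exponential moving average per color channel: a small alpha keeps the image
// stable, a large alpha reacts faster to changes in the scene.
void accumulate(std::vector<float>& history,        // running average (e.g. RGB per pixel)
                const std::vector<float>& current,  // this frame's noisy traced result
                float alpha = 0.1f)                 // weight of the new frame
{
    for (size_t i = 0; i < history.size(); ++i)
        history[i] = (1.0f - alpha) * history[i] + alpha * current[i];
}
```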


Ray tracing was used to render shadows with one sample per pixel and noise reduction enabled


To calculate global occlusion, we used two rays per pixel with noise reduction


And when rendering reflections, only one ray per pixel was calculated; noise reduction is also indispensable

The Ray Tracing Denoiser in the GameWorks SDK is a set of libraries implementing several fast ray tracing techniques with noise reduction, which is essential when tracing with a small number of rays per pixel, since the raw result is usually extremely noisy. The algorithms cover soft shadows from area lights, reflections, and ambient occlusion. Denoising makes it possible to achieve high speed with few samples per pixel while keeping image quality excellent, much better than the screen-space techniques currently used to approximate the propagation of light through the scene.

Let's talk about the benefits of ray tracing when rendering shadows. Using tracing, you can draw physically correct shadows with soft edges, much more realistic than the most sophisticated techniques available using shadow maps and filtering. Even for very large light sources, realistic soft shadows are obtained without the flaws encountered in rasterization.


Ray-traced shadows


Shadows obtained using rasterization and shadow maps

You can also use algorithms that are impossible or prohibitively complex with shadow maps, such as shadows from area light sources. Most importantly, this eliminates the usual visual artifacts entirely: flickering pixels along edges, jagged lines and so on. Yes, over the years many hacks have been invented to suppress rasterization artifacts, but ray tracing handles all of this naturally.

Ambient occlusion is another effect where ray tracing is welcome, since it delivers significantly higher quality than all existing screen-space techniques (SSAO, HBAO and even VXAO). Almost all algorithms used today simply darken the corners found in a flat image, only imitating the propagation of light, whereas tracing does it in a physically correct way.


Global occlusion using ray tracing


Global shading by simulating the effect using screen space

Moreover, all screen-space techniques ignore geometry that is off-screen or behind the camera, and they apply the same darkening to very different surfaces. Many of these problems are clearly visible in the example above: it is obviously just an attempt to imitate the propagation of light in a 3D scene, whereas tracing produces a noticeably more photorealistic result.

When rendering reflections, tracing also delivers noticeably better quality than current screen-space methods, which lack off-screen data (they are physically unable to show in a reflection anything not visible on the screen) and which place highlights on reflections incorrectly, because they use the direct view direction rather than the reflected one.


Reflections obtained using ray tracing


Reflections resulting from screen space rasterization

Nvidia's example may exaggerate the problems of screen-space reflection techniques and make them overly obvious, but the point stands: physically correct reflections can only be rendered with ray tracing. Other reflection methods are not universal and give inferior quality; planar reflections, for instance, work only on flat surfaces. Tracing has its own drawback, though: with few samples, noise reduction is required, since with one ray per pixel the picture comes out extremely noisy.

So for now noise reduction always has to be used, and the current versions of Nvidia's denoised techniques have their own limitations. For example, the shadow technique degrades when a surface is shadowed by two occluders at very different distances from it; the reflection algorithm loses quality as surface roughness increases; and the ambient occlusion algorithm may need not one but two or even more rays per pixel to resolve fine details.

But these are only the first versions of techniques built around noise reduction filters, and they will keep improving in both quality and performance. In the future, AI-based denoising may also be used; it is already included in Nvidia OptiX 5.0 but is not yet applied when tracing with RTX. It is also likely that eventually a single denoiser will handle all lighting components at once (rather than three separate ones, as now) to reduce memory cost and improve performance. Nothing prevents a hybrid approach to rendering either, combining screen-space algorithms with additional ray tracing.

Besides real-time use in game engines, GPU-accelerated DXR can also help with content creation: high-quality lighting calculations baked into light maps, pre-rendered cutscenes produced in the game engine at higher quality, and so on. Ray tracing can even be used outside rendering altogether: in sound engines for virtual reality (Nvidia VRWorks Audio), in physics calculations, or in artificial intelligence algorithms.

Ray tracing is also useful during content creation itself: fine-tuning material properties with fast, high-quality preview rendering, adding and adjusting light sources, debugging noise reduction, and so on. With relatively little effort and resources, the same structures used by the real-time engine can also produce a higher-quality offline render. This has already been done in Unreal Engine 4: Nvidia wrote an experimental Path Tracer right after integrating DXR into the engine, and while it does not yet offer enough quality for full offline rendering, it demonstrates the possibility.

And that is before mentioning the ability to quickly and efficiently prepare light maps, that is, to "bake" lighting into special lightmaps for static objects in the scene. Such an engine can share the same code between the game and the editor and prepare various kinds of light maps (2D, 3D) as well as environment cube maps.


This matters not only because ray tracing speeds up the final generation of lightmaps; it also gives a better preview of them, letting you quickly move and adjust light sources and objects in the scene and immediately see on screen almost exactly what the final lighting will look like.

Finally, we suggest looking at all the advantages of ray tracing in motion. Nvidia has released a collection of technology demos showing the benefits of hardware-accelerated ray tracing with Nvidia RTX technology via the DXR API (alas, only as a video on YouTube).

The video clearly shows the benefits of traced shadows, including soft and colored ones; the difference in ambient occlusion quality between screen-space rasterization techniques and ray tracing; realistic reflections on various materials, including multiple bounces; clever noise reduction systems; and the use of tracing to prepare pre-rendered static light maps.

Demonstration of ray tracing capabilities

To demonstrate the capabilities of the DirectX Raytracing API and Nvidia RTX technology, several leading game engine and benchmark developers released their technology demos for GDC 2018, showing some of the capabilities of new technologies using ray tracing: 4A Games, Electronic Arts, Epic Games, Remedy Entertainment, Unity and others. Alas, for now they are only available in the form of screenshots, presentations and videos on Youtube.

Whereas previously similar demonstrations of real-time ray tracing were shown either in very simple scenes with simple effects or at low performance, the capabilities of future GPUs can make ray tracing real even in gaming conditions with acceptable performance. Developers Epic Games and Remedy Entertainment believe that DXR and RTX capabilities will bring better graphics to future games, and implementing basic support for the new API in their engines has proven to be relatively straightforward.

DirectX Raytracing tech demo (Futuremark)

For example, Futuremark, known to all 3D graphics enthusiasts for its benchmark suites, showed a DXR technology demo built on a specially developed hybrid engine that uses ray tracing for high-quality real-time reflections.

We have already said that with today's common methods it is very hard to render realistic, physically correct reflections in a 3D scene; developers run into numerous difficulties, resort to a variety of workarounds, and the reflections still remain far from ideal. Over the past few months, Futuremark's developers have been exploring the use of DXR in hybrid rendering and have achieved quite good results.

Using hardware-accelerated GPU ray tracing, they obtained physically correct reflections for all objects in the scene, including dynamic ones. Open the next few pictures at full size, as they are GIF animations that clearly show the difference between tracing and more conventional methods using screen space:

The difference is obvious. Besides the difference in reflection detail, DXR tracing can show reflections of objects that lie outside screen space, i.e. outside the game camera's field of view, as the comparison screenshots demonstrate, and the reflections themselves generally look much more believable. Here is another example, perhaps less obvious but still illustrative:

The use of ray tracing produces accurate, perspective-corrected reflections on all surfaces in the scene in real time. It is clearly visible that the tracing is much closer to realism than the more familiar screen-space reflections used in most modern games. Here's another comparison:

If you never compare them against the DXR-traced reflections, conventional methods may seem to give good quality, but only seem. Moreover, reflections matter not only for mirrors with high reflectivity but for all other surfaces as well: they all become more realistic, even if it is not immediately noticeable.

In its demo, Futuremark uses ray tracing capabilities only to solve problems that are difficult to combat with conventional methods, such as reflections of dynamic objects located outside the main screen space, reflections on non-planar surfaces, and perspective-corrected reflections for complex objects. Here are higher quality screenshots from the DXR demo:




Modern GPUs can already use hybrid rendering, using rasterization for most of the work and relatively little input from tracing to improve the quality of shadows, reflections and other effects that are difficult to handle using traditional rasterization techniques. And the Futuremark demo program just shows an example of such an approach; it works in real time on an existing GPU, albeit one of the most powerful.

The main thing is that, according to the Futuremark developers, it was quite easy to add ray tracing support to the existing DirectX 12 engine of the 3DMark Time Spy benchmark, reusing models and textures from their tests. Along with the tech demo, the well-known benchmark developers announced that DirectX Raytracing will be used in their next 3DMark benchmark, planned for release towards the end of this year.

Reflections Real-Time Ray Tracing Demo (Epic Games)

Epic Games, together with ILMxLAB and Nvidia, also showed its own take on bringing real-time ray tracing to Unreal Engine 4. The showing took place at the opening of GDC 2018, where the three companies presented an experimental cinematic demo based on the Star Wars films, using characters from The Force Awakens and The Last Jedi.


The Epic Games demo uses a modified version of Unreal Engine 4 and Nvidia RTX technology, accessed through the DirectX Raytracing API. To build the 3D scene, the developers used real assets from Star Wars: The Last Jedi: Captain Phasma in her shining armor and two stormtroopers, in a scene set in a First Order ship elevator.

The tech demo features dynamically changing lighting that can be adjusted on the fly, along with ray-traced effects including high-quality soft shadows and photorealistic reflections, all rendered in real time at very high quality. This level of picture quality is simply unattainable without ray tracing, and now the familiar Unreal Engine can deliver it, which greatly impressed Epic Games founder Tim Sweeney.

The advanced techniques in the demo include area lights with soft shadows rendered via ray tracing, ray-traced reflections and ambient occlusion, denoising of the traced result using the Nvidia GameWorks package, and a high-quality depth-of-field effect (which does not use tracing but looks nice all the same).


The screenshots and video show the very high quality of all these effects; the realistic reflections, of which there are many in the scene, are especially impressive. Every object is reflected in every other object, which is very difficult, if not impossible, to achieve with rasterization. Screen-space reflections would give only an imitation of reality, in which anything not in the frame would not be reflected at all and the rest would be very hard to render well.

Besides the reflections, note the very soft shadows, free of the torn or overly sharp edges that shadow maps produce, and the post-processing is of very high quality as well. All in all, the developers did a great job, and this demonstration is perhaps one of the most impressive showcases of hardware-accelerated ray tracing.

To create the demo, Epic Games worked closely with artists from ILMxLAB and engineers from Nvidia to demonstrate Nvidia RTX technology running through the DXR API. The Unreal Engine demo runs in real time on an Nvidia DGX Station workstation containing no fewer than four Volta-architecture GPUs. Combining the power of the Unreal Engine, the DXR ray tracing API and Nvidia RTX on Volta GPUs brings real-time rendering closer to cinematic realism.

In addition to the technology demonstration, Epic Games held a large hour-long GDC session, "Cinematic Lighting in Unreal Engine", dedicated to the new features of the engine, and the demo itself was shown with the option of viewing the scene in various modes, including wireframe rendering. We can assume that all of this will sooner or later appear in games, since the Unreal Engine is very popular. Epic Games has promised to open up access to the DXR API's capabilities this year, probably closer to autumn, when new Nvidia GPUs are released.


Support for DirectX Raytracing and Nvidia RTX opens the way for Unreal Engine 4 to a new class of techniques and algorithms that were not previously available with the dominance of rasterization. In the near future, game developers will be able to use a hybrid approach, using some high-quality ray tracing for some effects and high-performance rasterization for most of the work. This is a good foundation for the future, because the capabilities of GPUs related to effective acceleration of ray tracing will only grow.

Pica Pica — Real-time Raytracing Experiment (Electronic Arts/SEED)

Another developer interested in ray tracing via DXR is SEED, a studio within Electronic Arts, which created the Pica Pica demo on its experimental Halcyon engine; like the previous demos, it relies on hybrid rendering. The demo is also notable for its procedural world, built without any precomputation.

Why did the SEED researchers settle on hybrid ray-traced rendering? They found experimentally that it can produce a much more realistic image than rasterization, very close to full path tracing, which is either far too resource-hungry or far too noisy with a small number of samples. All of this is clearly visible in the comparison screenshots:


Full tracing


Hybrid rendering


Rasterization

In modern games, various hacks are used to calculate reflections and lighting, including preliminary calculation of lighting (its static part, at least). All this requires additional work from level designers, who cleverly place fake light sources, start pre-calculation of lighting, which is then recorded in lightmaps. And using ray tracing for rendering tasks makes it possible to avoid this additional work, because ray tracing allows you to naturally calculate everything you need, as we already described above.

And since full tracing is not yet feasible, the Halcyon engine takes a hybrid approach: deferred shading uses rasterization; direct shadows can use either rasterization or ray tracing; direct lighting runs in compute shaders; reflections can likewise use either the traditional approach or tracing; global illumination always uses tracing; ambient occlusion can rely on conventional screen-space methods such as SSAO or on ray tracing; transparent objects are rendered with tracing only; and post-processing runs in compute shaders.


In particular, ray tracing is used for shadows and reflections, with much better and more natural results than today's common techniques. Reflections like these simply cannot be produced by screen-space reflection algorithms used with rasterization:


Ray tracing for reflections runs at half resolution, which works out to 0.25 rays per pixel for reflections and another 0.25 rays per pixel for shadows (halving the resolution in both dimensions leaves a quarter of the pixels). And here the problem of the small ray count shows up as an extremely noisy reflection image: without special additional processing, the raw ray tracing result looks far too rough:


Therefore, after tracing, the image is reconstructed to full rendering resolution by several rather clever algorithms (details can be found in the team's GDC 2018 presentation) that filter the data and additionally gather and reuse information from previous frames. The outcome is perfectly acceptable: realistic reflections not far removed from full path tracing:


But maybe the usual screen-space methods give results just as good, and we simply do not need "expensive" tracing? Check this side-by-side comparison: on the left are screen-space reflections, in the middle hybrid ray tracing, and on the right a reference render with full ray tracing:


The difference is obvious. The screen-space method is a rough approximation that merely simulates reflections; in places it is not bad, but it suffers from obvious artifacts and a lack of resolution. Tracing has no such problem, even allowing for the reduced resolution at which the rays are cast. In Pica Pica, ray tracing is also used for transparent and translucent objects: the demo computes light refraction without any pre-sorting, as well as subsurface scattering:

The engine is not yet complete and has one drawback important for photorealism: it cannot yet render shadows from translucent objects, though that is only a matter of time. The demo does, however, use a global illumination algorithm that needs no precomputation and supports both static and dynamic objects, minimizing the additional work required from artists:


Global illumination disabled


Global illumination enabled

Global illumination noticeably affects some objects in the scene, making their lighting more realistic. The demo can additionally apply ambient occlusion techniques for extra shadowing; screen-space algorithms such as Screen Space Ambient Occlusion (SSAO) are also supported:


It might have looked even better with something like the VXAO that Nvidia promotes, but it already looks quite good. Still, the image becomes even better and more realistic when ambient occlusion is fully computed by ray tracing; look at the comparison pictures, the difference is striking:



While SSAO gives only a semblance of occlusion, darkening just the most obvious corners, full tracing does the job properly, putting deep shadow exactly where the laws of light propagation say it should be.

As for shadows cast by the direct rays of light sources, hard shadows are straightforward with tracing: rays are cast toward the light sources and hits are checked. For soft shadows the algorithm is similar, but the result with one sample per pixel is too noisy and has to be filtered further, after which the picture becomes more realistic:


Hard shadows, soft unfiltered and soft filtered shadows
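The principle behind traced soft shadows can be sketched in a few lines of Python (this is only the general idea, not SEED's code; the rectangular light and the visibility callback are illustrative assumptions): several rays are sent toward random points on the light's surface, and the visible fraction becomes the shadow factor.

```python
import random

def soft_shadow(point, light_corner, light_u, light_v, occluded, samples=8):
    """Fraction of a rectangular area light visible from 'point'.
    The light is given by one corner and two edge vectors; 'occluded'
    is a caller-supplied visibility test between two 3D points."""
    visible = 0
    for _ in range(samples):
        u, v = random.random(), random.random()
        target = tuple(light_corner[i] + u * light_u[i] + v * light_v[i]
                       for i in range(3))
        if not occluded(point, target):
            visible += 1
    return visible / samples   # 1.0 = fully lit, 0.0 = completely in shadow
```

With a single sample per pixel the estimate is simply 0 or 1, which is exactly the noisy image described above; the filtering step turns that binary mask into a smooth penumbra.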

The SEED developers specifically note that although their research into hybrid rendering is at an early stage, the approach already lets them replace numerous hacks, each with its own objective shortcomings, with a unified ray tracing approach that delivers better rendering quality. Just as important, software developers now have a single, generally accepted API for ray tracing, and what remains is to refine the algorithms, both to improve quality and to optimize performance, since ray tracing is still quite demanding on hardware.

At the moment the Pica Pica demo computes only 2.25 rays per pixel in total, across all effects, yet the result is a photorealistic image close in quality to full tracing, albeit with some limitations. Now the fly in the ointment: as with the Epic Games demo, speeding up rendering still requires several top-end GPUs working together, while transferring as little data as possible over the relatively slow PCI Express bus. Further development of hardware acceleration on GPUs should eventually free us from such system requirements.

Experiments with DirectX Raytracing in Northlight (Remedy Entertainment)

Another demo promoting DXR and RTX at GDC 2018 came from experiments with the Northlight Engine by the Finnish company Remedy Entertainment, known to the public for games such as Max Payne, Alan Wake and Quantum Break. The Northlight Engine is actively developed by a company with a well-known interest in the latest graphics technologies, so it is no surprise that they took up hardware-accelerated ray tracing.

At GDC the company showed the results of its work with Nvidia and Microsoft. Remedy was among the few developers given early access to Nvidia RTX and the DXR API, which were integrated into a special version of the Northlight engine. Chief graphics programmer Tatu Aalto gave a talk at the conference, "Experiments with DirectX Raytracing in Remedy's Northlight Engine", describing the hybrid approach they adopted.


As is now traditional, this demo uses rasterization for speed and ray tracing for effects that would be difficult to achieve otherwise. The quality improvements include physically based soft shadows, high-quality ambient occlusion and lighting, and realistic reflections. The video shows the Northlight Engine with all ray-traced effects enabled and an increased number of rays per pixel:

For its ray tracing experiments Remedy created a new scene unrelated to any of the company's games. As we already said, the DXR API supports two levels of acceleration structures: bottom and top. The idea is that a bottom-level structure stores geometry, while the top level contains bottom-level structures. In other words, each polygon mesh is one bottom-level structure, and the top level holds multiple bottom-level structures, each with its own geometric transformation (rotation and so on).


Bottom-level structures hold the static parts of the scene; the red squares in the diagram mark the bounds of the bottom-level trees. For example, the scene contains four instances of a small chair (the small red squares) that share the same geometry but have their own transforms; the medium squares are small sofas, the large squares big round sofas. To build a ray tracing scene, these bottom-level structures are inserted into a top-level structure, for which the DXR API provides a function that accepts multiple instances of a bottom-level structure together with their transforms.

Working with dynamically changing geometry is slightly harder, since the bottom-level builder accepts only static buffers. Deformation is still possible, though: a compute shader takes the geometry and the skinning matrices and writes out the already-deformed geometry, after which the ray calculations can begin.
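Schematically, the two-level structure can be pictured like this (the class names below are illustrative stand-ins, not the actual DXR types, but the relationship is the same): one bottom-level structure per unique mesh, and a top-level structure made of instances that pair a bottom-level structure with its own transform.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BottomLevelAS:            # one per unique mesh (hypothetical stand-in)
    mesh_name: str              # the geometry lives here

@dataclass
class Instance:                 # one placement of a mesh in the scene
    blas: BottomLevelAS
    transform: List[float]      # 3x4 matrix stored as a flat row-major list

@dataclass
class TopLevelAS:               # what rays are actually traced against
    instances: List[Instance] = field(default_factory=list)

def translation(x, y, z):       # helper producing a 3x4 translation matrix
    return [1, 0, 0, x,  0, 1, 0, y,  0, 0, 1, z]

# Four chairs share one bottom-level structure but have different transforms.
chair = BottomLevelAS("small_chair")
sofa = BottomLevelAS("small_sofa")
scene = TopLevelAS([Instance(chair, translation(x, 0.0, 0.0)) for x in range(4)]
                   + [Instance(sofa, translation(0.0, 0.0, 5.0))])
```

A skinned character would first have its vertices deformed by a compute pass and its bottom-level structure rebuilt from the deformed buffer, as described above.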

First, take ambient occlusion, a visibility-based algorithm that is easy to implement with ray tracing. The following image was produced with four rays per pixel and a maximum ray length of four meters, and the result certainly looks better than SSAO, which relies on screen space alone.


The left half of the image shows the traditional method of computing ambient occlusion, the right half ray tracing. While SSAO does a decent job around some edges, it clearly lacks geometric information about the scene: such algorithms do not know what lies off-screen or behind the surfaces visible to the camera. The result is far from ideal, although clearly better than no occlusion at all.
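The estimator itself is simple; here is a minimal Python sketch of visibility-based ambient occlusion (the general idea only, not Remedy's implementation; the scene intersection callback 'occluded' is an assumption supplied by the caller):

```python
import math, random

def ambient_occlusion(point, normal, occluded, rays=4, max_dist=4.0):
    """Cast 'rays' random rays over the hemisphere around 'normal' and
    count how many hit geometry within 'max_dist'.  'occluded(origin,
    direction, max_dist)' is a caller-supplied intersection test."""
    hits = sum(1 for _ in range(rays)
               if occluded(point, cosine_weighted_direction(normal), max_dist))
    return 1.0 - hits / rays          # 1 = fully open, 0 = fully occluded

def cosine_weighted_direction(n):
    """Random direction in the hemisphere around the unit normal 'n',
    weighted by the cosine of the angle to the normal."""
    r1, r2 = random.random(), random.random()
    phi = 2.0 * math.pi * r1
    x = math.cos(phi) * math.sqrt(r2)
    y = math.sin(phi) * math.sqrt(r2)
    z = math.sqrt(1.0 - r2)
    # Build an orthonormal basis (t, b, n) and transform to world space.
    t = (0.0, -n[2], n[1]) if abs(n[0]) < 0.9 else (-n[2], 0.0, n[0])
    length = math.sqrt(sum(c * c for c in t))
    t = tuple(c / length for c in t)
    b = (n[1] * t[2] - n[2] * t[1],
         n[2] * t[0] - n[0] * t[2],
         n[0] * t[1] - n[1] * t[0])
    return tuple(x * t[i] + y * b[i] + z * n[i] for i in range(3))

# Toy usage: pretend everything in the +x half-space blocks rays.
ao = ambient_occlusion((0.0, 0.0, 0.0), (0.0, 0.0, 1.0),
                       occluded=lambda o, d, dist: d[0] > 0.0)
```

In a real renderer the callback would be the hardware-accelerated ray query, and the noisy per-pixel result would then go through the denoising discussed earlier.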

Unfortunately, ray tracing performance is still comparatively low, and it is much more expensive than screen-space methods. According to Remedy, in their demo one ambient occlusion ray per pixel with a maximum length of 4 meters at Full HD resolution takes roughly 5 ms, and the cost scales almost linearly, so 16 rays take about 80 ms, with the quality improving accordingly, of course:


These screenshots were taken with ordinary temporal full-screen anti-aliasing (reusing data from previous frames) and without any clever filtering, unlike most of the other demos shown at GDC. With smart noise reduction, acceptable quality can be reached at 1-2 rays per pixel.

Besides ambient occlusion, the Remedy demo also uses ray tracing for ordinary shadows, which under rasterization are most often rendered with cascaded shadow maps (CSM). The developers note that if the engine fills in shadows from directional light sources before computing lighting, it is very easy to replace the cascaded shadow map shader with traced code that writes its results into the same buffer.


In this case the quality difference clearly favors tracing (shown on the right). The ray-traced image uses 8 rays per pixel without additional filtering, while the CSM version uses 16 percentage closer filtering (PCF) samples with a special filter applied to the buffer. Bear in mind, however, that the developers clearly did not tune CSM here: the shadow map resolution and filtering could be adjusted to get better shadows, and what is shown is simply the engine's default settings.

Even with that allowance, the difference is obvious: with ray tracing the shadows are far more realistic, with smooth, jaggy-free edges, better blur along the penumbra, and even small details (chair legs) cast physically correct shadows. The result is decent shadows with soft and hard edges exactly where they belong. Shadows from area light sources, which are extremely hard to do with rasterization, also become easy to render.

In terms of performance, this demo traces one shadow ray per pixel at Full HD in under 4 ms, slightly faster than ambient occlusion even though the rays are longer. Adding ray-traced shadows to an existing DX12 engine would take a programmer several days of work, and the result is worth it, provided the performance is ultimately sufficient.

It seems Remedy added nearly every effect possible with tracing to their engine at this early stage of DXR development, including ray-traced reflections. There is no blatant showcase of purely mirrored surfaces here; instead the approach is subtler, with less conspicuous reflections on all objects. The following screenshot compares the ray-traced technique (right) with the screen-space one (left):


The traced image was produced with only one reflection ray per pixel and no filtering. Screen-space reflections are clearly less realistic and account only for objects visible to the main camera, whereas ray tracing can show the rest too, though it has its own drawback in the form of noticeable pixel noise. In principle that is solvable, as the other demos show; in the Finnish company's version no denoising is applied yet, apart from reusing pixel values from previous frames for full-screen anti-aliasing.

The Northlight Engine already computes global illumination (GI); it was used in Quantum Break, and the effect is enabled in the engine by default. GI is computed on voxels roughly 25 cm in size and combined with the result of the screen-space SSAO technique. As an experiment, Remedy replaced SSAO with a similar ray-traced effect, and here is what came out:


It is apparent that the surfaces are not shaded as they should be; something is clearly off. The problem is solved by changing how the volumetric GI data is used, which removes most of the artifacts:


Why compute global illumination and occlusion at all, and can this extremely resource-intensive step be skipped? Look at this visual example of what direct lighting alone looks like:


It resembles Doom with its constant darkness and harsh stencil shadows. In the next screenshot, indirect lighting has been added to the direct lighting, that is, light bounced off other objects in the scene:


It looks much better: despite the noise, the scene has gained volume and no longer appears as if all its objects sit in a void lit by a single bright source (the sun). And this is the final image, with the color information, full lighting and post-processing applied:


The reflections and shading in this scene look very realistic, in our opinion. In particular, the lamp reflects every object, including a bright window invisible to the main camera, and the mug on the right reflects its own handle, something rasterization cannot do without clever hacks. The only obvious problem with tracing here is the heavy pixel noise, which Remedy has not really tried to remove yet; the same Nvidia GameWorks denoiser could help a lot, not to mention AI-based noise reduction.

Of course, it would be nice to use ray tracing everywhere possible, but in hybrid rendering the sensible optimization is to keep shadow maps, which the Remedy demo uses for most light sources other than the sun. Every early application of ray tracing will work this way, because applying it head-on to everything is far too expensive and not yet feasible in real time, even on several GPUs at once.

Importantly, integrating DXR and RTX support into the Northlight engine was quick and painless. The Finnish developers were surprised at how fast they could prototype improved lighting, occlusion and reflections using ray tracing, all at much better quality than traditional rasterization hacks. The technologies shown are still at an early stage and far from appearing in games right now, but it is a great start toward future adoption.

Real-Time Ray Tracing in Metro Exodus (4A Games)

It is likely that in the coming years we will see more than one game using hybrid rendering with ray tracing for some of its effects. One of the first should be Metro Exodus, which will use DXR ray tracing via Nvidia RTX technology to compute global illumination and ambient occlusion.

This GI method is expected to be offered in the game as an alternative to the more familiar SSAO and IBL (image-based lighting, lighting from an environment texture). This is admittedly a very limited use of tracing, but the quality of ray-traced global illumination and occlusion is much higher than even VXAO, let alone SSAO. Here is a visual comparison of screen-space methods with tracing, captured by our German colleagues from the screen of a show-floor system (so we apologize in advance for the quality):

Textures were disabled during the demonstration so that the difference in scene lighting would be clearly visible, and the difference is real: screen-space methods give a flat picture that only vaguely imitates darkening in the corners between surfaces, while ray tracing gives physically correct occlusion and lighting, with dark shadows exactly where they should be. Look, for example, inside the barrel at the entrance to the house, or through the cracks: with SSAO the barrel's interior is not darkened at all, while with ray tracing its depths are dark, as they should be.

One question remains: if the video shows a static scene, with no dynamic objects at all affecting the global illumination, what prevents computing everything offline in advance and storing it in static light maps? It seems to us that a demonstration of real-time dynamic global illumination deserved a livelier scene, at least with moving light sources, if not moving objects. Otherwise viewers are left wondering what exactly they were shown and why it could not be done with rasterization today.

Conclusions

Ray tracing provides much better image quality than rasterization and has long been used wherever possible: in the film industry, advertising, design and so on. But for a long time it was simply unsuitable for real-time rendering because of its enormous resource demands; for every pixel, several rays reflected by and refracted in scene objects have to be computed. For offline rendering, where quick results are not required, this approach has always delivered the highest quality, but in real-time graphics we had to settle for rasterization, the simplest and fastest way to project a 3D scene onto a 2D screen. Naturally, the price of rasterization's high performance is an approximate calculation of pixel colors that ignores many factors: the reflection of light rays, certain material properties and so on. Even with a pile of clever hacks, rasterization only roughly reproduces the scene, and even the most complex pixel and compute shaders cannot match the quality of full ray tracing, simply because of how they work.

The announcement of the DXR API and Nvidia RTX technology lets developers start researching algorithms built on high-performance ray tracing, perhaps the most significant change in real-time graphics since the introduction of programmable shaders. Interested developers have already shown the public some very impressive technology demonstrations using only a small number of samples per pixel, and the future of gaming graphics is in their hands. And in the hands of GPU manufacturers, who must release new solutions with hardware tracing support; that support is expected to appear in several game projects late this year and early next.

Naturally, the first attempts to use tracing will be hybrid and seriously limited in the quantity and quality of effects, and full-fledged tracing will have to wait for decades. All demo programs shown use 1-2 rays per pixel, or even less, while professional applications have hundreds of them! And to get the quality of offline renders in real time, you still have to wait a very long time. But now is the time to start working on introducing tracing into existing engines, and whoever is first in mastering the capabilities of DXR may gain a certain advantage in the future. In addition, ray tracing can make the development of virtual worlds easier, since it will eliminate many of the small tasks of manually modifying shadows, lightmaps and reflections, which has to be done with imperfect rasterization algorithms. Already now, hardware-accelerated tracing can be used in the development process itself - to speed up things like preliminary rendering of lightmaps, reflections and static shadow maps.

There are many options for optimizing hybrid rendering, and one of the most impressive features in the examples shown above seems to be the efficiency of noise reduction, which is extremely important when ray tracing with a small number of samples per pixel - this is known to anyone who has ever seen the work of offline tracers, which render the image gradually and at the very beginning it is extremely noisy. An approach with a small number of calculated rays and additional noise reduction makes it possible to obtain an acceptable final quality in a fraction of the time required for full scene tracing. And this despite the fact that the capabilities of artificial intelligence in noise reduction have not yet been used, although this can be done.

The overall potential of ray tracing should not be judged solely by these hastily produced demo programs. They deliberately emphasize the headline effects, being technology demos made for a single purpose. A ray-traced image becomes much more realistic overall, but users do not always know where exactly to look, even when they feel the picture has become more believable. Especially since at first the difference may not be large, and the mass audience is willing to tolerate the artifacts inherent in screen-space reflection and occlusion algorithms and other rasterization hacks.

But with physically correct global illumination, occlusion and reflections computed by ray tracing, the rendered picture becomes more realistic even without showy mirrors and other obviously reflective surfaces. Modern games almost always use physically based rendering, in which materials have roughness and reflectivity properties and environment cube maps are everywhere, so reflections are always present even if they are not visible to the naked eye. In such a game, environment cube maps can be replaced with traced reflections fairly quickly, offering the option to owners of high-performance systems. Traced shadows also look better and solve the fundamental problems of shadow maps; some of those problems are addressed by advanced tricks such as Nvidia Hybrid Frustum Traced Shadows (HFTS), which already use tracing in a certain form, but a unified approach would still be best. And rendering very soft shadows from area light sources can produce ideal, ultra-realistic shadows in most cases.

The main difficulty is that not every early implementation will immediately look noticeably better than clever screen-space methods, but we can say with confidence that this is the direction to move in to reach photorealism, because screen-space algorithms have fundamental limitations that cannot be overcome. In many respects the picture in the existing demos is already quite good, even if it is rendered by several powerful GPUs and leans on clever noise reduction. For now we have to use few rays per pixel and suppress the noise, but in the future this will be solved simply by the growth of raw hardware power. These are only the very first experiments with real-time ray tracing; image quality will increase along with performance.

For now, over the next couple of years, games will be able to include one or two techniques that use ray tracing to complement rasterization or to replace only part of its work. This is how it always goes at the start of a new technology's life, when algorithms too heavy for the average gaming PC can simply be switched off; but targeting only that average PC would mean no progress at all. Nvidia's backing of hardware tracing matters because the company knows how to help developers adopt new technologies, and we are sure Metro Exodus is far from the only game in which Nvidia is promoting tracing, since it is working with game developers on several projects at once. The well-known Tim Sweeney of Epic Games predicted that within two years GPUs will be fast enough for widespread use of ray tracing in games, and that is easy to believe.

Developers closest to Microsoft began exploring DXR almost a year ago, and this is only the beginning for the new API. Moreover, there are simply no graphics cards on the market yet that support hardware acceleration of tracing. The DXR announcement is meant to get hardware and software developers working on understanding and optimizing ray tracing and to begin the early stage of bringing the new technology into games. Interested developers have already started experimenting with DXR on modern GPUs, and companies such as Epic Games, Futuremark, DICE, Unity and Electronic Arts have announced plans to use DXR in future versions of their engines and games.

Enthusiasts will most likely have to wait (such is our lot) for GPUs with hardware acceleration to become available before seeing even the first ray-traced effects, since the fallback path through compute shaders may be too slow even for the simpler algorithms. Games that make meaningful use of DXR will require hardware tracing support, which at first will exist only in Nvidia's Volta generation but promises to improve rapidly over time. It is also possible that relatively simple games with stylized graphics will appear that use ray tracing exclusively.

Another important point: the current generation of game consoles has no hardware acceleration for ray tracing, and Microsoft has said nothing about DXR on Xbox One. Most likely such support simply will not appear there, which could become another obstacle to the active use of ray tracing in games. Although the Xbox One supports DirectX 12 almost fully, it has no hardware units to accelerate tracing, so there is a good chance that, at least until the next console generation, things will be limited to a couple of ray-traced effects in a handful of projects supported by Nvidia as it promotes its RTX technology. We would love to be wrong, because computer graphics enthusiasts have long been waiting for improvements of this magnitude in real-time rendering.

At Gamescom 2018, Nvidia announced the GeForce RTX series of video cards, which support Nvidia RTX real-time ray tracing technology. Our editors looked into how this technology works and why it is needed.

What is Nvidia RTX?

Nvidia RTX is a platform containing a number of useful tools for developers that open access to a new level of computer graphics. Nvidia RTX is only available for the new generation of Nvidia GeForce RTX graphics cards, built on the Turing architecture. The platform's main feature is real-time ray tracing.

What is ray tracing?

Ray tracing is a technique that simulates the behavior of light to create believable lighting. In today's games rays are not traced in real time, which is why the picture, though often beautiful, still falls short of real realism: with current technology, ray tracing would require an enormous amount of resources.

This is corrected by the new series of Nvidia GeForce RTX video cards, which have enough power to calculate the path of rays.

How does it work?

RTX casts rays of light from the player's (camera's) point of view into the surrounding space and uses them to determine the color of each pixel. When a ray hits something, it can:

  • Reflect, producing a reflection on the surface it hits;
  • Stop, creating a shadow on the side of the object that the light did not reach;
  • Refract, changing the ray's direction or affecting its color.
These behaviors make it possible to create more believable lighting and more realistic graphics. The process is very resource-intensive and has long been used in film effects; the difference is that when rendering a film frame the authors have access to huge resources and, one might say, unlimited time, whereas in a game the device has only a fraction of a second to produce each image, usually on a single video card rather than the many used for film rendering.

This prompted Nvidia to add dedicated cores to GeForce RTX graphics cards that take on most of this load and improve performance. The cards are also equipped with artificial intelligence hardware whose task is to predict likely errors during tracing and avoid them in advance; according to the developers, this also increases rendering speed.

And how does ray tracing affect quality?

During the presentation of the video cards, Nvidia demonstrated a number of ray tracing examples; in particular, it became known that several upcoming games, including Shadow of the Tomb Raider and Battlefield 5, will run on the RTX platform. The feature will be optional, however, since tracing requires one of the new video cards. The trailers shown during the presentation can be viewed below:

Shadow of the Tomb Raider, which will be released on September 14 this year:

Battlefield 5, which will be released on October 19:

Metro Exodus, scheduled for release on February 19, 2019:

Control, the release date of which is still unknown:

Along with all this, Nvidia also announced which other games will receive the ray tracing feature.

How to enable RTX?

Due to the technical particulars of the technology, only video cards with the Turing architecture will support ray tracing: currently available devices cannot cope with the amount of work tracing requires. At the moment the only such cards are the Nvidia GeForce RTX series, whose models are available for pre-order at prices from 48,000 to 96,000 rubles.

Does AMD have analogues?

AMD has its own take on real-time ray tracing, present in its Radeon ProRender engine; the company announced the work back at GDC 2018, which took place in March. The main difference from Nvidia's approach is that AMD gives access not only to tracing but also to rasterization, the technique used in all games today. This lets developers use tracing where it improves lighting and save resources where it would only be an unnecessary load on the video card.

The technology that will run on the Vulkan API is still in development.

As Nvidia said during its presentation, mastering RTX technology will significantly improve the graphics of games by expanding the set of tools available to developers. However, it is too early to talk about a general graphics revolution: not every game will support the technology, and the cost of the video cards that do is quite high. Still, the new cards show that progress in graphics continues, and over time it will only grow.

INTRODUCTION

There are several methods for generating realistic images, such as forward ray tracing (photon tracing) and reverse ray tracing.

Ray tracing methods are considered the most powerful and versatile methods for creating realistic images today. There are many examples of the implementation of tracing algorithms for high-quality display of the most complex three-dimensional scenes. It can be noted that the universality of tracing methods is largely due to the fact that they are based on simple and clear concepts that reflect our experience of perceiving the world around us.

The objects around us have the following properties in relation to light:

radiate;

reflect and absorb;

pass through themselves.

Each of these properties can be described by a certain set of characteristics.

Radiation can be characterized by intensity and spectrum.

The property of reflection (absorption) can be described by the characteristics of diffuse scattering and specular reflection. Transparency can be described by intensity attenuation and refraction.

Rays of light emanate from points on the surface (volume) of emitting objects. You can call such rays primary - they illuminate everything else. Countless primary rays emanate from radiation sources in various directions. Some rays go into free space, and some hit other objects.

As a result of the action of primary rays on objects, secondary rays arise. Some of them end up on other objects. Thus, being reflected and refracted many times, individual light rays arrive at the observation point. Thus, the image of the scene is formed by a certain number of light rays.

The color of individual image points is determined by the spectrum and intensity of the primary rays of the radiation sources, as well as the absorption of light energy in objects encountered on the path of the corresponding rays.

Direct implementation of this ray imaging model seems difficult. You can try to construct an algorithm for constructing an image using the indicated method. In such an algorithm, it is necessary to provide an enumeration of all primary rays and determine those that hit the objects and the camera. Then iterate over all secondary rays, and also take into account only those that hit objects and the camera. And so on. This algorithm is called direct ray tracing. The main disadvantage of this method is a lot of unnecessary operations associated with the calculation of rays, which are then not used.

1. REVERSE RAY TRACING

This work is devoted to this method of generating realistic images.

The reverse ray tracing method can significantly reduce the search for light rays. The method was developed in the 80s; the works of Whitted and Kay are considered fundamental. According to this method, rays are tracked not from light sources, but in the opposite direction - from the observation point. This way, only those rays that contribute to the formation of the image are taken into account.

The projection plane is divided into many pixels. Let us choose a central projection with the center of projection at some distance from the projection plane. Draw a straight line from the center of projection through the middle of a pixel of the projection plane; this is the primary backward-traced ray. If the ray hits one or more objects in the scene, select the nearest intersection point. To determine the pixel's color, we must take into account the properties of the object and the light that arrives at the corresponding point of the object.
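Generating the primary rays is straightforward; here is a minimal Python sketch for such a central projection, assuming a camera at the origin looking down the -z axis with a symmetric field of view (the image size and field of view are illustrative parameters, not taken from the text):

```python
import math

def primary_ray(px, py, width, height, fov_deg=60.0):
    """Unit direction of the backward-traced primary ray through the center
    of pixel (px, py), for a camera at the origin looking down -z."""
    aspect = width / height
    half = math.tan(math.radians(fov_deg) / 2.0)
    # Pixel center mapped onto the projection plane at z = -1.
    x = (2.0 * (px + 0.5) / width - 1.0) * aspect * half
    y = (1.0 - 2.0 * (py + 0.5) / height) * half
    length = math.sqrt(x * x + y * y + 1.0)
    return (x / length, y / length, -1.0 / length)

# One primary ray per pixel of a small image:
rays = [primary_ray(px, py, 320, 240) for py in range(240) for px in range(320)]
```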

If the object is mirrored (at least partially), then we construct a secondary ray - the incident ray, considering the previous, primary traced ray to be the reflection ray.

For an ideal mirror, it is then sufficient to trace only the next point of intersection of the secondary ray with some object. An ideal mirror has a perfectly smooth polished surface, so one reflected ray corresponds to only one incident ray. The mirror can be darkened, that is, absorb part of the light energy, but the rule still remains: one ray is incident, one is reflected.

If the object is transparent, then it is necessary to construct a new ray, one that, when refracted, would produce the previous traced ray.

For diffuse reflection, the intensity of the reflected light is known to be proportional to the cosine of the angle between the ray vector from the light source and the normal.

When it turns out that the current backtracing ray does not intersect any object, but goes into free space, then the tracing for this ray ends.
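The secondary rays mentioned above come from two standard formulas of geometric optics. A minimal sketch (plain Python, unit vectors assumed, with the normal pointing against the incident direction):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(d, n):
    """Mirror reflection of the incident direction d about the unit normal n."""
    k = 2.0 * dot(d, n)
    return tuple(d[i] - k * n[i] for i in range(3))

def refract(d, n, eta):
    """Refracted direction for incident d and relative refractive index
    eta = n1 / n2 (Snell's law).  Returns None on total internal reflection."""
    cos_i = -dot(d, n)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                       # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * d[i] + (eta * cos_i - cos_t) * n[i] for i in range(3))
```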

In the practical implementation of the backtracing method, restrictions are introduced. Some of them are necessary to be able to solve the problem of image synthesis in principle, and some restrictions can significantly improve the performance of tracing.

Limitations when implementing tracing

Among all types of objects, we will highlight some, which we will call light sources. Light sources can only emit light, but cannot reflect or refract it. We will consider only point light sources.

The properties of reflective surfaces are described by the sum of two components - diffuse and specular.

In turn, specularity is also described by two components. The first (reflection) takes into account reflection from other objects that are not light sources. Only one mirrored ray r is constructed for further tracing. The second component (specular) means glare from light sources. To do this, rays are directed to all light sources and the angles formed by these rays with the specularly reflected back-tracing ray (r) are determined. In specular reflection, the color of a point on a surface is determined by the intrinsic color of what is being reflected.

With diffuse reflection, only rays from light sources are taken into account. Rays from specularly reflective surfaces are IGNORED. If the beam directed at a given light source is blocked by another object, then this point of the object is in the shadow. With diffuse reflection, the color of an illuminated point on a surface is determined by the surface's own color and the color of the light sources.

For transparent objects, the dependence of the refractive index on wavelength is not taken into account. (Sometimes transparency is modeled without refraction at all, that is, the direction of the refracted ray t coincides with the direction of the incident ray.)

To take into account the illumination of objects by light scattered by other objects, a background component (ambient) is introduced.

To complete the tracing, a limit on the number of iterations (recursion depth) is introduced.

Conclusions from the traceback method

Advantages:

The versatility of the method, its applicability for the synthesis of images of rather complex spatial schemes. Embodies many laws of geometric optics. Various projections are simply implemented.

Even truncated versions of this method allow one to obtain fairly realistic images. For example, if we limit ourselves to only the primary rays (from the projection point), then this results in the removal of invisible points. Tracing just one or two secondary rays gives shadows and specular transparency.

All coordinate transformations are linear, so it's quite easy to work with textures.

Flaws:

Problems with modeling diffuse reflection and refraction.

For each point in the image, many computational operations must be performed. Tracing is one of the slowest image synthesis algorithms.

2. DESIGN PART

Algorithms.

Backward ray tracing.

Fig. 1 - Block diagram of the recursive backward ray tracing algorithm


In this program the backward tracing algorithm is implemented recursively: the function that computes the intensity of a primary ray calls itself to find the intensities of the reflected and refracted rays.

Algorithm:

To calculate the color of each pixel in the frame buffer, the following steps are performed:

1. Find the pixel coordinates in the world coordinate system.

2. Find the coordinates of the primary ray.

3. Start the primary ray intensity calculation function.

4. Find the intersections of the ray with all scene primitives and select the closest one.

5. If no intersection is found, the ray has gone off into free space; take the total intensity to be equal to the background intensity and go to step 12. If an intersection is found, go to step 6.

6. Calculate the "local" color intensity of the object to which the intersection point belongs. By "local" intensity we mean the intensity that takes into account the diffusely reflected light and the glare.

7. If the material reflects light only diffusely, take the intensities of the reflected and refracted light to be zero and go to step 12. Otherwise, go to step 8.

8. If the maximum recursion depth has been reached, take the intensities of the reflected and refracted light to be zero and go to step 12. Otherwise, go to step 9.

9. Calculate the vector of the reflected ray and run the recursion to find the intensity of the reflected ray.

10. Calculate the vector of the refracted ray and run the recursion to find the intensity of the refracted ray.

11. Calculate the total color intensity. The total intensity includes the background (ambient) intensity, the local intensity, and the intensities of the reflected and refracted rays.

12. Return to the point where the ray intensity calculation function was called.

13. If a primary ray was being calculated, place a pixel of the calculated color in the frame buffer and move on to the next pixel. If a reflected (refracted) ray was being calculated, the calculated intensity is taken as the intensity of the reflected (refracted) ray at the previous recursion step.
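For illustration, below is a minimal compilable sketch of such a recursive intensity function. All names here (Vec3, Hit, Scene, closestHit, localShading and so on) are illustrative placeholders rather than the program's actual classes, and the geometric helpers are reduced to trivial stubs:

#include <cstdio>

struct Vec3 {
    float x = 0, y = 0, z = 0;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
};

struct Hit {                        // result of the closest-intersection search (step 4)
    bool  found = false;
    Vec3  point, normal;
    float kd = 1.0f;                // diffuse coefficient KD of the material
    bool  purelyDiffuse = true;     // material with no specular/refracted component
};

struct Scene {
    Vec3 background{0.1f, 0.1f, 0.1f};   // background intensity used in step 5
    int  maxDepth = 5;                   // recursion depth limit checked in step 8
};

// Placeholder stubs standing in for the real geometry and shading code.
Hit  closestHit(const Scene&, Vec3, Vec3)         { return {}; }
Vec3 localShading(const Scene&, const Hit&, Vec3) { return {}; }  // diffuse term + glare (step 6)
Vec3 reflectDir(Vec3 d, Vec3)                     { return d; }
Vec3 refractDir(Vec3 d, Vec3)                     { return d; }

// Steps 3-12: called once per primary ray and recursively for the
// reflected and refracted rays.
Vec3 traceRay(const Scene& s, Vec3 origin, Vec3 dir, int depth)
{
    Hit hit = closestHit(s, origin, dir);                 // step 4
    if (!hit.found) return s.background;                  // step 5

    Vec3 local = localShading(s, hit, dir);               // step 6

    Vec3 reflected, refracted;                            // remain zero in steps 7-8
    if (!hit.purelyDiffuse && depth < s.maxDepth) {
        reflected = traceRay(s, hit.point, reflectDir(dir, hit.normal), depth + 1);  // step 9
        refracted = traceRay(s, hit.point, refractDir(dir, hit.normal), depth + 1);  // step 10
    }
    // Step 11: ambient + local, plus the specular part weighted by (1 - KD);
    // the KD/KS weights inside the local term are assumed to be applied by localShading.
    return s.background * hit.kd + local + (reflected + refracted) * (1.0f - hit.kd);
}

int main()
{
    Scene scene;
    Vec3 i = traceRay(scene, {0, 0, 0}, {0, 0, 1}, 0);    // one primary ray (steps 1-3)
    std::printf("%f %f %f\n", i.x, i.y, i.z);
}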

Construction of shadows.

Solid shadows.

To construct solid shadows in the tracing algorithm, at the stage of calculating the “local” color intensity at an object point, the “visibility” of each light source from this point is checked.

The principle of operation of the algorithm.

A beam is constructed from the point being checked, directed at the light source.

A search is made for intersections of this ray with scene primitives between the point being checked and the source.

If at least one intersection is found, then the point being checked is in the shadow. When calculating its color, the source for which the test was carried out is not taken into account.


This method of finding shadows gives an acceptable result as long as there are no transparent objects in the scene. However, the solid black shadow of the glass does not look realistic. Glass transmits some light, so the intensity of the obscured source must be taken into account when calculating the light intensity at a point on an object, but it must be attenuated as light passes through the glass.
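A minimal sketch of this visibility test is shown below. Scene, Vec3 and anyHitBetween are hypothetical placeholders (the actual program works with its own TScene and primitive classes), and the attenuation by transparent objects discussed above is deliberately not modeled:

struct Vec3 { float x = 0, y = 0, z = 0; };

struct Scene { /* primitives, light sources, ... */ };

// Placeholder stub: should return true if any primitive intersects the
// segment between 'from' and 'to' (excluding the endpoints themselves).
bool anyHitBetween(const Scene&, const Vec3& from, const Vec3& to) { return false; }

// Solid-shadow test: the source at lightPos contributes to the shading of
// 'point' only if nothing blocks the ray cast from the point towards it.
bool lightVisible(const Scene& scene, const Vec3& point, const Vec3& lightPos)
{
    return !anyHitBetween(scene, point, lightPos);
}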

Mathematical and physical background of the inverse ray tracing algorithm.

Lighting.

The light intensity is the sum of the intensity of the background illumination of the scene, the intensity of the diffusely reflected light from the sources, the intensity of glare from the sources (“local” illumination characteristics), the intensity of the specularly reflected beam and the intensity of the refracted beam, if any.

The background illumination intensity (IA) is set by a constant.

The intensity of diffusely reflected light (ID) is calculated using the classic “cosine law”.

ID = IL cos α,(2.2.1.6)

where IL is the intensity of the light source, α is the angle between the normal to the surface and the direction towards the source.

If the scene is illuminated by several sources, ID is calculated for each of them and then summed:

ID = Σ ILi cos αi. (2.2.1.7)

The source flare intensity (IS) is calculated according to the Phong model.

IS = IL cos^p β, (2.2.1.8)

where IL is the intensity of the light source (0 <= IL <= 1), β is the angle between the ray reflected from the light source and the direction to the point where the camera (the center of projection) is located, and p is an exponent from 1 to 200 that controls the blurriness of the highlight. At small values of p the highlight is more blurred.

As with calculating ID, when a scene is illuminated by multiple sources, IS is calculated separately for each source, and then the results are summed.

IS = Σ ILi cos^p βi. (2.2.1.9)

The specularly reflected (IR) and refracted (IT) light intensities are calculated for the reflected and refracted rays at the next recursion step. If the recursion depth limit is reached, these intensities are taken to be zero. A fraction r of IR is taken, and a fraction t = 1 - r of IT (see the previous section).

In addition, the following coefficients are introduced:

KD is the diffuse reflection coefficient of the surface. It characterizes the roughness of the reflecting surface: the more uneven the surface, the less light it reflects specularly and the less light it transmits, and accordingly the more light it reflects diffusely; 0 <= KD <= 1. At KD = 0, all the light falling on the surface is reflected specularly and refracted; at KD = 1, all the light is reflected diffusely. The intensity of the diffusely reflected light and the background illumination intensity are multiplied by this coefficient, while the intensities of the specularly reflected and refracted light are multiplied by (1 - KD).

KS is the glare coefficient. It is responsible for the brightness of the highlight from the source; 0 <= KS <= 1. At KS = 0 the highlight is not visible; at KS = 1 the brightness of the highlight is maximal.

Thus, the final formula for calculating the intensity of an object at any point will be as follows:

I = IA·KD + Σ(ILi·KD·cos αi + ILi·KS·cos^p βi) + (1 - KD)(IR·r + IT·(1 - r)). (2.2.1.10)

It should be noted that the final intensity must not be greater than one. If this happens, that point in the image is overexposed, and its intensity must be clamped to one.

To obtain a color image, it is necessary to carry out calculations separately for the red, green and blue components of light. The color of an image pixel will be calculated by multiplying each intensity component by a number that determines the maximum number of intensity gradations in the image. For a 32-bit image it is equal to 255 for each color (R, G, B).

R = 255·IR, G = 255·IG, B = 255·IB.

Here IR (not to be confused with the intensity of specularly reflected light), IG, IB are the intensities of the three components of light at a point, obtained using the formula indicated above.

Coefficients KD, KS, p are individual characteristics of an object, reflecting its properties. In addition, there is one more coefficient - the absolute refractive index n. n = c / v, where c is the speed of light in a vacuum, v is the speed of light in the medium (inside the object). For absolutely opaque bodies, this coefficient is equal to ∞ (since the speed of light inside the body is zero). In the program to specify a completely opaque body, you need to set this coefficient >> 1 (about 10,000). In this case, the fraction of specularly reflected light r will tend to unity, and that of refracted light, respectively, to zero.
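As an illustration of formula (2.2.1.10), here is a small sketch that evaluates the intensity per color channel and clamps overexposed values. The Channel3, LightSample and Material records are illustrative placeholders, and the cosines are assumed to be precomputed from the geometry (negative cosines are clamped to zero here, an assumption not spelled out above):

#include <algorithm>
#include <cmath>
#include <vector>

struct Channel3 { float v[3] = {0, 0, 0}; };     // red, green, blue intensities

struct LightSample {                             // per-source data at the shaded point
    Channel3 il;                                 // source intensity ILi
    float cosAlpha;                              // angle between the normal and the direction to the source
    float cosBeta;                               // angle between the reflected source ray and the camera direction
};

struct Material {
    float kd;                                    // diffuse coefficient KD, 0..1
    float ks;                                    // glare coefficient KS, 0..1
    float p;                                     // highlight exponent, 1..200
};

// ia - background intensity IA, ir/it - intensities returned by the reflected
// and refracted rays, r - the fraction of specularly reflected light.
Channel3 shade(const Channel3& ia, const std::vector<LightSample>& lights,
               const Material& m, const Channel3& ir, const Channel3& it, float r)
{
    Channel3 out;
    for (int c = 0; c < 3; ++c) {
        float sum = ia.v[c] * m.kd;                                              // IA*KD
        for (const LightSample& L : lights) {
            sum += L.il.v[c] * m.kd * std::max(0.0f, L.cosAlpha)                 // diffuse term
                 + L.il.v[c] * m.ks * std::pow(std::max(0.0f, L.cosBeta), m.p);  // glare term
        }
        sum += (1.0f - m.kd) * (ir.v[c] * r + it.v[c] * (1.0f - r));             // specular part
        out.v[c] = std::min(sum, 1.0f);                                          // clamp overexposed points
    }
    return out;
}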

Calculation of normals.

In the tracing algorithm, normals to objects are needed to calculate reflected and refracted rays, as well as to determine illumination according to Phong's model.

This program contains three types of primitives from which the scene is built. These are polygon (triangle), ellipsoid and paraboloid. The last two were introduced for a more realistic simulation of a glass (it could have been built from polygons, but the model would have been rougher).

Calculation of the normal to a polygon (triangle).

Calculating the normal to a triangle reduces to a cross (vector) product operation. Let triangle ABC be given by the coordinates of its three vertices:

XA, YA, ZA, XB, YB, ZB, XC, YC, ZC.

Let's calculate the coordinates of two vectors, for example AB and AC:

XAB = XB - XA, YAB = YB - YA, ZAB = ZB - ZA, (2.2.2.1)

XAC = XC - XA, YAC = YC - YA, ZAC = ZC - ZA.

The coordinates of the normal vector will be calculated using the formulas:

Xn = YAB·ZAC - ZAB·YAC, Yn = ZAB·XAC - XAB·ZAC, Zn = XAB·YAC - YAB·XAC. (2.2.2.2)

There is no need to calculate the coordinates of the normal vector to the triangle each time in the tracing body, since the normals are the same at any point of the triangle. It is enough to count them once in the initializing part of the program and save them. When you rotate a triangle, you must also rotate its normal.
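A minimal sketch of this computation, formulas (2.2.2.1)-(2.2.2.2); Vec3 is an illustrative type rather than the program's TPoint3d:

struct Vec3 { float x, y, z; };

// Cross product of the edge vectors AB and AC gives the triangle normal.
// As noted above, this is computed once when the scene is loaded.
Vec3 triangleNormal(const Vec3& a, const Vec3& b, const Vec3& c)
{
    Vec3 ab{ b.x - a.x, b.y - a.y, b.z - a.z };   // (2.2.2.1)
    Vec3 ac{ c.x - a.x, c.y - a.y, c.z - a.z };
    return { ab.y * ac.z - ab.z * ac.y,           // (2.2.2.2)
             ab.z * ac.x - ab.x * ac.z,
             ab.x * ac.y - ab.y * ac.x };
}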

Calculation of the normal to a second order surface.

A second-order surface is given in the general case by an equation of the form:

Q(x, y, z) = a1x² + a2y² + a3z² + b1yz + b2xz + b3xy + c1x + c2y + c3z + d = 0.

But we will use a different form of recording. So the equation of the ellipsoid will look like this:

(x - x0)²/A² + (y - y0)²/B² + (z - z0)²/C² = 1, (2.2.2.3)

where x0, y0, z0 are the coordinates of the center of the ellipsoid, A, B, C are the lengths of the semi-axes of the ellipsoid.

Paraboloid equation:

(x - x0)²/A² + (y - y0)²/B² - (z - z0)²/C² = 1, (2.2.2.4)

where x0, y0, z0 are the coordinates of the center of the paraboloid, A, B, C are the lengths of the semi-axes of the paraboloid. The axis of the paraboloid is located along the Oz axis of the world coordinate system. To calculate the coordinates of the normal vector, it is necessary to calculate the partial derivatives with respect to x, y, z.

Coordinates of the ellipsoid normal vector:

Xn = 2(x - x0)/A², Yn = 2(y - y0)/B², Zn = 2(z - z0)/C².

The direction of the vector will not change if all its coordinates are divided by 2:

Xn = (x - x0)/A², Yn = (y - y0)/B², Zn = (z - z0)/C². (2.2.2.5)

The coordinates of the paraboloid normal vector are calculated similarly:

Xn = (x - x0)/A², Yn = (y - y0)/B², Zn = -(z - z0)/C². (2.2.2.6)

The normal for a second-order surface will have to be calculated directly in the tracing body, since the normals are different at different points of the figure.
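A small sketch of these normal formulas, (2.2.2.5) and (2.2.2.6), evaluated at a surface point p; Vec3 and the Quadric record are illustrative placeholders for the program's own classes:

struct Vec3 { float x, y, z; };

struct Quadric {
    Vec3  center;       // x0, y0, z0
    float a2, b2, c2;   // squared semi-axes A^2, B^2, C^2
    bool  negativeZ;    // true for (2.2.2.6), where Zn takes a minus sign
};

// Normal vector (up to normalization) at a point p lying on the surface.
Vec3 quadricNormal(const Quadric& q, const Vec3& p)
{
    float zn = (p.z - q.center.z) / q.c2;
    return { (p.x - q.center.x) / q.a2,
             (p.y - q.center.y) / q.b2,
             q.negativeZ ? -zn : zn };
}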

Calculation of the reflected ray.

Let the vector of the incident ray S be given, and the normal vector N be known. We need to find the vector of the reflected ray R.

Let's consider the unit vectors R1, S1 and N1. Since the vectors of the normal, the incident ray and the reflected ray are in the same plane, we can write R1 + S1 = N`, where N` is the vector corresponding to the diagonal of the rhombus and coinciding in direction with the normal. The length of the vector N` is 2cosθ. Since the vector N` coincides in direction with N1, then

N` = N1 · 2cosθ.

From here we find the unit vector of the reflected ray:

R1 = N1 2cosθ - S1 = N/|N| 2cosθ - S/|S|.

Let's find cosθ. This can be done using the scalar product of the vectors N and S:

cosθ = (N·S) / (|N|·|S|).
Assuming that the desired vector of the reflected ray will have the same length as the vector of the incident ray, that is, R = |S| R1, we get

R = 2N(N·S)/|N|² - S.

This is a solution in vector form. Let's write down the coordinates of the vector:

xR = 2xN(xN·xS + yN·yS + zN·zS)/(xN² + yN² + zN²) - xS,
yR = 2yN(xN·xS + yN·yS + zN·zS)/(xN² + yN² + zN²) - yS, (2.2.3.1)
zR = 2zN(xN·xS + yN·yS + zN·zS)/(xN² + yN² + zN²) - zS.
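A minimal sketch of this reflected-ray formula; Vec3 is an illustrative type, and S is assumed to point from the surface point towards the origin of the ray, as in the derivation above:

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// R = 2 N (N.S) / |N|^2 - S, i.e. formula (2.2.3.1)
Vec3 reflect(const Vec3& s, const Vec3& n)
{
    float k = 2.0f * dot(n, s) / dot(n, n);
    return { n.x * k - s.x, n.y * k - s.y, n.z * k - s.z };
}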

Calculation of a refracted ray.

Let two unit vectors be given: S1 is the vector of the incident ray, and N1 is the normal vector to the interface between the two media. Also, two refractive indices for these media must be known - n1 and n2 (or their ratio).

It is required to find the unit vector of the refracted ray T1. To solve this, let's perform some geometric constructions.

The required vector T1 is equal to the sum of two vectors NT and B: T1 = NT + B.

Let's first find the vector NT. It is opposite in direction to the normal vector, and its length is |T1| cos α2 = cos α2 (since T1 is unit). Thus, NT = -N1 cos α2. It is necessary to determine cos α2. Let us write the law of refraction n1 sin α1 = n2 sin α2 in the form:

sin α2 = n sin α1,

where n = n1 / n2.

Let us use the identity cos²α + sin²α = 1. Then

cos α2 = √(1 - sin²α2) = √(1 - n²·sin²α1),

cos α2 = √(1 + n²(cos²α1 - 1)).

The value of cos α1 can be expressed through the scalar product of the unit vectors S1 and N1, that is, cos α1 = S1N1. Then we can write the following expression for the vector NT:

NT = -N1·√(1 + n²((S1·N1)² - 1)).

It remains to find an expression for vector B. It is located on the same line as vector A, and A = S1 - NS. Considering that NS is equal to N1 cos α1, then A = S1 - N1 cos α1. Since cos α1 = S1N1, then A = S1 - N1 (S1N1).

Since the length of vector A is equal to sin α1, and the length of vector B is equal to sin α2, then

|B|/|A| = sin α2 / sin α1 = n1/n2 = n,

whence |B| = n·|A|. Taking into account the relative position of the vectors A and B, we obtain

B = n(N1(S1·N1) - S1).

Now we can write down the desired expression for the unit vector of the refractive ray T1:

T1 = n·N1(S1·N1) - n·S1 - N1·√(1 + n²((S1·N1)² - 1)). (2.2.4.1)
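A minimal sketch of formula (2.2.4.1). S1 and N1 are assumed to be unit vectors with S1 pointing from the surface towards the ray origin, and n = n1/n2; Vec3 is an illustrative type. A negative radicand (total internal reflection) is reported through the return flag, a case not discussed above:

#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

bool refract(const Vec3& s1, const Vec3& n1, float n, Vec3& t1)
{
    float c1 = dot(s1, n1);                            // cos(alpha1)
    float radicand = 1.0f + n * n * (c1 * c1 - 1.0f);
    if (radicand < 0.0f) return false;                 // no refracted ray
    float c2 = std::sqrt(radicand);                    // cos(alpha2)
    // T1 = n*N1*(S1.N1) - n*S1 - N1*cos(alpha2)
    t1 = { n * n1.x * c1 - n * s1.x - n1.x * c2,
           n * n1.y * c1 - n * s1.y - n1.y * c2,
           n * n1.z * c1 - n * s1.z - n1.z * c2 };
    return true;
}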

Calculation of the intersection point with primitives.

In the tracing algorithm, to construct an image, it is necessary to calculate the points of intersection of rays with scene primitives. The ray is given by the parametric equation of a straight line. Any point on the ray satisfies the equation

R = A + Vt,(2.2.5.1)

where R is the radius vector of an arbitrary point belonging to the ray, A is the radius vector of the starting point of the ray, V is the direction vector of the ray, t is a parameter.

If the direction vector V is normalized, then the parameter t will be numerically equal to the distance from the starting point of the ray A to the point R.

You can write this equation in coordinate form:

x = x1 + at, y = y1 + bt, z = z1 + ct. (2.2.5.2)

Here x1, y1, z1 are the coordinates of the starting point of the ray in the rectangular Cartesian world coordinate system, a, b, c are the coordinates of the guiding vector of the ray.

Calculation of the point of intersection of a ray with a second-order surface.

To find the point of intersection of the ray given by equations (2.2.5.2) with the second-order surface given by equation (2.2.2.3) or (2.2.2.4):

(x - x0)²/A² + (y - y0)²/B² + (z - z0)²/C² = 1 (ellipsoid),

(x - x0)²/A² + (y - y0)²/B² - (z - z0)²/C² = 1 (paraboloid),

you need to substitute the corresponding ray equations for x, y and z in the surface equation. After expanding all the brackets and collecting like terms, we obtain a quadratic equation in the parameter t. If the discriminant of the quadratic equation is less than zero, the ray and the second-order surface have no common points. Otherwise, two values of the parameter t can be calculated. The discriminant can also be equal to zero; this corresponds to the limiting case of the ray touching the surface, and we obtain two coinciding values of the parameter t.

To find the coordinates of the points of intersection of the ray and the surface, it is enough to substitute the found values of the parameter t into the ray equations (2.2.5.2).

In the program, when two intersections are found, the closest one is selected for visualization. The closest intersection is determined by comparing the found parameters t: the intersection corresponding to the smaller value of t is closer to the observation point. It should be noted that, as a result of solving the quadratic equation, one or both values of the parameter t may turn out to be negative. This means that the intersection point lies "behind" the ray's origin, on the half of the line located "on our side" of the picture plane. Such points are discarded when searching for an intersection.

In addition, the program includes upper and lower cutting planes for each figure. Only the part of the figure lying between them is displayed.

To do this, after finding the intersection point, its z-coordinate is analyzed.
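For illustration, a small sketch of the ray/ellipsoid intersection described here: the ray (2.2.5.2) is substituted into (2.2.2.3), the resulting quadratic in t is solved, and the smallest positive root is chosen. Vec3 and Ellipsoid are illustrative placeholders, and the cutting-plane test on the z-coordinate is only indicated by a comment:

#include <cmath>

struct Vec3 { float x, y, z; };
struct Ellipsoid { Vec3 c; float a2, b2, c2; };   // centre and squared semi-axes

// Returns true and the distance t of the closest visible intersection,
// or false if the ray misses the ellipsoid or the surface lies behind the origin.
bool intersectEllipsoid(const Ellipsoid& e, const Vec3& p, const Vec3& v, float& tOut)
{
    // Components of the ray origin shifted to the ellipsoid centre.
    float dx = p.x - e.c.x, dy = p.y - e.c.y, dz = p.z - e.c.z;
    // Quadratic A t^2 + B t + C = 0 obtained after the substitution.
    float A = v.x * v.x / e.a2 + v.y * v.y / e.b2 + v.z * v.z / e.c2;
    float B = 2.0f * (dx * v.x / e.a2 + dy * v.y / e.b2 + dz * v.z / e.c2);
    float C = dx * dx / e.a2 + dy * dy / e.b2 + dz * dz / e.c2 - 1.0f;
    float D = B * B - 4.0f * A * C;
    if (D < 0.0f) return false;                   // no intersection
    float sq = std::sqrt(D);
    float t1 = (-B - sq) / (2.0f * A);            // nearer root
    float t2 = (-B + sq) / (2.0f * A);
    float t  = (t1 > 0.0f) ? t1 : t2;             // discard points behind the ray origin
    if (t <= 0.0f) return false;
    tOut = t;                                     // the cutting-plane test on z would follow here
    return true;
}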

Calculation of the point of intersection of a ray with a polygon (triangle).

To calculate the point of intersection of the ray given by equations (2.2.5.2) with a triangle, it is necessary first to determine the point of intersection of this ray with the plane containing the triangle.

The plane equation looks like this:

Q(x, y, z) = Ax + By + Cz +D = 0.(2.2.5.3)

Here the coefficients A, B, C coincide with the coordinates of the normal to this plane. The normal coordinates of the plane coincide with the normal coordinates of the triangle, which we calculated at the stage of loading the scene.

To find the free term D, it is necessary to substitute the coordinates of any point of the triangle, for example, one of the vertices.

D = -Ax - By - Cz. (2.2.5.4)

During program execution, the value of D will not change, so it is advisable to calculate it when initializing the scene and store it, like the coordinates of the normal. It is necessary to recalculate only when the position of the triangle changes.

Now, to find the intersection point, we substitute the ray equations (2.2.5.2) into the plane equation:

A(x1 + at) + B(y1 + bt) + C(z1 + ct) + D = 0,

from which we obtain:

t = -(Ax1 + By1 + Cz1 + D) / (Aa + Bb + Cc). (2.2.5.5)

If the denominator of this fraction is zero, then the ray is parallel to the plane in which the triangle lies. There is no intersection point.

To find the coordinates of the intersection point, substitute the found value of the parameter t into the ray equations (2.2.5.2). Let's call the intersection point D; we get the coordinates xD, yD, zD.

Now you need to determine whether point D is inside the triangle. Let's find the coordinates of the vectors AB, BC, CA (A, B, C are the vertices of the triangle) and the coordinates of the vectors AD, BD, CD. Then we find three vector products:

nA = AB × AD, nB = BC × BD, nC = CA × CD. (2.2.5.6)

These vectors will be collinear. If all three vectors are co-directional, then point D lies inside the triangle. Codirectionality is determined by the equality of the signs of the corresponding coordinates of all three vectors.

The check of whether point D belongs to triangle ABC can be sped up. If we orthogonally project triangle ABC and point D onto one of the planes xOy, yOz or xOz, then the projection of the point falling inside the projection of the triangle means that the point itself lies inside the triangle (provided, of course, that point D is already known to lie in the plane containing triangle ABC). At the same time, the number of operations is noticeably reduced: only two coordinates need to be computed for each vector, and when computing the cross products only one coordinate is needed (the rest are zero).

To check the co-directionality of the vectors obtained when calculating the vector product, you need to check the signs of this single coordinate for all three vectors. If all signs are greater than zero or less than zero, then the vectors are co-directional. If one of the vector products is zero, it corresponds to the case when point D falls on the line containing one of the sides of the triangle.

In addition, a simple dimensional test can be performed before calculating vectors and cross products. If the projection of point D lies to the right, to the left, above or below each of the projections of the vertices of the triangle, then it cannot lie inside.

It remains to add that for projection it is better to choose the plane on which the triangle’s projection area is larger. Under this condition, the case of projecting a triangle into a segment is excluded (provided that the triangle being tested is not degenerate into a segment). In addition, as the projection area increases, the probability of error decreases. To determine such a projection plane, it is enough to check the three coordinates of the triangle normal. If the z-coordinate of the normal is greater (in absolute value) x and y, then it must be projected onto the xOy plane. If y is greater than x and z, then we project onto xOz. In the remaining case - on yOz.
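A compact sketch of the ray/triangle test described in this subsection: first the intersection with the triangle's plane using (2.2.5.5), then the containment check based on the signs of the cross products, carried out on a 2D projection. Vec3 and Triangle are illustrative placeholders, and for brevity the projection is always onto the xOy plane rather than onto the plane of largest projection area chosen above:

#include <cmath>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 a, b, c, n; float d; };   // normal n and plane term D are precomputed

static float cross2(float ax, float ay, float bx, float by) { return ax * by - ay * bx; }

bool intersectTriangle(const Triangle& tr, const Vec3& p, const Vec3& v, float& tOut)
{
    float denom = tr.n.x * v.x + tr.n.y * v.y + tr.n.z * v.z;
    if (std::fabs(denom) < 1e-8f) return false;             // ray parallel to the plane
    float t = -(tr.n.x * p.x + tr.n.y * p.y + tr.n.z * p.z + tr.d) / denom;   // (2.2.5.5)
    if (t <= 0.0f) return false;                            // plane behind the ray origin

    float x = p.x + v.x * t, y = p.y + v.y * t;             // projected intersection point D
    // Signs of the cross products AB x AD, BC x BD, CA x CD in the projection.
    float s1 = cross2(tr.b.x - tr.a.x, tr.b.y - tr.a.y, x - tr.a.x, y - tr.a.y);
    float s2 = cross2(tr.c.x - tr.b.x, tr.c.y - tr.b.y, x - tr.b.x, y - tr.b.y);
    float s3 = cross2(tr.a.x - tr.c.x, tr.a.y - tr.c.y, x - tr.c.x, y - tr.c.y);
    if ((s1 >= 0 && s2 >= 0 && s3 >= 0) || (s1 <= 0 && s2 <= 0 && s3 <= 0)) {
        tOut = t;                                           // point D lies inside the triangle
        return true;
    }
    return false;
}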

Description of data types. Program structure.

Description of program modules

List of modules (header files), each describing one of the program's structures or classes: TTex; TPlaneTex and TEllipsoidTex; TPoint2d and TPoint3d; TRGBColor; TLamp; TCam; TPrimitive; TFrstSurface; TScndSurface; TTriangle; TEllipsoid; TCylinder; THyperboloidVert; THyperboloidHor; TScene; TTracer.

Implementing modules, program interface:

Options.h - module of the "Options" form

ExtraCamOptions.h - module of the "Camera Properties" form

MainUnit.h - module of the main form of the program

A brief description of the structures and classes of the program:

TPoint3d - a structure describing a point in the world coordinate system.

TPoint2d - a structure describing a point on a plane (in a texture) with integer coordinates.

TRGBColor - a structure describing a color by its three components (RGB).

TTex - a structure describing a texture; contains the address of the pixel array and its dimensions.

TPlaneTex - a structure describing the binding of a texture to a plane; contains the three points to which the texture is attached.

TLamp - a class describing a light source; contains a TPoint3d coord object with the source coordinates and three float variables (Ir, Ig, Ib) storing the intensities of the three light components.

TCam - a class describing the camera; contains two angles (a, b) specifying the viewing direction, the point at which the camera is aimed (viewP), and the distance from the camera to that point (r).

TPrimitive - an abstract primitive class; surfaces of the first and second order are inherited from it.

TFrstSurface - an abstract class of a first-order surface; the triangle class is inherited from it.

TScndSurface - an abstract class of a second-order surface; the ellipsoid and paraboloid classes are inherited from it.

TTriangle - a triangle class; contains the three vertices of the triangle and its normal.

TCylinder - a cylinder class.

THyperboloidVert - a class of a one-sheet hyperboloid lying along the oZ axis.

THyperboloidHor - a class of a one-sheet hyperboloid lying along the oX axis.

TEllipsoid - an ellipsoid class.

TScene - a scene class; contains information about all primitives, light sources and the camera.

TTracer - the class responsible for constructing the image; contains a 400x400 pixel buffer in which the scene image is formed. Before generation, a function must be called that receives a pointer to the scene to be rendered; to generate the image, the render function is called.

All TPrimitive descendant classes provide the following functions: getT(TPoint3d p0, TPoint3d viewDir) - returns the distance from the start point (p0) of the viewDir ray to the nearest intersection point with the primitive.

void getTArr(float* arr, int& n, TPoint3d p0, TPoint3d viewDir) - fills the arr array with the distances from the start point (p0) of the viewDir ray to all intersection points with the primitive.

void getNormal(TPoint3d& n, const TPoint3d& p) - returns the coordinates of the normal vector to the primitive at point p.

void getColor(TRGBColor& c, const TPoint3d& p) - returns the color of the primitive at point p (taking into account the texture).
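A possible shape of this interface, reconstructed from the descriptions above, is sketched below; the return type of getT and the exact layout of TPoint3d and TRGBColor are assumptions, not the program's actual declarations:

struct TPoint3d  { float x, y, z; };
struct TRGBColor { float r, g, b; };

class TPrimitive {
public:
    virtual ~TPrimitive() {}
    // Distance from p0 along viewDir to the nearest intersection with the primitive.
    virtual float getT(TPoint3d p0, TPoint3d viewDir) = 0;
    // Distances to all intersections of the ray with the primitive.
    virtual void  getTArr(float* arr, int& n, TPoint3d p0, TPoint3d viewDir) = 0;
    // Normal vector to the primitive at point p.
    virtual void  getNormal(TPoint3d& n, const TPoint3d& p) = 0;
    // Color of the primitive at point p (taking the texture into account).
    virtual void  getColor(TRGBColor& c, const TPoint3d& p) = 0;
};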

3. TECHNOLOGICAL PART

Choosing a programming language.

When developing the program, the high-level programming language C++ was used as part of the visual programming environment CodeGear RAD Studio for Windows.

This language was chosen due to the fact that it provides the most convenient means for working with RAM and allows you to implement algorithms more efficiently compared to other high-level languages. Programs written in C++ run faster and take up less disk space.

Additionally, the visual programming environment CodeGear RAD Studio for Windows provides a large number of standard visual components for creating an interface, as well as a number of libraries with various commonly used functions. The author also has the most programming experience in this environment.

Options form. Lighting tab.

This tab contains tools for setting up scene lighting.

Source coordinates - coordinates in the world coordinate system of the light source selected in the drop-down list.

Source intensity - values ​​of the three intensity components of the light source selected in the drop-down list.

Background intensity - values ​​of the three components of background intensity.

Button “+” (next to the drop-down list) - adding a new light source.

Button “-” (next to the drop-down list) - deletes the light source selected in the drop-down list.

Options form. Camera tab.

This tab contains tools for configuring camera options.

Preview - here you can see an approximate appearance of the image before it is generated.

Navigation - camera position settings.

Advanced - when you click this button, the Camera Properties form appears with additional camera options.

Camera properties form.

Radius - the distance from the camera to the point at which it is aimed.

Radius change step - increment of the camera radius by pressing the “-” button once on the “Camera” tab of the “Options” form (or decreasing by pressing the “+” button once).

Options form. "materials" tab.

This tab displays the parameters of the material of the table on which the scene stands.

Color - the color of the table material.

Coef. diffuse reflection - coefficient Kd of the table material (see section 2.2.1).

Texture - if the checkbox is checked, a texture will be displayed on the table

Select texture - select an image file (*.bmp) that will be used as a table texture.

Advanced - when you click this button, the Table Properties form appears with additional parameters for the table material.

Table properties form.

Glare coefficient is the KS coefficient of the table material (see section 2.2.1).

The blur of the highlight is the exponent p of the table material.

Texture repetitions - how many times the table texture will be repeated along the OX and OY axes.

Options form. System tab.

On this tab you can configure the algorithms implemented in the program.

Recursion depth - this parameter sets the recursion depth in the tracing algorithm. With larger values of this parameter, the quality of the generated image improves.

ATTENTION!

The recursion depth STRONGLY affects the speed of image generation. It is not recommended to set this parameter to values greater than 10.

Antialiasing - enable the image smoothing algorithm.

Shadow type - select the shadow generation algorithm.

4. RESEARCH PART

The studies were carried out on a computer with the following configuration:

CPU - Intel Core 2 Duo T5850; RAM - 2048 MB DDR2; video card - Nvidia GeForce 9300M 256 MB; OS - Windows 7.

4.1 Dependence of generation time on recursion depth

This test examined the dependence of image generation time on recursion depth. The studies were carried out for a scene illuminated by one light source; the measured values were the generation time without shadows (in seconds), the generation time with solid shadows (in seconds), and the recursion depth.


4.2 Dependence of generation time on the number of sources


4.3 Analysis of research results

From the first study it is clear that the generation time increases greatly with the number of recursion levels. This fits well with the theory, because the number of rays increases with increasing recursion depth.

It should be noted that for scenes with a small number of polygons there is no need to set large values ​​for the maximum recursion depth, because the difference in the quality of the generated image will be insignificant.

The second study showed that the dependence of generation time on the number of light sources is linear. From the obtained values, you can calculate the time required to calculate one source. On the machine on which the research was carried out, with a recursion depth of 5, this time is approximately 0.5 seconds.

CONCLUSION

This program demonstrates the results of the reverse ray tracing algorithm for generating realistic images.

This implementation demonstrates the algorithm's ability to construct images close to photorealistic. Tracing is one of the most advanced algorithms for generating realistic images, and the resulting image quality is incomparably better than that obtained with algorithms such as the Z-buffer. However, the computing power required to generate a single frame is much higher than for the Z-buffer. Today, real-time reverse ray tracing is used only for research purposes on extremely powerful computers that are inaccessible to the average user. There are, of course, enthusiasts who create 3D games and other real-time graphics applications based on a reverse ray tracing algorithm, but as a rule they run at extremely low FPS, or all objects in the scene are built from spheres, the surface that is easiest to trace rays against. For this algorithm to become practical for mass-market projects such as 3D games, a noticeable breakthrough in desktop computer hardware is necessary.

Even using the example of computer games, one can easily see the redundancy of the reverse ray tracing algorithm. After all, the player, being captivated by the gameplay, is unlikely to admire the geometrically correct rendering of shadows and reflections of game objects. In this regard, approximate drawing using polygons is a significant advantage today, because it does not require a powerful computer, and the results are close to reality.

It is also believed that the ray tracing algorithm is ideal for images of artificial objects with geometrically simple shapes, such as cars, airplanes and buildings. Rendering objects such as a human face, animal fur or a forest is an extremely difficult task for the algorithm, which further increases the already considerable demands on the computer hardware.

However, research on real-time implementations of the reverse ray tracing algorithm can already be seen today. As a rule, such projects use a car as the scene. Absolute photorealism of the image has already been achieved, and generating a single frame takes very little time. Of course, these projects were implemented on extremely powerful computers, but the day is not far off when such 3D applications will become available to the average user.



Ray tracing techniques are considered to be the most powerful methods for creating realistic images today. The versatility of tracing methods is largely due to the fact that they are based on simple and clear concepts that reflect our experience of perceiving the world around us.

Let's look at how an image is formed. The image is produced by light entering the camera. Let's release many rays from the light sources. Let's call them primary rays. Some of these rays will fly away into free space, and some will hit objects. The rays can be refracted and reflected on them. In this case, part of the beam energy will be absorbed. Refracted and reflected rays form many secondary rays. Then these rays will again be refracted and reflected and form a new generation of rays. Eventually, some of the rays will hit the camera and form an image.

There are algorithms that work according to this scheme, but they are extremely inefficient, since most of the rays emanating from the source never reach the camera. An acceptable picture is obtained only if a very large number of rays are traced, which takes a very long time. This approach is called direct (forward) ray tracing.

The reverse ray tracing method significantly reduces the number of rays that have to be considered. It was developed in the 1980s by Whitted and Kay. In this method, rays are traced not from the sources but from the camera, so the number of traced rays is fixed and equal to the resolution of the picture.

Let's assume that we have a camera and a screen located at a distance h from it. Let's divide the screen into squares. Next, we take turns drawing rays from the camera to the center of each square (primary rays). We find the intersection of each such ray with the scene objects and, among all intersections, select the one closest to the camera. Then, by applying the desired lighting model, an image of the scene can be obtained. This is the simplest ray tracing method; it only removes hidden surfaces.

But we can go further. If we want to simulate phenomena such as reflection and refraction, we need to launch secondary rays from the closest intersection. For example, if the surface reflects light and it is perfectly flat, then it is necessary to reflect the primary ray from the surface and send a secondary ray in this direction. If the surface is uneven, then it is necessary to launch many secondary rays. This is not done in the program, as this will greatly slow down the tracing.

If the object is transparent, then it is necessary to construct a secondary ray such that when refracted it produces the original ray. Some bodies may have the property of diffuse refraction. In this case, not one, but many refracted rays are formed. As with reflection, I neglect this.

Thus, the primary ray, having found an intersection with the object, is generally divided into two rays (reflected and refracted). Then these two rays are divided into two more and so on.

The main reverse ray tracing procedure in my program is the Ray procedure. It has the following structure:

If the ray's generation is equal to the maximum recursion depth, then we return the average brightness for all components. If not, we move on.

We determine the nearest triangle with which the ray intersects.

If there is no such triangle, return the background color; if there is, move on.

If the surface with which the intersection was found is reflective, then we form a reflected ray and call the Ray procedure recursively with the ray generation increased by 1.

If the surface with which the intersection was found refracts, then we form a refracted ray and call the Ray procedure recursively with the ray generation increased by 1.

We determine the final illumination of the pixel, taking into account the location of the sources, the properties of the material, as well as the intensity of the reflected and refracted beam.

I've already discussed a number of limitations of the tracing method when we talked about diffuse refraction and uneven mirrors. Let's look at some others.

Only special objects - light sources - can illuminate the scene. They are point-like and cannot absorb, refract or reflect light.

The properties of a reflective surface consist of two components - diffuse and specular.

With diffuse reflection, only rays from light sources are taken into account. If a source illuminates a point only through a mirror (via a reflected spot of light), the point is considered not to be illuminated.

Specularity is also divided into two components.

reflection - takes into account reflection from other objects (not light sources)

specular - takes into account glare from light sources

The tracing does not take into account the dependences on the wavelength of light:

refractive index

absorption coefficient

reflection coefficient

Since I am not modeling diffuse reflection and diffuse refraction, I cannot obtain indirect illumination of objects by light scattered from other objects. Therefore, a minimum background illumination is introduced. Often this alone significantly improves image quality.

The tracing algorithm allows you to draw very high-quality shadows. This does not require much reworking of the algorithm; only a small addition is needed. When calculating the illumination of a point, a "shadow feeler" ray must be cast towards each of the light sources. The "shadow feeler" is a ray that checks whether there is anything between the point and the source. If an opaque object lies between them, the point is in shadow and that source makes no contribution to the final illumination of the point. If a transparent object lies between them, the intensity of the source is attenuated. Drawing shadows is very time-consuming, so in some situations they are disabled.

My program has the ability to enable image smoothing (antialiasing). To determine the color of a pixel, not one ray but four are launched, and the average color of these rays is taken. If the color of pixel (i, j) is needed, four rays are sent to the points of the screen plane with coordinates (i - 0.25, j - 0.25), (i - 0.25, j + 0.25), (i + 0.25, j - 0.25), (i + 0.25, j + 0.25).
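A minimal sketch of this 2x2 supersampling; tracePixel is a hypothetical helper returning the traced color for fractional screen coordinates, and Color is an illustrative type:

struct Color { float r = 0, g = 0, b = 0; };

Color tracePixel(float, float) { return {}; }   // placeholder stub for the actual per-ray trace

Color antialiasedPixel(int i, int j)
{
    // Four rays through the quarter points of the pixel, averaged (see above).
    Color c[4] = { tracePixel(i - 0.25f, j - 0.25f), tracePixel(i - 0.25f, j + 0.25f),
                   tracePixel(i + 0.25f, j - 0.25f), tracePixel(i + 0.25f, j + 0.25f) };
    Color avg;
    for (const Color& s : c) { avg.r += s.r; avg.g += s.g; avg.b += s.b; }
    avg.r /= 4; avg.g /= 4; avg.b /= 4;
    return avg;
}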
