Should you enable anisotropic filtering? Setting up an Nvidia graphics card for gaming


Hi all! Today's article is all about fine-tuning a video card for high performance in computer games. Friends, admit it: after installing the video card driver, you once opened the "Nvidia Control Panel", saw unfamiliar words there (DSR, shaders, CUDA, clock pulse, SSAA, FXAA and so on) and decided never to go there again. Nevertheless, it is possible and even necessary to understand all this, because performance directly depends on these settings. There is a misconception that everything in this sophisticated panel is configured correctly by default; unfortunately, that is far from the case, and experiments show that correct settings are rewarded with a significant increase in frame rate. So get ready: we are going to dig into threaded optimization, anisotropic filtering and triple buffering. In the end you will not regret it, and you will be rewarded with higher FPS in games.

Setting up an Nvidia graphics card for gaming

Game development gains momentum every day, much like the exchange rate of the main monetary unit in Russia, and with it the relevance of optimizing hardware, software and the operating system has risen sharply. It is not always possible to keep your steel stallion in shape through constant financial injections, so today we will talk about raising a video card's performance through its detailed settings. I have written repeatedly about the importance of installing the right video driver, so I think we can skip that step: I am sure you all know perfectly well how to do it, and you have long since had it installed.

So, in order to get to the video driver management menu, right-click anywhere on the desktop and select “Nvidia Control Panel” from the menu that opens.

Then, in the window that opens, go to the "Manage 3D settings" tab.

Here we will configure various parameters that affect the display of 3D images in games. It is not difficult to understand that in order to get maximum performance from the video card you will have to significantly reduce the image quality, so be prepared for this.

So, the first item is "CUDA - GPUs". Here you can choose which video processors CUDA applications are allowed to use. CUDA (Compute Unified Device Architecture) is a parallel computing architecture used by all modern GPUs to increase computing performance.

Next point " DSR - Smoothness“We skip it because it is part of the “DSR - Degree” item settings, and it, in turn, needs to be disabled and now I will explain why.

DSR (Dynamic Super Resolution) is a technology that renders games at a higher resolution and then scales the result down to your monitor's resolution. To understand why this technology was invented at all, and why we do not need it for maximum performance, let me give an example. You have surely noticed that in games small details such as grass and foliage often flicker or ripple in motion. This happens because the lower the resolution, the smaller the number of sample points for displaying fine details. DSR corrects this by increasing the number of points (the higher the resolution, the greater the number of sample points). I hope that is clear. For maximum performance this technology does not interest us, as it consumes quite a lot of system resources. And with DSR disabled, the smoothness adjustment I wrote about just above becomes unavailable. In short, we turn it off and move on.
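To make the idea concrete, here is a minimal Python/NumPy sketch of what DSR does conceptually, not Nvidia's actual resampling code: render above native resolution, then filter blocks of pixels down to the monitor's resolution. The simple box filter here is an assumption of this example; the driver's "Smoothness" slider actually controls a configurable filter.

```python
import numpy as np

def downscale(image: np.ndarray, factor: int) -> np.ndarray:
    """Average factor x factor blocks of a frame rendered above native
    resolution into single output pixels (a simple box filter)."""
    h, w, c = image.shape
    blocks = image.reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))

frame_4k = np.random.rand(2160, 3840, 3)  # stand-in for a frame rendered at 4x DSR
native = downscale(frame_4k, 2)           # back to 1920x1080 (2x per axis = 4x the pixels)
```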

Next comes anisotropic filtering. Anisotropic filtering is a computer graphics algorithm created to improve the quality of textures that are tilted relative to the camera; with it, textures in games become sharper. Compared with its predecessors, bilinear and trilinear filtering, anisotropic filtering is the most voracious in terms of video memory consumption. The item has only one setting: the filter coefficient. It is not hard to guess that for our purposes this function must be disabled.

Next item: vertical sync. This synchronizes the image with the monitor's refresh rate. Enabling it gives the smoothest possible gameplay (image tearing during sharp camera turns is eliminated), but frame drops below the monitor's refresh rate often occur. To get the maximum number of frames per second, it is better to disable this option.

Virtual reality pre-rendered frames. A function for virtual reality headsets; it does not interest us, since VR is still far from everyday use by ordinary gamers. We leave it at the default: use the 3D application setting.

Ambient occlusion (background lighting shading). Makes scenes look more realistic by softening the ambient light intensity of surfaces that are obscured by nearby objects. The function does not work in all games and is very resource intensive, so we turn it off.

Shader caching. When this feature is enabled, the CPU saves shaders compiled for the GPU to disk. If such a shader is needed again, the GPU takes it straight from disk instead of forcing the CPU to recompile it. It is not hard to guess that disabling this option will hurt performance.

Maximum number of pre-rendered frames. The number of frames the CPU can prepare before the GPU processes them. The higher the value, the better.

Multi-frame anti-aliasing (MFAA). One of the anti-aliasing technologies used to eliminate “jaggedness” at the edges of images. Any anti-aliasing technology (SSAA, FXAA) is very demanding on the GPU (the only question is the degree of gluttony). Turn it off.

Threaded optimization. Enabling this feature lets an application use several CPU cores at once. If an old application misbehaves, try setting the "Auto" mode or disabling the function altogether.

Power management mode. Two options are available: adaptive mode and maximum performance mode. In adaptive mode, power consumption depends directly on GPU load; this mode mainly serves to reduce power consumption. In maximum performance mode, as you might guess, the highest possible level of performance and power consumption is maintained regardless of GPU load. We pick the second.

Anti-aliasing – FXAA, Anti-aliasing – gamma correction, Anti-aliasing – parameters, Anti-aliasing – transparency, Anti-aliasing – mode. I already wrote about anti-aliasing a little earlier. Turn everything off.

Triple buffering. An extension of double buffering: an image output method that avoids or reduces artifacts (image distortion) and, in simple terms, increases performance. BUT! It only works in conjunction with vertical sync, which, as you remember, we disabled earlier. Therefore we disable this parameter as well; it is useless to us.

Texture filtering.

Filtering solves the problem of determining the color of a pixel based on existing texels from a texture image.

The simplest method of applying textures is called point sampling (single point-sampling). Its essence is that for each pixel that makes up the polygon, the one texel from the texture image that is closest to the center of the light spot is selected. An error arises because the color of a pixel is really determined by several texels, yet only one was selected.
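As a rough illustration, here is what point sampling boils down to in code; a minimal Python sketch, where the clamping behavior is an assumption of this example rather than of any particular hardware:

```python
import numpy as np

def sample_nearest(texture: np.ndarray, u: float, v: float) -> np.ndarray:
    """Point sampling: return the single texel whose center lies closest
    to the continuous texture coordinate (u, v), measured in texels."""
    h, w, _ = texture.shape
    x = min(max(int(round(u)), 0), w - 1)  # clamp to the texture edges
    y = min(max(int(round(v)), 0), h - 1)
    return texture[y, x]
```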

This method is very inaccurate, and the result of its use is the appearance of irregularities. Whenever pixels are larger than texels, a flickering effect is observed. It occurs when part of the polygon is far enough from the observation point that many texels are superimposed on the space occupied by one pixel. Note that if the polygon is very close to the observation point and texels are larger than pixels, another type of image-quality degradation is observed: the image starts to look blocky. This effect occurs when the texture may be large enough, but the limited available screen resolution prevents the original image from being properly represented.

The second method, bilinear filtering (Bi-Linear Filtering), relies on interpolation. To determine which texels should be used for interpolation, the basic shape of the light spot, a circle, is used; essentially, the circle is approximated by 4 texels. Bilinear filtering is a technique for eliminating image distortions such as the "blockiness" of textures when they are enlarged. When slowly rotating or moving an object (approaching/receding), "jumping" of pixels from one place to another may be noticeable, i.e. blockiness appears. To avoid this effect, bilinear filtering is used: it takes a weighted average of the color values of four adjacent texels to determine the color of each pixel and, as a result, the color of the overlaid texture. The resulting pixel color is determined by three mixing operations: first the colors of two pairs of texels are mixed, and then the two resulting colors are mixed.
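A minimal Python sketch of the three mixing operations just described (the helper names are mine, not from any graphics API):

```python
import numpy as np

def lerp(a, b, t):
    """Mix a and b in proportion t (linear interpolation)."""
    return a * (1.0 - t) + b * t

def sample_bilinear(texture: np.ndarray, u: float, v: float) -> np.ndarray:
    """Weighted average of the four texels surrounding (u, v)."""
    h, w, _ = texture.shape
    x0 = min(max(int(np.floor(u)), 0), w - 1)
    y0 = min(max(int(np.floor(v)), 0), h - 1)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = u - x0, v - y0                              # fractional position inside the cell
    top    = lerp(texture[y0, x0], texture[y0, x1], fx)  # mix one pair of texels
    bottom = lerp(texture[y1, x0], texture[y1, x1], fx)  # mix the other pair
    return lerp(top, bottom, fy)                         # third mix: combine the two results
```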

The main disadvantage of bilinear filtering is that the approximation is performed correctly only for polygons that are parallel to the screen or observation point. If the polygon is rotated at an angle (and this happens in 99% of cases), the wrong approximation is used, since an ellipse should be approximated instead.

"Depth aliasing" errors arise from the fact that objects further away from the viewpoint appear smaller on the screen. If an object moves and moves away from the viewing point, the texture image superimposed on the shrinking object becomes more and more compressed. Eventually, the texture image applied to the object becomes so compressed that rendering errors occur. These rendering errors are especially problematic in animation, where such motion artifacts cause flickering and slow-motion effects in parts of the image that should be stationary and stable.

The following rectangles with bilinear texturing can serve as an illustration of the described effect:

Fig. 13.29. Shading an object using the bilinear filtering method. "Depth aliasing" artifacts appear, causing several squares to merge into one.

To avoid errors and simulate the fact that objects at a distance appear less detailed than those closer to the viewing point, a technique known as mip-mapping is used. In short, mip-mapping is the application of textures with different degrees or levels of detail: depending on the distance to the observation point, a texture with the appropriate detail is selected.

A mip-texture (mip-map) consists of a set of pre-filtered and scaled images. In the image associated with a mip-map layer, a pixel is represented as the average of four pixels from the previous layer with a higher resolution. Hence, the image associated with each mip-map level is four times smaller in size than the previous one.
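A sketch of how such a chain of pre-filtered images can be built, using a simple 2x2 box filter in Python (real tools may use better filters):

```python
import numpy as np

def build_mip_chain(texture: np.ndarray) -> list:
    """Each new level averages 2x2 pixel blocks of the previous one, so its
    image is four times smaller in area than the level before it."""
    levels = [texture]
    while levels[-1].shape[0] > 1 and levels[-1].shape[1] > 1:
        prev = levels[-1]
        h, w, c = prev.shape
        blocks = prev[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2, c)
        levels.append(blocks.mean(axis=(1, 3)))
    return levels

mips = build_mip_chain(np.random.rand(256, 256, 3))
print([m.shape[:2] for m in mips])  # (256, 256), (128, 128), ..., (1, 1)
```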

Fig. 13.30. Images associated with each mip-map level of the wavy texture.

From left to right we have mip-map levels 0, 1, 2, etc. The smaller the image gets, the more detail is lost, until near the end nothing is visible except a blurry smear of gray pixels.

Level of Detail, or simply LOD, is used to determine which mip-map level (or level of detail) should be selected to apply a texture to an object. LOD must correspond to the number of texels overlaid per pixel. For example, if texturing occurs with a ratio close to 1:1, the LOD is 0, meaning the mip-map level with the highest resolution is used. If 4 texels cover one pixel, the LOD is 1 and the next mip level, with a lower resolution, is used. As an object moves away from the observation point, its LOD value therefore increases.
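In code, the rule "LOD rises by one each time the texel-to-pixel ratio quadruples" is just a base-4 logarithm; a sketch under that assumption:

```python
import math

def lod_for(texels_per_pixel: float) -> float:
    """1 texel per pixel -> LOD 0; 4 -> LOD 1; 16 -> LOD 2, and so on."""
    return max(0.0, 0.5 * math.log2(max(texels_per_pixel, 1.0)))

print(lod_for(1), lod_for(4), lod_for(16))  # 0.0 1.0 2.0
```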

While mip-texturing solves the problem of depth-aliasing errors, its use can cause other artifacts. As an object moves further and further from the observation point, a transition occurs from a lower mip-map level to a higher one. When an object is in a transitional state between two mip-map levels, a special type of visualization error appears, known as "mip-banding": banding or lamination, i.e. clearly visible boundaries between one mip-map level and the next.

Fig. 13.31. The rectangular band consists of two triangles textured with a wave-like image; "mip-banding" artifacts are indicated by red arrows.

The problem of "mip-banding" errors is especially acute in animation, due to the fact that the human eye is very sensitive to displacements and can easily notice the place of a sharp transition between filtering levels when moving around an object.

Trilinear filtering is a third method, which removes the mip-banding artifacts that arise when mip-texturing is used. With trilinear filtering, the color of a pixel is computed as the average of eight texels, four from each of two adjacent mip-maps; the pixel color is determined after seven mixing operations. Trilinear filtering displays a textured object with smooth transitions from one mip level to the next, which is achieved by determining the LOD through interpolation of two adjacent mip-map levels. This solves most of the problems associated with mip-texturing and with errors due to incorrect calculation of scene depth ("depth aliasing").
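Putting the pieces together, here is a sketch of trilinear sampling that reuses the `lerp`, `sample_bilinear` and `build_mip_chain` helpers from the sketches above; the halving of texture coordinates per level is an assumption of this toy model:

```python
def sample_trilinear(mips: list, u: float, v: float, lod: float):
    """Bilinearly sample the two mip levels bracketing `lod` (4 + 4 = 8 texels),
    then mix the two results: seven mixing operations in total."""
    lo = int(min(max(lod, 0.0), len(mips) - 1))
    hi = min(lo + 1, len(mips) - 1)
    t = lod - lo                                   # position between the two levels
    c_lo = sample_bilinear(mips[lo], u / 2**lo, v / 2**lo)
    c_hi = sample_bilinear(mips[hi], u / 2**hi, v / 2**hi)
    return lerp(c_lo, c_hi, t)                     # blend adjacent mip levels
```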

Fig. 13.32. MIP-map pyramid

An example of using trilinear filtering is given below. Here again the same rectangle is used, textured with a wave-like image, but with smooth transitions from one mip level to the next through the use of trilinear filtering. Note that there are no noticeable rendering errors.

Fig. 13.33. A rectangle textured with a wave-like image, rendered on the screen using mip-texturing and trilinear filtering.

There are several ways to generate MIP textures. One is to prepare them in advance using graphics packages like Adobe Photoshop. Another is to generate them on the fly, i.e. during program execution. Pre-prepared MIP textures cost roughly an extra 30% of disk space for textures in the game's base installation, but allow more flexible control over their creation and make it possible to add various effects and extra details at different MIP levels.
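As a quick sanity check on that figure: each mip level is a quarter the area of the previous one, so the extra storage is 1/4 + 1/16 + 1/64 + … = 1/3, i.e. about 33%, which is consistent with the roughly 30% quoted above.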

So is trilinear mip-mapping the best there can be?

Of course not. It turns out the problem is not only in the ratio of pixel and texel sizes, but also in the shape of each (or, to be more precise, in the ratio of their shapes).

The mip-texturing method works best for polygons that directly face the viewpoint. However, polygons that are oblique with respect to the observation point bend the overlaid texture, so that pixels can map onto regions of the texture image of various, non-square shapes. The mip-texturing method does not take this into account, and the result is a texture image that is too blurry, as if the wrong texels had been used. To solve this problem, you need to sample more of the texels that make up the texture, and you need to select those texels taking into account the "mapped" shape of the pixel in texture space. This method is called anisotropic filtering. Ordinary mip-texturing is called "isotropic" (uniform) because we always filter square regions of texels together. Anisotropic filtering means that the shape of the texel region we use changes depending on the circumstances.
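One common way to approximate this, sketched below, is to take several probes spaced along the long axis of the pixel's footprint in texture space and average them. The tap count and axis parameters are illustrative assumptions of this example, and `sample_trilinear` is the helper from the earlier sketch:

```python
def sample_anisotropic(mips, u, v, axis_u, axis_v, lod, taps=8):
    """Average `taps` trilinear probes spaced along the footprint's major
    axis (axis_u, axis_v) through (u, v), instead of one square probe."""
    total = None
    for i in range(taps):
        t = (i + 0.5) / taps - 0.5              # spread probes from -0.5 to +0.5
        c = sample_trilinear(mips, u + t * axis_u, v + t * axis_v, lod)
        total = c if total is None else total + c
    return total / taps
```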

To understand the difference between filtering algorithms, you must first understand what filtering is trying to do. Your screen has a specific resolution and is made up of pixels; the resolution is determined by their number. Your 3D board must determine the color of each of these pixels. The basis for determining pixel colors are the texture images superimposed on polygons located in three-dimensional space. Texture images are made up of pixels called texels; essentially, texels are pixels of a 2D image that are overlaid on a 3D surface. The main question is this: which texel (or texels) determines the color of a pixel on the screen?

Imagine the following problem: suppose your screen is a slab with a lot of holes (let's assume the pixels are round). Each hole is a pixel: looking through it, you see what color it has relative to the three-dimensional scene behind the slab. Now imagine a beam of light passing through one of these holes and hitting the textured polygon behind it. If the polygon is parallel to the screen (i.e., our imaginary slab with holes), the light beam hitting it forms a round light spot (see Fig. 1). Now, using your imagination again, let the polygon rotate around its axis; the simplest reasoning tells you that the shape of the light spot changes, and instead of round it becomes elliptical (see Figs. 2 and 3). You are probably wondering what this spot of light has to do with determining the color of a pixel. Simply put, all the texels located in this spot of light determine the color of the pixel. Everything discussed here is the basic knowledge you need in order to understand the various filtering algorithms.

You can look at the different shapes of the light spot using the following examples:


Fig. 1

Fig. 2

Fig. 3

1. Point Sampling

Point Sampling means selecting a single point. This is the simplest way to determine the color of a pixel from a texture image: you just select the texel closest to the center of the light spot. Of course, this introduces an error, since the color of a pixel is determined by several texels and you selected only one; you also ignore the fact that the shape of the light spot may change.

The main advantage of this filtering method is the low requirements for memory bandwidth, because to determine the color of a pixel you need to select just one texel from texture memory.

The main disadvantage is that when the polygon is closer to the screen (or viewing point), the number of pixels becomes greater than the number of texels, resulting in blockiness and an overall deterioration in image quality.

However, the main purpose of using filtering is not to improve quality while reducing the distance from the observation point to the polygon, but to get rid of the effect of incorrectly calculating the depth of the scene (depth aliasing).

2. Bi-Linear Filtering

Bi-Linear Filtering, or bilinear filtering, relies on interpolation. For our example, to determine the texels to interpolate, the basic shape of the light spot, a circle, is used; essentially, the circle is approximated by 4 texels. This filtering method is significantly better than point sampling because it partly takes the shape of the light spot into account and uses interpolation. If a polygon gets too close to the screen or viewpoint, more texels are needed for interpolation than are actually available; the result is a nice-looking blurred image, but that is merely a side effect.

The main disadvantage of bilinear filtering is that the approximation is correct only for polygons that are parallel to the screen or observation point. If the polygon is turned at an angle (as in 99% of cases), the approximation is wrong: you are approximating with a circle where you should be approximating an ellipse. Another problem is that bilinear filtering requires reading 4 texels from texture memory to determine the color of each displayed pixel, so the memory bandwidth requirement quadruples compared to point sampling.

3. Tri-Linear filtering

Tri-Linear filtering, or trilinear filtering, is a symbiosis of mip-texturing and bilinear filtering. Essentially, you perform bilinear filtering at two mip levels, which gives you two color values, one per mip level. The color of the pixel to be displayed is then determined by interpolating between the colors from the two mip levels. Essentially, mip levels are pre-calculated smaller versions of the original texture, which means we get a better approximation of the texels located in the light spot.

This technique provides better filtering, but offers only a small advantage over bilinear filtering. The memory bandwidth requirement is double that of bilinear filtering, since you need to read 8 texels from texture memory. Mip-mapping provides a better approximation (effectively using more of the texels located in the light spot) thanks to the pre-calculated mip textures.

4. Anisotropic filtering

Anisotropic filtering. To really get good results, you must remember that all the texels in the light spot determine the color of the pixel, and that the shape of the light spot changes as the position of the polygon relative to the observation point changes. Until now we have used only 4 texels instead of all the texels covered by the light spot; this means all the preceding filtering techniques produce distorted results when the polygon lies further from the screen or observation point, because you are not using enough information. In fact, you over-filter in one direction and under-filter in all the others. The only advantage of all the filtering described above is that, on approaching the viewing point, the image appears less blocky (although that is just a side effect). So, to achieve the best quality, we must use all the texels covered by the light spot and average their values. However, this seriously taxes memory bandwidth, which may simply be insufficient, and performing such an averaged sampling is a non-trivial task.

You can use a variety of filters to approximate the shape of the light spot as an ellipse for several possible angles of the polygon relative to the viewpoint. There are filtering techniques that use from 16 to 32 texels from a texture to determine the color of a pixel. True, such filtering requires significantly greater memory bandwidth, which is almost always impossible in existing visualization systems without expensive memory architectures. In visualization systems using tiles 1, memory bandwidth is used much more economically, which makes anisotropic filtering practical. Visualization with anisotropic filtering provides the best image quality, thanks to better depth of detail and more accurate representation of textures overlaid on polygons that are not parallel to the screen or viewing point.

1 A tile is a fragment of an image, usually 32 by 32 pixels in size; sorting is carried out over these regions to determine which of the polygons falling into a given tile are visible. Tile technology is implemented in VideoLogic/NEC chipsets.


Help in preparing this material was provided by Kristof Beets (PowerVR).

With the advent of 3D games, problems appeared that did not exist in 2D games: now a three-dimensional image has to be displayed on a flat monitor. If an object is parallel to the screen plane and right next to it, there is no problem: one pixel corresponds to one texel (a texel is a pixel of a two-dimensional image overlaid on a 3D surface). But what if the object is tilted or far away? Then several texels fall on each pixel, and since the monitor has a limited number of pixels, the color of each has to be calculated from several texels through a certain process: filtering.


To make things easier to understand, imagine that each pixel is a square "hole" in the monitor, that we shoot "rays of light" from our eyes, and that the texels sit on a square grid behind the monitor. If we place the grid parallel to the monitor immediately behind it, the light from one pixel covers exactly one texel. Now let's move the grid away: the spot of light from a pixel now covers more than one texel. Now let's rotate the grid and we get the same thing: the spot from one pixel covers many texels. But a pixel can have only one color, and if many texels fall into it, we need an algorithm to determine that color; it is called texture filtering.


This is the simplest filtering algorithm: for the pixel's color we take the color of the texel closest to the center of the light spot from the pixel. The advantage of this method is obvious: it puts the least load on the video card. There are plenty of disadvantages: the color of one central texel can differ significantly from the color of the dozens or even hundreds of other texels that fall into the pixel's spot. Also, the shape of the spot itself can change significantly while its center stays in the same place, in which case the pixel's color will not change at all. And the main disadvantage is the "blockiness" problem: when there are few texels per pixel (that is, an object is next to the player), this filtering method fills a fairly large part of the image with a single color, leading to clearly visible "blocks" of the same color on the screen. The final picture quality is simply terrible:


So it is not surprising that such filtering is no longer used today.


As video cards grew in power, game developers went further: if taking one texel per pixel looks bad, then let's take the average color of 4 texels and call it bilinear filtering. On the one hand this is better: blockiness disappears. But enemy number two arrives: blurriness of the image near the player, because proper interpolation there would need more texels than four.

But that is not the main problem: bilinear filtering works well when the object is parallel to the screen, since then you can always select 4 texels and get an "average" color. But 99% of textures are tilted relative to the player, so we are effectively approximating 4 trapezoid-shaped regions as 4 squares, which is incorrect. And the more the texture is tilted, the lower the color accuracy and the stronger the blur:


Okay, said the game developers, since 4 texels are not enough, let's take twice four, and for a more accurate color match let's use mip-texturing. As I already wrote above, the further a texture is from the player, the more texels there are per pixel and the harder it is for the video card to process the image. MIP texturing means storing the same texture at several resolutions: for example, if the original texture is 256x256, copies at 128x128, 64x64 and so on, down to 1x1, are stored in memory:


Now, for filtering, not only the texture itself is taken but also its mipmap: depending on whether the texture is further from or closer to the player, a smaller or larger texture mipmap is chosen, and on it the 4 texels closest to the pixel's center are taken and bilinearly filtered. Then the 4 texels closest to the pixel in the original texture are taken, and again an "average" color is obtained. After that, the "average" of the mipmap's and the original texture's average colors is taken and assigned to the pixel: this is how the trilinear filtering algorithm works. As a result, it loads the video card somewhat more than bilinear filtering (the mipmap also has to be processed), but the image quality is better:


As you can see, trilinear filtering is noticeably better than bilinear, let alone point filtering, but the image still blurs at long distances. The fuzziness comes from not taking into account that the texture can be tilted relative to the player, and this is exactly the problem anisotropic filtering solves. Briefly, it works like this: a MIP texture is taken, oriented across the viewing direction, and its color values are averaged with the colors of a certain number of texels along the viewing direction. The number of texels varies from 16 (for x2 filtering) to 128 (for x16). Put simply, instead of a square filter (as in bilinear filtering) an elongated one is used, which makes it possible to pick the right color for a screen pixel much more accurately. Since there can be a million or more pixels on the screen, and each texel weighs at least 32 bits (32-bit color), anisotropic filtering demands enormous video memory bandwidth: tens of gigabytes per second. These memory demands are reduced by texture compression and caching, but on video cards with DDR memory or a 64-bit bus the difference between trilinear and x16 anisotropic filtering can still reach 10-15% of fps; the picture after such filtering, however, is the best:

Texturing is the most important element of today's 3D applications; without it, many 3D models lose much of their visual appeal. However, the process of applying textures to surfaces is not free of artifacts, nor of appropriate methods for suppressing them. In the world of 3D games you constantly encounter specialized terms like "mip mapping" and "trilinear filtering" that refer precisely to these methods.

A special case of the aliasing effect discussed earlier is the aliasing effect of textured surfaces, which, unfortunately, cannot be removed by the multi- or supersampling methods described above.

Imagine a large, almost infinite black-and-white chessboard. Suppose we draw this board on the screen and look at it at a slight angle. For sufficiently distant areas of the board, the size of the cells inevitably shrinks to one pixel or less. This is so-called optical texture reduction (minification). A "struggle" begins between texture pixels for possession of screen pixels, leading to unpleasant flickering, one of the varieties of the aliasing effect. Increasing the screen resolution (real or effective) helps only a little, because for sufficiently distant objects the texture details still become smaller than the pixels.

On the other hand, the parts of the board closest to us take up a large screen area, and you can see huge pixels of the texture. This is called optical texture magnification (magnification). Although this problem is not so acute, it also needs to be dealt with to reduce the negative effect.

To solve texturing problems, so-called texture filtering is used. If you look at the process of drawing a three-dimensional object with an overlaid texture, you can see that calculating a pixel's color goes "in reverse": first, the screen pixel where a certain point of the object will be projected is found, and then all the texture pixels falling within that point are examined. Selecting texture pixels and combining (averaging) them to obtain the final screen pixel color is called texture filtering.

During texturing, each screen pixel is assigned a coordinate within the texture, and this coordinate is not necessarily an integer. Moreover, a pixel corresponds to a certain area of the texture image, which may contain several texture pixels. We will call this area the image of the pixel in the texture. For nearby parts of our board, the screen pixel becomes considerably smaller than a texture pixel and effectively sits inside it (the image is contained within one texture pixel). For distant parts, on the contrary, a large number of texture points fall into each pixel (the image contains several texture points). The pixel image can have different shapes and, in general, is an arbitrary quadrilateral.

Let's consider various methods of texture filtering and their variations.

Nearest neighbor

In this, the simplest, method, the pixel color is simply chosen to be the color of the nearest corresponding texture pixel. This method is the fastest, but also the least quality. In fact, this is not even a special filtering method, but simply a way to select at least some texture pixel that corresponds to a screen pixel. It was widely used before the advent of hardware accelerators, whose widespread use made it possible to use better methods.

Bilinear filtering

Bilinear filtering finds the four texture pixels closest to the current point on the screen and the resulting color is determined as the result of mixing the colors of these pixels in some proportion.

Nearest neighbor filtering and bilinear filtering work quite well when, firstly, the degree of texture reduction is small, and secondly, when we see the texture at a right angle, i.e. frontally. What is this connected with?

If we consider, as described above, the “image” of a screen pixel in the texture, then in the case of a strong reduction it will include a lot of texture pixels (up to all pixels!). Also, if we look at the texture from an angle, this image will be greatly elongated. In both cases, the described methods will not work well, since the filter will not "capture" the corresponding texture pixels.

To solve these problems, so-called mip mapping and anisotropic filtering are used.

Mip mapping

With significant optical reduction, a point on the screen can correspond to quite a lot of texture pixels. This means that the implementation of even the best filter will require quite a lot of time to average all points. However, the problem can be solved by creating and storing versions of the texture in which the values ​​are averaged in advance. And at the rendering stage, look for the desired version of the original texture for the pixel and take the value from it.

The term mipmap comes from the Latin multum in parvo: much in little. When this technology is used, in addition to the texture image, the graphics accelerator's memory stores a set of its reduced copies, each new one half the previous in each dimension. I.e., for a 256x256 texture, images of 128x128, 64x64, etc., down to 1x1, are additionally stored.

Next, an appropriate mipmap level is selected for each pixel (the larger the pixel's "image" in the texture, the smaller the mipmap taken). The values in the mipmap can then be averaged bilinearly or with the nearest neighbor method (as described above), with additional filtering between adjacent mipmap levels. This type of filtering is called trilinear. It gives very high-quality results and is widely used in practice.


Figure 9. Mipmap levels

However, the problem with the "elongated" image of the pixel in the texture remains. This is precisely why our board looks very fuzzy from a distance.

Anisotropic filtering

Anisotropic filtering is a texture filtering process that specifically takes into account the case of an elongated pixel image in the texture. In fact, instead of a square filter (as in bilinear filtering), an elongated one is used, which allows the desired color for a screen pixel to be selected more accurately. This filtering is used together with mipmapping and produces very high-quality results. However, there are drawbacks: the implementation of anisotropic filtering is quite complex, and when it is enabled the drawing speed drops significantly. Anisotropic filtering is supported by recent generations of NVidia and ATI GPUs, with varying levels of anisotropy: the higher the level, the more "elongated" pixel images can be processed correctly and the better the quality.

Comparison of filters

The result is the following: to suppress texture aliasing artifacts, hardware supports several filtering methods that differ in quality and speed. The simplest is the nearest neighbor method (which does not really fight artifacts but simply fills in the pixels). Nowadays, bilinear filtering together with mip mapping, or trilinear filtering, is used most often. Recently, GPUs have begun to support the highest-quality filtering mode: anisotropic filtering.

Bump mapping

Bump mapping is a type of graphic special effects that is designed to create the impression of “rough” or bumpy surfaces. Recently, the use of bump mapping has become almost a standard for gaming applications.

The main idea behind bump mapping is to use textures to control how light interacts with an object's surface. This lets you add small details without increasing the triangle count. In nature, we distinguish small surface irregularities by their shadows: any bump is light on one side and dark on the other, while the eye may not be able to detect the actual change in surface shape. Bump mapping exploits this effect: one or more additional textures are applied to the object's surface and used to calculate the illumination of its points. I.e., the surface geometry does not change at all; only the illusion of irregularities is created.

There are several methods of bump mapping, but before we look at them, we need to figure out how to actually define bumps on the surface. As mentioned above, additional textures are used for this, and they can be of different types:

Normal map. In this case, each pixel of the additional texture stores a vector perpendicular to the surface (normal), encoded as a color. Normals are used to calculate illumination.

Displacement map. A displacement map is a grayscale texture where each pixel stores a displacement from the original surface.

These textures are prepared by 3D model designers along with the geometry and base textures. There are also programs that can produce normal or displacement maps automatically.

Pre-calculated bump mapping

Textures that store information about the object's surface relief are created in advance, before the rendering stage, by darkening some texture points (and thus the surface itself) of the object and highlighting others. During drawing, the usual texture is then used.

This method requires no algorithmic tricks while drawing, but unfortunately surface illumination does not change when light sources move or the object moves, and without that a truly convincing simulation of an uneven surface is impossible. Similar methods are used for static parts of a scene, often for level architecture and the like.

Bump mapping using embossing (Emboss bump mapping)

This technology was used on the first graphics processors (NVidia TNT, TNT2, GeForce). A displacement map is created for the object. Drawing occurs in two stages. In the first stage, the displacement map is combined with itself pixel by pixel: a second copy is shifted a short distance in the direction of the light source and the difference is taken. The effect is this: positive difference values correspond to illuminated pixels, negative ones to pixels in shadow. This information is used to change the colors of the underlying texture pixels accordingly.
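A minimal Python/NumPy sketch of the two-pass difference just described (the wrap-around shift via np.roll is a simplification of this example, not of the original hardware):

```python
import numpy as np

def emboss_light(height: np.ndarray, shift_x: int, shift_y: int) -> np.ndarray:
    """Difference between a displacement (height) map and a copy of itself
    shifted towards the light: positive values read as lit slopes, negative
    as shadowed ones. The result can then modulate the base texture's colors."""
    shifted = np.roll(height, (shift_y, shift_x), axis=(0, 1))
    return height - shifted
```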

Emboss bump mapping does not require hardware pixel-shader support, but it does not work well for relatively large surface irregularities. Also, objects do not always look convincing; much depends on the angle at which you view the surface.

Pixel bump mapping

Pixel bump mapping is currently the pinnacle of these technologies. Here everything is computed as honestly as possible: the pixel shader receives a normal map as input, from which the normal value for each point of the object is taken. The normal value is then compared with the direction of the light source, and a color value is computed.
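The per-pixel computation amounts to a dot product between the stored normal and the light direction; a minimal sketch, assuming the normal map has already been decoded from colors into unit vectors:

```python
import numpy as np

def shade_normal_map(normals: np.ndarray, light_dir) -> np.ndarray:
    """Diffuse per-pixel lighting: brightness = max(0, N . L) for every pixel
    of an H x W x 3 array of surface normals."""
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)                   # normalize the light direction
    return np.clip(normals @ l, 0.0, 1.0)       # dot product at each pixel
```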

This technology is supported in equipment starting with GeForce2 level video cards.

So, we have seen how the peculiarities of human visual perception can be used to improve the quality of images created by 3D games. Happy owners of the latest generations of NVidia GeForce and ATI Radeon video cards (and not only the latest) can play with some of the described effects themselves, since the anti-aliasing and anisotropic filtering settings are available in the driver options. These and other methods, which lie beyond the scope of this article, are being successfully implemented by game developers in new products. In general, life is getting better. And there is more to come!
