Which anisotropic filtering is better? What is texture filtering in games

Modern games use more and more graphic effects and technologies to improve the picture, but developers usually don't bother explaining what exactly those options do. If your computer is not the most powerful, you have to sacrifice some of them. Let's look at what the most common graphics settings mean, to better understand how to free up PC resources with minimal damage to the image.

Anisotropic filtering

When a texture is displayed on the monitor at a size other than its original, additional pixels have to be inserted into it or, conversely, extra ones removed. The technique used for this is called filtering.

Bilinear filtering is the simplest algorithm and requires less computing power, but also produces the worst results. Trilinear adds clarity, but still generates artifacts. Anisotropic filtering is considered the most advanced method for eliminating noticeable distortions on objects that are strongly inclined relative to the camera. Unlike the two previous methods, it successfully combats the gradation effect (when some parts of the texture are blurred more than others, and the boundary between them becomes clearly visible). When using bilinear or trilinear filtering, the texture becomes more and more blurry as the distance increases, but anisotropic filtering does not have this drawback.

Given the amount of data being processed (a scene may contain many high-resolution 32-bit textures), anisotropic filtering is particularly demanding on memory bandwidth. This traffic can be reduced primarily through texture compression, which is now used everywhere. Previously, when compression was less common and video memory bandwidth was much lower, anisotropic filtering reduced the frame rate significantly. On modern video cards it has almost no effect on fps.

Anisotropic filtering has only one setting: the filter factor (2x, 4x, 8x, 16x). The higher it is, the sharper and more natural textures look. With a high value, small artifacts are typically visible only on the outermost pixels of strongly tilted textures. Values of 4x and 8x are usually enough to get rid of the lion's share of visual distortion. Interestingly, moving from 8x to 16x costs little even in theory, since additional processing is needed only for a small number of previously unfiltered pixels.

Shaders

Shaders are small programs that can perform certain manipulations with a 3D scene, for example, changing lighting, applying texture, adding post-processing and other effects.

Shaders come in three types: vertex shaders operate on coordinates; geometry shaders can process not only individual vertices but also entire primitives consisting of up to six vertices; pixel shaders work with individual pixels and their parameters.
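To make the idea concrete, here is a minimal, purely illustrative sketch in Python of what a pixel shader conceptually does: a small function executed independently for every pixel of the frame. The function and array names are invented for this example; real pixel shaders are written in GPU shading languages and run massively in parallel on the video card.

```python
import numpy as np

def desaturate_shader(rgb, strength=0.8):
    """Toy 'pixel shader': runs the same math on every pixel independently.
    Real pixel shaders do this on the GPU, massively in parallel."""
    # Luminance-weighted grayscale value for each pixel
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    # Blend the original color toward gray by 'strength'
    return rgb * (1.0 - strength) + gray[..., None] * strength

# A tiny 2x2 "frame" of RGB values in [0, 1]
frame = np.array([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
                  [[0.0, 0.0, 1.0], [1.0, 1.0, 1.0]]])
print(desaturate_shader(frame))
```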

Shaders are mainly used to create new effects. Without them, the set of operations that developers could use in games is very limited. In other words, adding shaders made it possible to obtain new effects that were not included in the video card by default.

Shaders run very efficiently in parallel, which is why modern graphics adapters have so many stream processors, which are also called shaders. The GeForce GTX 580, for example, has as many as 512 of them.

Parallax mapping

Parallax mapping is a modified version of the well-known bumpmapping technique, used to add relief to textures. Parallax mapping does not create 3D objects in the usual sense of the word. For example, a floor or wall in a game scene will appear rough while actually being completely flat. The relief effect here is achieved only through manipulation of textures.

The source object does not have to be flat. The method works on various game objects, but its use is desirable only in cases where the height of the surface changes smoothly. Sudden changes are processed incorrectly and artifacts appear on the object.

Parallax mapping saves a lot of computing resources: if the same objects were modeled with equally detailed 3D geometry, video adapters would not be fast enough to render such scenes in real time.

The effect is most often used on stone pavements, walls, bricks and tiles.

Anti-Aliasing

Before DirectX 8, anti-aliasing in games was done with SuperSampling Anti-Aliasing (SSAA), also known as Full-Scene Anti-Aliasing (FSAA). Its use led to a significant drop in performance, so with the release of DX8 it was abandoned and replaced with Multisample Anti-Aliasing (MSAA). Although this method gives worse results, it is much faster than its predecessor. Since then, more advanced algorithms have appeared, such as CSAA.

Considering that video card performance has grown noticeably over the past few years, both AMD and NVIDIA have returned SSAA support to their accelerators. Even so, it is not usable in modern games, since the frame rate would be very low. SSAA is effective only in older titles, or in current ones with modest settings for the other graphics parameters. AMD implemented SSAA support only for DX9 games, while NVIDIA's SSAA also works in DX10 and DX11 modes.

The principle of anti-aliasing is simple. Before a frame is displayed on the screen, certain information is computed not at the native resolution but at one enlarged by a multiple of two. The result is then downscaled to the required size, and the "staircase" along object edges becomes less noticeable. The higher the original resolution and the anti-aliasing factor (2x, 4x, 8x, 16x, 32x), the fewer jaggies appear on models. MSAA, unlike FSAA, smooths only the edges of objects, which saves a lot of video card resources; however, this technique can leave artifacts inside polygons.
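As a rough illustration of the supersampling idea (render large, then average down), here is a small sketch that assumes the frame is a NumPy array rendered at twice the target resolution; the random array simply stands in for a rendered frame.

```python
import numpy as np

def downsample_2x(img_hi):
    """Average each 2x2 block of the supersampled image into one output pixel.
    img_hi: (2H, 2W, 3) array rendered at twice the target resolution."""
    h, w = img_hi.shape[0] // 2, img_hi.shape[1] // 2
    blocks = img_hi.reshape(h, 2, w, 2, 3)
    return blocks.mean(axis=(1, 3))

# Stand-in for a 4x4 "rendered" frame, downscaled to the 2x2 target resolution
hi = np.random.rand(4, 4, 3)
lo = downsample_2x(hi)
print(lo.shape)  # (2, 2, 3)
```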

Previously, Anti-Aliasing always significantly reduced fps in games, but now it affects the number of frames only slightly, and sometimes has no effect at all.

Tessellation

Tessellation increases the number of polygons in a computer model by an arbitrary factor. To do this, each polygon is divided into several new ones that lie approximately along the original surface. This makes it easy to increase the detail of simple 3D objects. At the same time, however, the load on the computer also increases, and in some cases small artifacts cannot be ruled out.
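A toy sketch of a single tessellation step, assuming the common midpoint-subdivision scheme in which every triangle is split into four smaller ones; real engines drive this with dedicated tessellation hardware and displacement data, which is not shown here.

```python
def subdivide(triangles):
    """One tessellation step: split every triangle into four smaller ones
    by connecting the midpoints of its edges."""
    def mid(p, q):
        return tuple((a + b) / 2.0 for a, b in zip(p, q))
    out = []
    for a, b, c in triangles:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

tri = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
print(len(subdivide(tri)))             # 4 triangles after one step
print(len(subdivide(subdivide(tri))))  # 16 after two steps
```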

At first glance, tessellation can be confused with parallax mapping, but these are completely different effects: tessellation actually changes the geometric shape of an object rather than merely simulating relief. In addition, it can be applied to almost any object, whereas the use of parallax mapping is very limited.

Tessellation technology has been known in cinema since the 80s, but it began to be supported in games only recently, or rather after graphics accelerators finally reached the required level of performance at which it can be performed in real time.

For the game to use tessellation, it requires a video card that supports DirectX 11.

Vertical Sync

V-Sync is the synchronization of game frames with the vertical scan frequency of the monitor. Its essence lies in the fact that a fully calculated game frame is displayed on the screen at the moment the image is updated on it. It is important that the next frame (if it is already ready) will also appear no later and no earlier than the output of the previous one ends and the next one begins.

If the monitor refresh rate is 60 Hz, and the video card has time to render the 3D scene with at least the same number of frames, then each monitor refresh will display a new frame. In other words, at an interval of 16.66 ms, the user will see a complete update of the game scene on the screen.
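The arithmetic behind this is easy to sketch: at 60 Hz each refresh slot lasts 1000/60 ≈ 16.67 ms, and with V-Sync a frame that misses its slot waits for the next one, so the effective rate snaps to 60, 30, 20 fps and so on. A minimal illustration (the function name is ours):

```python
import math

def vsync_fps(render_ms, refresh_hz=60.0):
    """Effective frame rate with V-Sync: a frame that misses a refresh slot
    waits for the next one, so output snaps to refresh/1, refresh/2, ..."""
    slot_ms = 1000.0 / refresh_hz            # ~16.67 ms at 60 Hz
    slots_needed = max(1, math.ceil(render_ms / slot_ms))
    return refresh_hz / slots_needed

for ms in (10, 20, 40):
    print(ms, "ms ->", round(vsync_fps(ms), 1), "fps")
# 10 ms -> 60.0 fps, 20 ms -> 30.0 fps, 40 ms -> 20.0 fps
```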

It should be understood that with vertical synchronization enabled, the fps in the game cannot exceed the monitor's refresh rate. If the frame rate is lower than that value (in our case, below 60 fps), then to avoid performance losses triple buffering should be enabled, in which frames are computed in advance and stored in three separate buffers, allowing them to be sent to the screen more often.

The main task of vertical synchronization is to eliminate screen tearing, the effect of a shifted frame that occurs when the lower part of the display is filled with one frame and the upper part with another, shifted relative to the previous one.

Post-processing

This is the general name for all the effects that are superimposed on a ready-made frame of a fully rendered 3D scene (in other words, on a two-dimensional image) to improve the quality of the final picture. Post-processing uses pixel shaders and is used in cases where additional effects require full information about the whole scene. Such techniques cannot be applied in isolation to individual 3D objects without causing artifacts to appear in the frame.

High dynamic range (HDR)

An effect often used in game scenes with contrasting lighting. If one area of the screen is very bright and another very dark, a lot of detail is lost in each and they look monotonous. HDR adds more gradation to the frame and allows more detail to be brought out in the scene. To use it, one usually has to work with a wider range of brightness than standard 24-bit precision can provide. Preliminary calculations are performed at high precision (64 or 96 bits), and only at the final stage is the image adjusted to 24 bits.
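As a simplified illustration of that last step, here is a sketch that compresses high-precision lighting values into the displayable 8-bit-per-channel range using a basic Reinhard-style curve; this is just one of many possible tone-mapping operators, not the one any particular game uses.

```python
import numpy as np

def tonemap_reinhard(hdr):
    """Compress high-dynamic-range values (which can far exceed 1.0)
    into the displayable 0..255 range using a simple Reinhard-style curve."""
    ldr = hdr / (1.0 + hdr)          # bright areas compressed, dark areas kept
    return (ldr * 255.0).astype(np.uint8)

# High-precision lighting results: a dim area and a very bright highlight
hdr = np.array([0.02, 0.5, 1.0, 8.0, 40.0], dtype=np.float32)
print(tonemap_reinhard(hdr))  # detail survives at both ends of the range
```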

HDR is often used to create an eye-adaptation effect, for example when a character emerges from a dark tunnel into bright light.

Bloom

Bloom is often used in conjunction with HDR, and it also has a fairly close relative, Glow, which is why these three techniques are often confused.

Bloom simulates the effect seen when shooting very bright scenes with ordinary cameras. In the resulting image, intense light appears to take up more volume than it should and to "climb" onto objects even though the light source is behind them. When Bloom is used, additional artifacts in the form of colored lines may appear along object borders.

Film Grain

Grain is an artifact that occurs in analog TV with a poor signal, on old magnetic videotapes, and in photographs (in particular, digital images taken in low light). Players often disable this effect because it spoils the picture rather than improves it. To see the difference, you can run Mass Effect with and without it. In some horror games, such as Silent Hill, noise on the screen, on the contrary, adds atmosphere.

Motion Blur

Motion Blur is the effect of blurring the image when the camera moves quickly. It works well when a scene needs more dynamics and speed, which is why it is especially in demand in racing games. In shooters, the use of blur is not always welcomed. Proper use of Motion Blur can add a cinematic feel to what is happening on the screen.

The effect will also help, if necessary, to disguise the low frame rate and add smoothness to the gameplay.

SSAO

Ambient occlusion is a technique used to make a scene photorealistic by creating more believable lighting of the objects in it, which takes into account the presence of other objects nearby with their own characteristics of light absorption and reflection.

Screen Space Ambient Occlusion is a modified version of Ambient Occlusion that also simulates indirect lighting and shading. SSAO appeared because, at the current level of GPU performance, full Ambient Occlusion could not be used to render scenes in real time. The higher performance of SSAO comes at the cost of lower quality, but even that is enough to improve the realism of the picture.

SSAO works according to a simplified scheme, but it has many advantages: the method does not depend on the complexity of the scene, does not use RAM, can function in dynamic scenes, does not require frame pre-processing and loads only the graphics adapter without consuming CPU resources.

Cel shading

Games with the Cel shading effect began to be made in 2000, and first of all they appeared on consoles. On PCs, this technique became truly popular only a couple of years later, after the release of the acclaimed shooter XIII. With the help of Cel shading, each frame practically turns into a hand-drawn drawing or a fragment from a children's cartoon.

Comics are created in a similar style, so the technique is often used in games related to them. Among the latest well-known releases is the shooter Borderlands, where Cel shading is visible to the naked eye.

Distinctive features of the technique are the use of a limited set of colors and the absence of smooth gradients. The name of the effect comes from the word cel (celluloid), the transparent film on which animated films are drawn.

Depth of field

Depth of field is the distance between the near and far edges of space within which all objects will be in focus, while the rest of the scene will be blurred.

To a certain extent, depth of field can be observed simply by focusing on an object close in front of your eyes. Anything behind it will be blurred. The opposite is also true: if you focus on distant objects, everything in front of them will turn out blurry.

You can see the effect of depth of field in an exaggerated form in some photographs. This is the degree of blur that is often attempted to be simulated in 3D scenes.

In games using Depth of field, the gamer usually feels a stronger sense of presence. For example, when looking somewhere through the grass or bushes, he sees only small fragments of the scene in focus, which creates the illusion of presence.

Performance Impact

To find out how enabling certain options affects performance, we used the gaming benchmark Heaven DX11 Benchmark 2.5. All tests were carried out on an Intel Core 2 Duo E6300 with a GeForce GTX 460 at a resolution of 1280x800 (except for vertical sync, where the resolution was 1680x1050).

As already mentioned, anisotropic filtering has virtually no effect on the number of frames. The difference between anisotropy disabled and 16x is only 2 frames, so we always recommend setting it to maximum.

Anti-aliasing in Heaven Benchmark reduced fps more significantly than we expected, especially in the heaviest 8x mode. However, since 2x is enough to noticeably improve the picture, we recommend choosing this option if playing at higher levels is uncomfortable.

Tessellation, unlike the previous parameters, can take an arbitrary value in each individual game. In Heaven Benchmark, the picture deteriorates significantly without it, while at the maximum level it, on the contrary, becomes a little unrealistic. You should therefore set intermediate values: moderate or normal.

A higher resolution was chosen for vertical sync so that fps would not be limited by the screen's refresh rate. As expected, the frame rate throughout almost the entire test with synchronization enabled stayed firmly at around 20 or 30 fps. This is because frames are displayed in step with the screen refresh, and at 60 Hz this can happen not on every refresh but only on every second one (60/2 = 30 frames/s) or every third (60/3 = 20 frames/s). When V-Sync was turned off, the frame rate increased, but characteristic artifacts appeared on the screen. Triple buffering did not have any positive effect on the smoothness of the scene. This may be because the video card driver has no option to force buffering off, so normal deactivation is ignored by the benchmark and it still uses this function.

If Heaven Benchmark were a game, then at maximum settings (1280x800; AA - 8x; AF - 16x; Tessellation Extreme) it would be uncomfortable to play, since 24 frames is clearly not enough for this. With minimal quality loss (1280×800; AA - 2x; AF - 16x, Tessellation Normal) you can achieve a more acceptable 45 fps.

I hope this article will not only help you optimize games for your computer but also broaden your horizons. See also the article about the real influence of frame rate (FPS) on the perception of a game.

Texture filtering.

Filtering solves the problem of determining the color of a pixel based on existing texels from a texture image.

The simplest texture mapping method is called point sampling (single point sampling). Its essence is that for each pixel making up the polygon, one texel is selected from the texture image, the one closest to the center of the light spot. An error arises because the color of a pixel is actually determined by several texels, yet only one was selected.

This method is very inaccurate, and the result of its use is the appearance of irregularities. Whenever pixels are larger than texels, a flickering effect is observed. It occurs when part of a polygon is far enough from the viewpoint that many texels are superimposed onto the space occupied by one pixel. Note that if the polygon is very close to the viewpoint and texels are larger than pixels, another type of image degradation is observed: the image starts to look blocky. This effect occurs when the texture itself may be large enough, but the limited screen resolution prevents the original image from being represented properly.

The second method, bilinear filtering (Bi-Linear Filtering), uses interpolation. To determine the texels that should be used for interpolation, the basic shape of the light spot, a circle, is used; in essence, the circle is approximated by 4 texels. Bilinear filtering is a technique for eliminating image distortions such as the "blockiness" of textures when they are enlarged. When an object slowly rotates or moves (approaches or recedes), pixels may visibly "jump" from one place to another, i.e. blockiness appears. To avoid this effect, bilinear filtering uses a weighted average of the color values of the four adjacent texels to determine the color of each pixel and, as a result, the color of the applied texture. The resulting pixel color is determined after three mixing operations: first the colors of two pairs of texels are mixed, and then the two resulting colors are mixed.
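The three mixing operations described above are easy to express in code. Below is a small sketch (NumPy, function name is ours) that samples a texture bilinearly at fractional texel coordinates:

```python
import numpy as np

def sample_bilinear(texture, u, v):
    """Bilinear sample at texel-space coordinates (u, v):
    mix two pairs of texels horizontally, then mix the two results."""
    h, w = texture.shape[:2]
    u = min(max(u, 0.0), w - 1.0)          # clamp to the texture
    v = min(max(v, 0.0), h - 1.0)
    x0, y0 = int(u), int(v)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = u - x0, v - y0
    top = texture[y0, x0] * (1 - fx) + texture[y0, x1] * fx   # first mix
    bot = texture[y1, x0] * (1 - fx) + texture[y1, x1] * fx   # second mix
    return top * (1 - fy) + bot * fy                          # third mix

tex = np.array([[0.0, 1.0],
                [1.0, 0.0]])
print(sample_bilinear(tex, 0.5, 0.5))  # 0.5: halfway between the four texels
```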

The main disadvantage of bilinear filtering is that the approximation is performed correctly only for polygons that are parallel to the screen or viewpoint. If a polygon is turned at an angle (which is the case 99% of the time), the approximation is wrong, since an ellipse should be approximated instead.

"Depth aliasing" errors arise from the fact that objects further away from the viewpoint appear smaller on the screen. If an object moves and moves away from the viewing point, the texture image superimposed on the shrinking object becomes more and more compressed. Eventually, the texture image applied to the object becomes so compressed that rendering errors occur. These rendering errors are especially problematic in animation, where such motion artifacts cause flickering and slow-motion effects in parts of the image that should be stationary and stable.

The following rectangles with bilinear texturing can serve as an illustration of the described effect:

Fig. 13.29. Shading an object using bilinear filtering. "Depth aliasing" artifacts cause several squares to merge into one.

To avoid these errors and to simulate the fact that objects at a distance appear less detailed than those closer to the viewpoint, a technique known as mip-mapping is used. In short, mip-mapping is the application of textures with different degrees (levels) of detail, where a texture of the required detail is selected depending on the distance to the viewpoint.

A mip-texture (mip-map) consists of a set of pre-filtered and scaled images. In the image associated with a mip-map level, each pixel is the average of four pixels from the previous, higher-resolution level. Hence, the image associated with each mip level is four times smaller in size than the previous one.

Fig. 13.30. Images associated with each mip-map level of the wave-like texture.

From left to right we have mip-map levels 0, 1, 2, and so on. The smaller the image gets, the more detail is lost, until near the end nothing is visible except a blurry smear of gray pixels.
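Generating such a chain of levels is straightforward to sketch: each level averages 2x2 texel blocks of the previous one, so every level holds a quarter as many texels. A minimal NumPy illustration (not how any particular engine or driver actually builds its mip-maps):

```python
import numpy as np

def build_mip_chain(texture):
    """Each mip level averages 2x2 blocks of the previous one,
    so every level holds a quarter as many texels."""
    levels = [texture]
    while min(levels[-1].shape[:2]) > 1:
        prev = levels[-1]
        h, w = prev.shape[0] // 2, prev.shape[1] // 2
        nxt = prev[:h * 2, :w * 2].reshape(h, 2, w, 2).mean(axis=(1, 3))
        levels.append(nxt)
    return levels

tex = np.random.rand(8, 8)
for i, lvl in enumerate(build_mip_chain(tex)):
    print("mip", i, lvl.shape)   # (8, 8), (4, 4), (2, 2), (1, 1)
```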

The Level of Detail, or simply LOD, is used to determine which mip-map level (i.e. which level of detail) should be selected for applying the texture to an object. The LOD must correspond to the number of texels mapped onto a pixel. For example, if texturing occurs at a ratio close to 1:1, the LOD is 0, which means the highest-resolution mip level is used. If 4 texels cover one pixel, the LOD is 1 and the next, lower-resolution mip level is used. Usually, the farther an object is from the viewpoint, the higher the LOD value it is assigned.
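A sketch of the LOD arithmetic described above: the LOD is roughly the base-2 logarithm of how many texels fall on a pixel along one axis, so a 1:1 ratio gives level 0 and four texels per pixel (two per axis) gives level 1. The function below is illustrative only:

```python
import math

def mip_lod(texels_per_pixel_axis):
    """LOD = log2 of how many texels map onto one pixel along one axis.
    1 texel/pixel -> LOD 0 (sharpest level); 2 texels/pixel along each axis
    (4 per pixel area) -> LOD 1, and so on."""
    return max(0.0, math.log2(max(texels_per_pixel_axis, 1e-6)))

for ratio in (1, 2, 4, 8):
    print(ratio, "texels per pixel axis ->", mip_lod(ratio))
# 1 -> 0.0, 2 -> 1.0, 4 -> 2.0, 8 -> 3.0
```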

While mip-texturing solves the problem of depth-aliasing errors, its use can cause other artifacts to appear. As the object moves further and further from the observation point, a transition occurs from a low mip-map level to a high one. When an object is in a transition state from one mip-map level to another, a special type of visualization error appears, known as “mip-banding” - banding or lamination, i.e. clearly visible boundaries of transition from one mip-map level to another.

Fig. 13.31. A rectangular strip made of two triangles textured with a wave-like image; "mip-banding" artifacts are indicated by red arrows.

The problem of "mip-banding" errors is especially acute in animation, due to the fact that the human eye is very sensitive to displacements and can easily notice the place of a sharp transition between filtering levels when moving around an object.

Trilinear filtering is a third method, which removes the mip-banding artifacts arising from mip-texturing. With trilinear filtering, the color of a pixel is determined from the average color of eight texels, four taken from each of two adjacent mip levels, and the pixel color is obtained after seven mixing operations. Trilinear filtering makes it possible to display a textured object with smooth transitions from one mip level to the next, which is achieved by determining the LOD through interpolation between two adjacent mip-map levels. This solves most of the problems associated with mip-texturing and with errors caused by incorrect calculation of scene depth ("depth aliasing").
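Reusing the sample_bilinear and build_mip_chain sketches from above, trilinear sampling can be illustrated as two bilinear lookups in adjacent mip levels blended by the fractional LOD (eight texels, seven mixes); again, this is a simplified sketch, not a faithful reproduction of GPU behavior.

```python
import numpy as np

def sample_trilinear(mips, u, v, lod):
    """Bilinear-sample the two mip levels bracketing 'lod', then blend
    between them by the fractional part of the LOD (8 texels, 7 mixes)."""
    lo = int(lod)
    hi = min(lo + 1, len(mips) - 1)
    f = lod - lo
    # Texel coordinates shrink by a factor of 2 per mip level
    a = sample_bilinear(mips[lo], u / 2 ** lo, v / 2 ** lo)
    b = sample_bilinear(mips[hi], u / 2 ** hi, v / 2 ** hi)
    return a * (1 - f) + b * f

mips = build_mip_chain(np.random.rand(8, 8))
print(sample_trilinear(mips, 3.2, 4.7, 1.4))
```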

Fig. 13.32. The MIP-map pyramid

An example of using trilinear filtering is given below. Here again the same rectangle is used, textured with a wave-like image, but with smooth transitions from one mip level to the next through the use of trilinear filtering. Note that there are no noticeable rendering errors.

Fig. 13.33. A rectangle textured with a wave-like image, rendered using mip-texturing and trilinear filtering.

There are several ways to generate MIP textures. One is to simply prepare them in advance using graphics packages such as Adobe Photoshop. Another is to generate them on the fly, i.e. during program execution. Pre-prepared MIP textures add roughly 30% to the disk space required for textures in a game's base installation, but they allow more flexible control over how they are created and make it possible to add various effects and extra detail to different MIP levels.

It turns out that trilinear mipmapping is the best that can be?

Of course not. It can be seen that the problem is not only in the ratio of pixel and texel sizes, but also in the shape of each of them (or, to be more precise, in the ratio of shapes).

The mip-texturing method works best for polygons that directly face the viewpoint. Polygons that are oblique to the viewpoint, however, bend the applied texture so that regions of the texture image of various non-square, elongated shapes map onto a single pixel. The mip-texturing method does not take this into account, with the result that the texture image is over-blurred, as if the wrong texels had been used. To solve this problem, more of the texels making up the texture must be sampled, and those texels must be chosen taking into account the "mapped" shape of the pixel in texture space. This method is called anisotropic filtering. Normal mip-texturing is called "isotropic" (uniform) because we always filter square regions of texels together. Anisotropic filtering means that the shape of the sampled texel region changes depending on the circumstances.
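Very roughly, anisotropic filtering can be thought of as taking several trilinear samples spread along the long axis of the pixel's footprint in texture space and averaging them. The toy sketch below (building on the earlier helper functions, with made-up parameter names) conveys only the idea; real hardware implementations are considerably more sophisticated.

```python
import numpy as np

def sample_anisotropic(mips, u, v, du, dv, max_samples=8):
    """Average several trilinear samples spread along the pixel's footprint
    direction (du, dv) in texel space. A toy approximation only."""
    length = max((du * du + dv * dv) ** 0.5, 1e-6)
    n = min(max_samples, max(1, int(round(length))))
    # Ideally the LOD would come from the footprint's short axis;
    # here we simply use the finest level for clarity.
    lod = 0.0
    total = 0.0
    for i in range(n):
        t = (i + 0.5) / n - 0.5          # sample positions from -0.5 to +0.5
        total += sample_trilinear(mips, u + du * t, v + dv * t, lod)
    return total / n

mips = build_mip_chain(np.random.rand(8, 8))   # helper from the mip-map sketch
print(sample_anisotropic(mips, 3.0, 3.0, du=4.0, dv=0.0))
```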

Due to numerous questions and disputes related to FPS in tests for video cards presented on our website, we decided to dwell on this issue in more detail and tell you about game settings.

Everyone knows that in modern games there are enough graphics settings to improve picture quality or improve performance in the game itself. Let's look at the basic settings that are present in almost all games.

Screen resolution

Perhaps this is one of the main parameters affecting both picture quality and game performance. It depends solely on the laptop's display panel and on the game's support for a given resolution (from 640x480 to 1920x1080). Everything here is simple and proportional: the higher the resolution, the sharper the picture and the greater the load on the system, and vice versa.

Graphics quality

Almost every game has its own standard graphics presets to choose from. Usually these are "low", "medium" and "high", and some games also offer an "ultra" option. Each preset is a bundle of individual settings (texture quality, anti-aliasing, anisotropic filtering, shadows and many others), and the user can select the profile that best suits his PC configuration. I think everything is clear here: the better the graphics settings, the more realistic the game looks and, of course, the higher the demands on the device. Below you can watch the video and compare the picture quality across all profiles.
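Conceptually, such a profile is nothing more than a named bundle of individual options. The sketch below illustrates the idea with entirely made-up setting names and values; no real game exposes exactly these.

```python
# Purely illustrative preset bundles -- every name and value is invented;
# real games define their own sets of options.
PRESETS = {
    "low":    {"texture_quality": "low",  "shadows": "off",  "anti_aliasing": "off",
               "anisotropic_filtering": 2,  "tessellation": False},
    "medium": {"texture_quality": "med",  "shadows": "low",  "anti_aliasing": "2x",
               "anisotropic_filtering": 4,  "tessellation": False},
    "high":   {"texture_quality": "high", "shadows": "high", "anti_aliasing": "4x",
               "anisotropic_filtering": 8,  "tessellation": True},
    "ultra":  {"texture_quality": "high", "shadows": "soft", "anti_aliasing": "8x",
               "anisotropic_filtering": 16, "tessellation": True},
}

def apply_preset(name, overrides=None):
    """Start from a preset bundle and optionally override individual options."""
    settings = dict(PRESETS[name])
    settings.update(overrides or {})
    return settings

print(apply_preset("high", {"anti_aliasing": "2x"}))
```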


Next, we will look in more detail at the settings in the games individually.

Texture quality

This setting is responsible for the resolution of textures in the game. The higher the texture resolution, the clearer and more detailed the picture you see, and accordingly the load on the GPU will be greater.

Shadow quality

This setting adjusts the detail of the shadows. In some games, shadows can be turned off altogether, which will give a significant performance boost, but the picture will not be as rich. At high settings the shadows will be more realistic and soft.

Effect quality

This parameter affects the quality and intensity of effects such as smoke, explosions, gunshots, dust and many others. It affects different games differently: in some, the difference between low and high settings is very hard to notice, while in others it is obvious. The impact of this parameter on performance depends on how well the effects are optimized in the game.

Environmental quality

A parameter responsible for the geometric complexity of objects in the surrounding game world, as well as for their detail (the difference is especially noticeable on distant objects). At low settings, objects (houses, trees, cars, etc.) lose detail: distant objects become almost flat, rounded shapes are no longer quite round, and almost every object loses some small details.

Landscape coverage

In some games it is indicated as “Grass Density” or has other similar names. Responsible for the amount of grass, bushes, branches, stones and other debris on the ground. Accordingly, the higher the parameter, the more saturated the earth looks with different objects.

Anisotropic filtering

When a texture is not rendered at its original size, extra pixels are inserted into it or extra pixels are removed; this is what filtering is for. There are three types of filtering: bilinear, trilinear and anisotropic. The simplest and least demanding is bilinear filtering, but it also produces the worst results. Trilinear filtering will not give particularly good results either: although it adds clarity, it also generates artifacts.

The best filtering is anisotropic, which noticeably eliminates distortion on textures that are strongly tilted relative to the camera. On modern video cards this parameter has virtually no effect on performance, yet it significantly improves the clarity and natural look of textures.

Smoothing

The principle of anti-aliasing is as follows: before the image is displayed on the screen, it is computed not at its native resolution but at a doubled one. On output, the image is reduced to the required size, and irregularities along object edges become less noticeable. The larger the original image and the anti-aliasing factor (x2, x4, x8, x16), the less unevenness will be noticeable on objects. Anti-aliasing itself is needed to get rid of the "staircase effect" (jagged edges along textures) as much as possible.

There are different types of anti-aliasing; FSAA and MSAA are the ones most often found in games. Full Screen Anti-Aliasing (FSAA) removes jagged edges across the entire full-screen image. The disadvantage of this anti-aliasing is that the whole image is processed, which of course significantly improves image quality but requires a lot of computing power from the GPU.

Multisample anti-aliasing (MSAA), unlike FSAA, smoothes only the edges of objects, which leads to a slight deterioration in graphics, but at the same time saves a huge portion of processing power. So unless you have a top-end gaming graphics card, it's best to use MSAA.

SSAO (Screen Space Ambient Occlusion)

Translated, the name means "occlusion of ambient light in screen space". It is an imitation of global illumination that increases the realism of the picture by creating more lifelike lighting. It loads only the GPU and can significantly reduce FPS on weak graphics adapters.

Motion blur

Also known as Motion Blur. This is an effect that blurs the image when the camera moves quickly. Gives the scene more dynamics and speed (often used in racing). Increases the load on the GPU, thereby reducing the number of FPS.

Depth of field

An effect for creating the illusion of presence by blurring objects depending on their position relative to focus. For example, when talking to a certain character in a game, you see him clearly, but the background is blurry. The same effect can be observed if you concentrate your gaze on an object located nearby; more distant objects will be blurred.

Vertical Sync (V-Sync)

Synchronizes the frame rate in the game with the monitor's vertical refresh rate. With V-Sync enabled, the maximum FPS equals the monitor's refresh rate. If the frame rate in your game is lower than the monitor's refresh rate, you should enable triple buffering, in which frames are prepared in advance and stored in three separate buffers. The advantage of vertical sync is that it gets rid of unwanted jerks during sudden jumps in FPS.

There are some drawbacks, for example, in new demanding games there may be a significant drop in performance. Also in dynamic shooters or online games, V-Sync can only do harm.

Conclusion

The above outlines the basic, but not all, settings in games. It is worth recalling that each game has its own level of optimization and its own set of settings. In some cases, games with better graphics will run faster on your laptop than unoptimized games with lower requirements. Most games allow you to use both ready-made settings and manually set each individual parameter. Some of the effects discussed above are supported only in new DirectX 11 games, and in older ones with DirectX 9 support they simply are not present.

Performance tests:

And now that we have become familiar with the basic concepts of filtering and texture smoothing, we can move on to practice.

Computer configuration:
Processor: Intel Core 2 Quad Q6600 @ 3200MHz (400x8, 1.3125V)
Video card: Palit Nvidia GeForce 8800GT
Motherboard: Asus P5Q PRO TURBO
Memory: 2x2048MB DDR2 Corsair XMS2 @ 1066MHz, 5-5-5-15
Power supply: Corsair CMPSU-850HXEU 850W
CPU cooler: Zalman CNPS9700 LED
OS: Windows 7 Ultimate x64
Video driver version: Nvidia 195.62 x64

The main subject of our testing today is the very old but no less famous Counter-Strike: Source, one of the few truly widespread games that offers a huge range of anti-aliasing and filtering settings. Despite the age of the engine (2004), this game can still load even the most modern platform. Here is the rich range of settings presented to the user:

Anti-aliasing and filtering tests were carried out in the built-in benchmark, at a resolution of 1280x1024. All other settings were taken as maximum, as in the screenshot above. In order to bring the result as close to the truth as possible, each parameter was tested three times, after which the arithmetic mean of the resulting values ​​was found.

So, here is what we got:

The results were quite unexpected. Coverage sampling technology (CSAA), which by definition should consume less resources than MSAA, shows a completely opposite picture here. There can be a great many reasons for this phenomenon. First of all, it is necessary to take into account that in many respects the performance when turning on anti-aliasing depends on the GPU architecture. And the optimization of various technologies of the game itself and the driver version play an equally important role. Therefore, the results when using other video cards, or even a different driver version, may be completely different.

Tests with anti-aliasing disabled (marked in blue for ease of perception) showed an approximately equal picture, which indicates a slight difference in the load on the video card.

In addition, there is a clear correspondence between the FPS indicators, when using the same anti-aliasing method, for AF 8x and AF 16x. At the same time, the difference ranges from 1 to 4 fps (with the exception of MSAA 8x, where the difference is 11 fps). This suggests that using 16x filtering can be very useful if you need to improve picture quality without a significant impact on performance.

Still, it should be noted that it is simply unrealistic to get the same FPS values directly in the game, since many scenes turn out to be much heavier, especially with many players.

Test pictures:

So, what do we have? We have learned how different settings configurations affect performance. "But why is all this needed?", you may ask. To improve the quality of the displayed image, I will answer. But is there actually any improvement? To answer this question, take a look at the following screenshots:

Bilinear / MSAA 2x | Trilinear / MSAA 2x | AF 2x / MSAA 2x
AF 2x / CSAA 8x | AF 2x / MSAA 8x | AF 2x / CSAA 16x
AF 2x / CSAA 16xQ | AF 8x / MSAA 2x | AF 8x / CSAA 8x
AF 8x / MSAA 8x | AF 8x / CSAA 16x | AF 8x / CSAA 16xQ
AF 16x / MSAA 2x | AF 16x / CSAA 8x | AF 16x / MSAA 8x
AF 16x / CSAA 16x | AF 16x / CSAA 16xQ | Bilinear / CSAA 16xQ

As you can see, there is simply no significant difference between the combinations "above" AF 8x / MSAA 8x (CSAA 8x), yet they bring a noticeable hit to performance, especially when Coverage Sampling Anti-Aliasing is used.

Conclusions:

Surely among those reading this article there are players of CS:S, HL2 and other games based on the Source engine; they will find it more interesting and instructive than others. However, the purpose of this piece was only to describe modern technologies that help improve the visual perception of games, with the tests serving to show the stated theory in practice.

Of course, to ensure the reliability of the readings, performance tests should have been carried out both on other video chips and on additional games.

Be that as it may, returning to the topic of this article, everyone chooses what settings to play with. And I will not give advice or recommendations, since they are doomed to failure in advance. I hope the above theory and tests will help you become more familiar with the described technologies.

By Stormcss

