
Planetary landscape


It's hard to deny that the landscape is an integral part of most open-world computer games. The traditional method of building the terrain around the player is the following: we take a mesh that represents a plane and, for each vertex of this mesh, apply a displacement along the plane normal by a value specific to that vertex. In simple words, we have a single-channel texture of 256 by 256 pixels and a plane mesh. For each vertex we take a value from the texture based on its coordinates on the plane. Now we simply move the vertex along the normal to the plane by the resulting value (Fig. 1).

image
Pic. 1 Height map + plane = terrain
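
As a minimal illustration of this classic approach (my own sketch, not code from any real engine), suppose the plane lies in XZ with the normal pointing along Y and the height map is a simple array of bytes; all names here are made up for the example:

#include <cstdint>
#include <vector>

struct Vertex { float x, y, z; };

// Displace every vertex of a flat (mapSize x mapSize) grid along the plane
// normal (0, 1, 0) by the value sampled from a single-channel height map.
void DisplacePlane(std::vector<Vertex> &grid, const std::vector<uint8_t> &heightMap,
                   int mapSize, float maxHeight)
{
    for (int y = 0; y < mapSize; y++)
        for (int x = 0; x < mapSize; x++) {
            float h = heightMap[y * mapSize + x] / 255.0f; // value in [0, 1]
            grid[y * mapSize + x].y += h * maxHeight;      // move along the normal
        }
}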

Why does this work? If we imagine that the player is on the surface of a sphere whose radius is extremely large relative to the size of the player, then the curvature of the surface can be neglected and a plane can be used. But what if we do not neglect the fact that we are on a sphere? In this article I would like to share with the reader my experience of constructing such landscapes.
1. Sector
Obviously, it is not reasonable to build the landscape for the whole sphere at once - most of it will not be visible. Therefore, we need a certain minimal area of space - a building block from which the relief of the visible part of the sphere will be assembled. I will call it a sector. How can we get it? Look at Fig. 2a. The green cell is our sector. Next, we construct six grids, each of which is a face of the cube (Fig. 2b). Now let's normalize the coordinates of the vertices that form the grids (Fig. 2c).

image
Pic.2

As a result, we get a cube projected onto the sphere, where the sector lies on one of its faces. Why does this work? Consider an arbitrary point of the grid as a vector from the origin. What is normalization of a vector? It is the transformation of a given vector into a vector of the same direction but with unit length. The process is as follows: first we find the length of the vector in the Euclidean metric according to the Pythagorean theorem:

$|v| = \sqrt{x^2 + y^2 + z^2}$

Then we divide each of the vector components by this value

$\hat{v} = \left(\frac{x}{|v|}, \frac{y}{|v|}, \frac{z}{|v|}\right)$

Now let's ask ourselves: what is a sphere? A sphere is the set of points equidistant from a given point. The equation of a sphere looks like this:

$(x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2 = R^2$

where x0, y0 and z0 are the coordinates of the center of the sphere, and R is its radius. In our case, the center of the sphere is the origin and the radius is equal to one. We substitute the known values and take the square root of both sides of the equation. The result is the following:

$\sqrt{x^2 + y^2 + z^2} = 1$

The last transformation literally tells us: "In order to belong to the sphere, the length of the vector must be equal to one". This is exactly what we achieved by normalization.

And what if the sphere has an arbitrary center and radius? A point belonging to it can be found using the following equation:

$pS = C + pNorm \cdot R$

where pS is the point on the sphere, C is the center of the sphere, pNorm is the previously normalized vector, and R is the radius of the sphere. In simple words, the following happens here: "we move from the center of the sphere towards the point on the grid by the distance R". Since each vector has unit length, in the end all points are equidistant from the center of the sphere by its radius, which makes the sphere equation hold.
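
To make this concrete, here is a small C++ sketch (an illustration of the equations above, not the article's actual code) that takes a point of a cube-face grid centered at the origin, normalizes it and places it on a sphere with center C and radius R:

#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 Normalize(const Vec3 &v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// pS = C + pNorm * R: move from the sphere center C towards the grid point
// by the distance R.
Vec3 ProjectOntoSphere(const Vec3 &gridPoint, const Vec3 &C, float R)
{
    Vec3 pNorm = Normalize(gridPoint);
    return {C.x + pNorm.x * R, C.y + pNorm.y * R, C.z + pNorm.z * R};
}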
2. Managing
We need to get the group of sectors that are potentially visible from the viewpoint. But how do we do that? Suppose we have a sphere with a center at some point. We also have a sector located on the sphere and a point P located in space near the sphere - the viewpoint. Now we construct two vectors - one directed from the center of the sphere to the center of the sector, the other from the center of the sphere to the viewpoint. Look at Fig. 3 - the sector can be visible only if the absolute value of the angle between these vectors is less than 90 degrees.

image
Pic.3 a - angle less than 90 - the sector is potentially visible. b - an angle larger than 90 - the sector is not visible

How to get this angle? To do this, we need to use the scalar product of the vectors. For the three-dimensional case, it is calculated as follows:

$(a, b) = a_x \cdot b_x + a_y \cdot b_y + a_z \cdot b_z$

A scalar product has a distributive property:

$(a, b + c) = (a, b) + (a, c)$

Earlier we defined the equation for the length of a vector - now we can say that the length of a vector is equal to the square root of the scalar product of the vector with itself. Or vice versa: the scalar product of a vector with itself is equal to the square of its length.

Now let's look at the law of cosines. One of its two formulations looks like this (Fig. 4):

$c^2 = a^2 + b^2 - 2ab\cos\alpha$

image
Fig.4 the law of cosines

If we take the lengths of our vectors as a and b, then the angle alpha is what we are looking for. But how do we get the value of c? Look: if we subtract a from b, we get a vector directed from a to b, and since a vector is characterized only by its direction and length, we can place its beginning at the end of the vector a. From this we can say that c is equal to the length of the vector b - a. So we have:

$|b - a|^2 = |a|^2 + |b|^2 - 2|a||b|\cos\alpha$

express the squares of lengths as scalar products

$(b - a, b - a) = (a, a) + (b, b) - 2|a||b|\cos\alpha$

expand the parentheses using the distributive property

$(b, b) - 2(a, b) + (a, a) = (a, a) + (b, b) - 2|a||b|\cos\alpha$

simplify slightly

$-2(a, b) = -2|a||b|\cos\alpha$

and finally, dividing both sides of the equation by minus two, we obtain

$(a, b) = |a||b|\cos\alpha$

This is another property of the scalar product. In our case, we normalize the vectors so that their lengths are equal to one. We do not need to calculate the angle itself - the cosine value is enough. If it is less than zero, we can safely say that this sector does not interest us.
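
A possible C++ sketch of this test (reusing the Vec3 and Normalize helpers from the earlier sketch; the names are illustrative): both vectors are normalized, so their scalar product is the cosine of the angle between them, and a negative value means the sector can be discarded.

float Dot(const Vec3 &a, const Vec3 &b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// sphereCenter - center of the sphere, sectorCenter - center of the sector,
// viewPos - position of the observer.
bool SectorPotentiallyVisible(const Vec3 &sphereCenter, const Vec3 &sectorCenter, const Vec3 &viewPos)
{
    Vec3 toSector = Normalize({sectorCenter.x - sphereCenter.x,
                               sectorCenter.y - sphereCenter.y,
                               sectorCenter.z - sphereCenter.z});
    Vec3 toViewer = Normalize({viewPos.x - sphereCenter.x,
                               viewPos.y - sphereCenter.y,
                               viewPos.z - sphereCenter.z});

    // cos(angle) >= 0 means the angle is at most 90 degrees
    return Dot(toSector, toViewer) >= 0.0f;
}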
3. Grid
It's time to think about how to draw the primitives. As I said earlier, the sector is the main building block in our scheme, therefore for each potentially visible sector we will draw a grid whose primitives will form the landscape. Each of its cells can be displayed using two triangles. Because adjacent cells share edges, the values of most triangle vertices are repeated in two or more cells. To avoid duplicating data in the vertex buffer, we fill an index buffer. When indices are used, the graphics pipeline uses them to determine which vertices of the vertex buffer form each primitive (Fig. 5). The topology I selected is triangle list (D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST).

image
Fig. 5 Visual display of indices and primitives
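
As an illustration of the paragraph above, here is a hedged C++ sketch of filling an index buffer for a grid of cellsPerSide by cellsPerSide cells with triangle list topology (the function name and the winding order are my assumptions, not the article's code):

#include <cstdint>
#include <vector>

// Two triangles (six indices) per cell; the grid has (cellsPerSide + 1)^2 vertices.
std::vector<uint32_t> BuildGridIndices(uint32_t cellsPerSide)
{
    const uint32_t vertsPerSide = cellsPerSide + 1;
    std::vector<uint32_t> indices;
    indices.reserve(cellsPerSide * cellsPerSide * 6);

    for (uint32_t y = 0; y < cellsPerSide; y++)
        for (uint32_t x = 0; x < cellsPerSide; x++) {
            uint32_t tl = y * vertsPerSide + x; // top-left vertex of the cell
            uint32_t tr = tl + 1;               // top-right
            uint32_t bl = tl + vertsPerSide;    // bottom-left
            uint32_t br = bl + 1;               // bottom-right

            // first triangle of the cell
            indices.push_back(tl); indices.push_back(tr); indices.push_back(bl);
            // second triangle of the cell
            indices.push_back(tr); indices.push_back(br); indices.push_back(bl);
        }

    return indices;
}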

Creating a separate vertex buffer for each sector is too expensive. It is much more efficient to use one buffer with coordinates in grid space, where x is the column and y is the row. But how do we get a point on the sphere from them? The sector is a square area starting at a certain point S. All sectors have the same edge length - let's call it SLen. The grid covers the entire area of the sector and has the same number of rows and columns, so to find the length of a cell edge we can construct the following equation:

$CLen \cdot MSize = SLen$

where CLen is the length of the cell edge and MSize is the number of rows or columns of the grid. Divide both sides by MSize to get CLen:

$CLen = \frac{SLen}{MSize}$
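
A sketch of what the shared grid-space vertex buffer and the cell size computation could look like (illustrative names; the offsets onto the cube face are applied later in the vertex shader):

#include <cstdint>
#include <vector>

struct GridVertex { float x, y; }; // x - column, y - row

// One vertex buffer in grid space, shared by all sectors.
std::vector<GridVertex> BuildGridVertices(uint32_t cellsPerSide)
{
    const uint32_t vertsPerSide = cellsPerSide + 1;
    std::vector<GridVertex> verts;
    verts.reserve(vertsPerSide * vertsPerSide);

    for (uint32_t y = 0; y < vertsPerSide; y++)
        for (uint32_t x = 0; x < vertsPerSide; x++)
            verts.push_back({(float)x, (float)y});

    return verts;
}

// CLen = SLen / MSize
float CellLength(float SLen, float MSize)
{
    return SLen / MSize;
}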

Let's go further. A point on the face of the cube to which the sector belongs can be expressed as a linear combination of two unit-length vectors - let's call them V1 and V2. We have the following equation (see Fig. 6):

$P = S + V_1 \cdot CLen \cdot x + V_2 \cdot CLen \cdot y$

image
Fig.6 Visualization of the formation of a point on a grid

To obtain a point on the sphere, we use the equation derived earlier

$pS = C + \frac{P}{|P|} \cdot R$
4. Height
Everything we have achieved so far looks little like a landscape. It's time to add what will make it one - the difference in heights. Let us imagine that we have a sphere of unit radius centered at the origin, and a set of points {P0, P1, P2 ... PN} located on this sphere. Each of these points can be represented as a unit vector from the origin. Now imagine that we have a set of values, each of which is the length of a particular vector (Fig. 7).

image

I will store these values ​​in a two-dimensional texture. We need to find the relationship between the coordinates of the texture pixel and the vector-point on the sphere. Let's get started.

In addition to Cartesian coordinates, a point on a sphere can also be described by a spherical coordinate system. In this case, its coordinates consist of three elements: the azimuth angle, the polar angle and the shortest distance from the origin to the point. The azimuth angle is the angle between the X axis and the projection onto the XZ plane of the ray from the origin to the point. It can take values from zero to 360 degrees. The polar angle is the angle between the Y axis and the ray from the origin to the point. It is also called the zenith angle. It takes values from zero to 180 degrees. (see Fig. 8)

image
Fig.8 Spherical coordinates

For the transition from a Cartesian system to a spherical system, I use the following equations (I assume that the Y axis is directed upwards):

$d = \sqrt{x^2 + y^2 + z^2}, \quad a = \arccos\left(\frac{y}{d}\right), \quad b = \arctan\left(\frac{z}{x}\right)$

where d is the distance to the point, a is the polar angle, and b is the azimuth angle. The parameter d can also be described as "the length of the vector from the origin to the point" (which is clear from the equation). If we use normalized coordinates, we can avoid the division when finding the polar angle. Actually, why do we need these angles? By dividing each of them by its maximum range, we get coefficients from zero to one and use them to sample the texture in the shader. When obtaining the coefficient for the azimuth angle, it is necessary to take into account the quadrant in which the angle lies. "But the value of the expression z / x is not defined for x equal to zero," you will say. I will say even more - for z equal to zero the angle will be zero regardless of the value of x.



image
Pic.9 The problem of choosing between 0 and 1 for texture coordinates

Here is the code for obtaining texture coordinates from spherical coordinates. Note that because of computational error we cannot compare the components of the normalized vector with zero exactly; instead, we have to compare their absolute values with a certain threshold value (for example, 0.001).

// norm - the normalized coordinates of the point for which we get the texture coordinates
// offset - the normalized coordinates of the center of the sector to which norm belongs
// zeroTreshold - threshold value (0.001)
float2 GetTexCoords(float3 norm, float3 offset)
{
float tX = 0.0f, tY = 0.0f;

bool normXIsZero = abs(norm.x) < zeroTreshold;
bool normZIsZero = abs(norm.z) < zeroTreshold;

if(normXIsZero || normZIsZero){

if(normXIsZero && norm.z > 0.0f)
tX = 0.25f;
else if(norm.x < 0.0f && normZIsZero)
tX = 0.5f;
else if(normXIsZero && norm.z < 0.0f)
tX = 0.75f;
else if(norm.x > 0.0f && normZIsZero){

if(dot(float3(0.0f, 0.0f, 1.0f), offset) < 0.0f)
tX = 1.0f;
else
tX = 0.0f;
}

}else{

tX = atan(norm.z / norm.x);

if(norm.x < 0.0f && norm.z > 0.0f)
tX += 3.141592;
else if(norm.x < 0.0f && norm.z < 0.0f)
tX += 3.141592;
else if(norm.x > 0.0f && norm.z < 0.0f)
tX = 3.141592 * 2.0f + tX;

tX = tX / (3.141592 * 2.0f);
}

tY = acos(norm.y) / 3.141592;

return float2(tX, tY);
}
Here is an intermediate version of the vertex shader:

// startPos - the beginning of the cube face
// vec1, vec2 - direction vectors of the cube face
// gridStep - cell size
// sideSize is the length of the sector edge
// GetTexCoords() - converts spherical coordinates to texture coordinates

VOut ProcessVertex(VIn input)
{
float3 planePos = startPos + vec1 * input.netPos.x * gridStep.x
+ vec2 * input.netPos.y * gridStep.y;

float3 sphPos = normalize(planePos);

float3 normOffset = normalize(startPos + (vec1 + vec2) * sideSize * 0.5f);

float2 tc = GetTexCoords(sphPos, normOffset);

float height = mainHeightTex.SampleLevel(mainHeightTexSampler, tc, 0).x;

float3 posL = sphPos * (sphereRadius + height);

VOut output;
output.posH = mul(float4(posL, 1.0f), worldViewProj);
output.texCoords = tc;

return output;
}
5. Lighting
To realize the dependence of the color of the landscape on lighting, we use the following equation:

$I = L_d \cdot K_d \cdot \cos(a)$

Where I is the color of the point, Ld is the color of the light source, Kd is the color of the material of the illuminated surface, and a is the angle between the vector to the source and the normal to the illuminated surface. This is a special case of Lambert's cosine law. Let's see what is going on here and why. By multiplying Ld by Kd we mean componentwise multiplication of the colors, that is (Ld.r * Kd.r, Ld.g * Kd.g, Ld.b * Kd.b). It may be easier to understand the meaning with the following situation: suppose we want to light an object with a green light source, so we expect the object's color to be in gradations of green. The result (0 * Kd.r, 1 * Kd.g, 0 * Kd.b) gives (0, Kd.g, 0) - exactly what we need. Let's go further. As stated earlier, the cosine of the angle between normalized vectors is their scalar product. Let's consider its maximum and minimum values from our point of view. If the cosine of the angle between the vectors is 1, then this angle is 0 - hence both vectors are collinear (lie on one line).

The same is true for the cosine value -1, only in this case the vectors point in opposite directions. It turns out that the closer the normal vector and the vector to the light source are to being collinear, the higher the illumination coefficient of the surface to which the normal belongs. It is also assumed that a surface cannot be illuminated if its normal points away from the source - that is why I use only positive cosine values.
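
As a tiny illustration of the formula (not the article's code), the componentwise product and the clamped cosine can be written like this:

struct Color { float r, g, b; };

// I = Ld * Kd * max(cos(a), 0)
Color Lambert(const Color &Ld, const Color &Kd, float cosAngle)
{
    float k = cosAngle > 0.0f ? cosAngle : 0.0f; // surfaces facing away are not lit
    return {Ld.r * Kd.r * k, Ld.g * Kd.g * k, Ld.b * Kd.b * k};
}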

I use a parallel (directional) light source, so its position can be ignored. The only thing to keep in mind is that we use a vector towards the light source. That is, if the direction of the rays is (1.0, -1.0, 0), we need to use the vector (-1.0, 1.0, 0). The only tricky part is the normal vector. Calculating the normal to a plane is simple - we take the cross product of two vectors that describe it. It is important to remember that the cross product is anticommutative - one must take into account the order of the factors. In our case, we can get the normal to a triangle, knowing the coordinates of its vertices in grid space, as follows (note that I do not take into account the boundary cases for p.x and p.y):

float3 p1 = GetPosOnSphere(p);
float3 p2 = GetPosOnSphere(float2(p.x + 1, p.y));
float3 p3 = GetPosOnSphere(float2(p.x, p.y + 1));

float3 v1 = p2 - p1;
float3 v2 = p3 - p1;

float3 n = normalize(cross(v1, v2));
But that is not all. Most of the grid vertices belong to four faces at once. To obtain an acceptable result, we must calculate the averaged normal as follows:

Na = normalize(n0 + n1 + n2 + n3)
Implementing this method on the GPU is quite costly - we would need two stages to calculate the normals and then average them. In addition, the efficiency leaves much to be desired. Based on this, I chose another way - to use a normal map (Fig. 10).

image
Pic.10 Normal map

The principle of working with it is the same as with the height map - we transform the spherical coordinates of a grid vertex into texture coordinates and take a sample. But we cannot use this data directly, because we are working with a sphere and each vertex has its own normal that must be taken into account. Therefore, we will treat the normal map data as coordinates in a TBN basis. What is a basis? Here is an example for you. Imagine that you are an astronaut sitting on a beacon somewhere in space. You receive a message from the MCC: "You need to move from the beacon 1 meter to the left, 2 meters up and 3 meters forward." How can this be expressed mathematically? (1, 0, 0) * 1 + (0, 1, 0) * 2 + (0, 0, 1) * 3 = (1, 2, 3). In matrix form, this equation can be expressed as follows:

$(1, 2, 3) \cdot \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = (1, 2, 3)$

Now imagine that you are again sitting on the beacon, only now the MCC tells you: "We sent you the direction vectors - you should move 1 meter along the first vector, 2 meters along the second and 3 along the third." The equation for the new coordinates is as follows:

$P = V_1 \cdot 1 + V_2 \cdot 2 + V_3 \cdot 3$

componentwise entry looks like this:

$P = (V_{1x} \cdot 1 + V_{2x} \cdot 2 + V_{3x} \cdot 3, \; V_{1y} \cdot 1 + V_{2y} \cdot 2 + V_{3y} \cdot 3, \; V_{1z} \cdot 1 + V_{2z} \cdot 2 + V_{3z} \cdot 3)$

Or in the matrix form:

$(1, 2, 3) \cdot \begin{pmatrix} V_{1x} & V_{1y} & V_{1z} \\ V_{2x} & V_{2y} & V_{2z} \\ V_{3x} & V_{3y} & V_{3z} \end{pmatrix} = P$

So, the matrix with the vectors V1, V2 and V3 is the basis, and the vector (1, 2, 3) holds the coordinates in the space of this basis.

Now imagine that you have a set of vectors (a basis M) and you know where you are relative to the beacon (a point P). You need to find your coordinates in the space of this basis - how far you need to move along these vectors to end up in the same place. Let's denote the required coordinates by X:

$X \cdot M = P$

If P, M and X were numbers, we would simply divide both sides of the equation by M, but alas... Let's go another way - according to the property of the inverse matrix:

$M \cdot M^{-1} = I$

where I is the identity matrix. In our case, it looks like this

$I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$

What does this give us? Try multiplying this matrix by X and you get

$X \cdot I = X$

We also need to clarify that multiplication of matrices has the associativity property

$(A \cdot B) \cdot C = A \cdot (B \cdot C)$

We can legitimately consider a vector as a matrix with a single row here.

Given all of the above, we can conclude that in order to leave X alone on the left side of the equation, we need to multiply both sides by the inverse of M in the correct order:

$X \cdot M \cdot M^{-1} = P \cdot M^{-1} \;\Rightarrow\; X = P \cdot M^{-1}$

This result will be needed later.

Now back to our problem. I will use an orthonormal basis - this means that V1, V2 and V3 are mutually orthogonal (form angles of 90 degrees) and have unit length. V1 will be the tangent vector, V2 the bitangent vector, and V3 the normal. In the transposed form traditional for DirectX, the matrix looks like this:

$M_{TBN} = \begin{pmatrix} T_x & T_y & T_z \\ B_x & B_y & B_z \\ N_x & N_y & N_z \end{pmatrix}$

where T is the tangent vector, B is the bitangent vector, and N is the normal. Let's find them. The normal is the easiest - in fact, it is the normalized position of the point. The bitangent vector is equal to the cross product of the normal and the tangent vector. The most difficult one is the tangent vector. It is equal to the direction of the tangent to a circle at the point. Let's analyze this moment. First we find the coordinates of a point on the unit circle in the XZ plane for some angle a:

$p = (\cos a, \; 0, \; \sin a)$

The direction of the tangent to the circle at this point can be found in two ways. The vector to the point on the circle and the tangent vector are orthogonal, so we can simply add pi/2 to the angle a and get the desired direction. According to the pi/2 shift identities:

$\cos\left(a + \frac{\pi}{2}\right) = -\sin a, \quad \sin\left(a + \frac{\pi}{2}\right) = \cos a$

we have the following vector:

$t = (-\sin a, \; 0, \; \cos a)$

We could also use differentiation - see Appendix 3 for more details. In Figure 11 you can see a sphere with a basis constructed for each vertex. The blue vectors denote normals, the red ones tangent vectors, and the green ones bitangent vectors.

image
Pic.11 A sphere with TBN bases at each vertex. Red - tangent vectors, green - bitangent vectors, blue vectors - normal

With the basis sorted out, let's now get the normal map. To do this we use the Sobel filter. The Sobel filter calculates the gradient of the image brightness at each point (roughly speaking, the vector of brightness change). The principle of the filter is that a certain matrix of values, called the "kernel", is applied to each pixel and its neighbors within the dimensions of this matrix. Suppose that we process the pixel P with the kernel K. If it is not on the image boundary, it has eight neighbors - top left, top, top right, and so on. We call them tl, t, tr, l, r, bl, b, br. So, applying the kernel K to this pixel looks as follows:

Pn = tl * K(0,0) + t * K(0,1) + tr * K(0,2) +
     l * K(1,0) + P * K(1,1) + r * K(1,2) +
     bl * K(2,0) + b * K(2,1) + br * K(2,2)

This process is called "convolution". The Sobel filter uses two kernels to compute the vertical and horizontal gradients. We denote them as Kx and Ky:

$K_x = \begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}, \quad K_y = \begin{pmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{pmatrix}$

That is the whole basis - we can start implementing. First, we need to calculate the brightness of the pixel. I use the conversion from the RGB color model to the YUV model for the PAL system:

$Y = 0.3 \cdot R + 0.59 \cdot G + 0.11 \cdot B$

But since our image is initially in grayscale, this step can be skipped. Now we need to convolve the original image with the Kx and Ky kernels. This gives us the X and Y components of the gradient. The magnitude of this vector can also be very useful - we will not use it here, but textures containing the normalized gradient magnitudes have several useful applications. By normalization I mean the following equation:

$V_{norm} = \frac{V - V_{min}}{V_{max} - V_{min}}$

where V is the value that is normalized, Vmin and Vmax are the range of these values. In our case, the minimum and maximum values ​​are monitored during the generation process. Here is an example of a Sobel filter implementation:

float SobelFilter::GetGrayscaleData(const Point2 &Coords)
{
Point2 coords;
coords.x = Math::Saturate(Coords.x, RangeI(0, image.size.width - 1));
coords.y = Math::Saturate(Coords.y, RangeI(0, image.size.height - 1));

int32_t offset = (coords.y * image.size.width + coords.x) * image.pixelSize;

const uint8_t *pixel = &image.pixels[offset];

return (image.pixelFormat == PXL_FMT_R8) ? pixel[0] : (0.30f * pixel[0] + //R
0.59f * pixel[1] + //G
0.11f * pixel[2]); //B
}

void SobelFilter::Process()
{
RangeF dirXVr, dirYVr, magNormVr;

for(int32_t y = 0; y < image.size.height; y++)
for(int32_t x = 0; x < image.size.width; x++){

float tl = GetGrayscaleData({x - 1, y - 1});
float t = GetGrayscaleData({x , y - 1});
float tr = GetGrayscaleData({x + 1, y - 1});

float l = GetGrayscaleData({x - 1, y });
float r = GetGrayscaleData({x + 1, y });

float bl = GetGrayscaleData({x - 1, y + 1});
float b = GetGrayscaleData({x , y + 1});
float br = GetGrayscaleData({x + 1, y + 1});

float dirX = -1.0f * tl + 0.0f + 1.0f * tr +
-2.0f * l + 0.0f + 2.0f * r +
-1.0f * bl + 0.0f + 1.0f * br;

float dirY = -1.0f * tl + -2.0f * t + -1.0f * tr +
0.0f + 0.0f + 0.0f +
1.0f * bl + 2.0f * b + 1.0f * br;

float magNorm = sqrtf(dirX * dirX + dirY * dirY);

int32_t ind = y * image.size.width + x;

dirXData[ind] = dirX;
dirYData[ind] = dirY;
magNData[ind] = magNorm;

dirXVr.Update(dirX);
dirYVr.Update(dirY);
magNormVr.Update(magNorm);
}

if(normaliseDirections){
for(float &dirX : dirXData)
dirX = (dirX - dirXVr.minVal) / (dirXVr.maxVal - dirXVr.minVal);

for(float &dirY : dirYData)
dirY = (dirY - dirYVr.minVal) / (dirYVr.maxVal - dirYVr.minVal);
}

for(float &magNorm : magNData)
magNorm = (magNorm - magNormVr.minVal) / (magNormVr.maxVal - magNormVr.minVal);
}
It should be said that the Sobel kernel is linearly separable, so this method can be optimized.
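
For reference, a hedged sketch of the separable form (an outline of the idea, not the article's implementation): Kx factors into a vertical smoothing pass [1, 2, 1] followed by a horizontal difference pass [-1, 0, 1], and Ky factors the other way around (vertical difference, horizontal smoothing), so each pixel needs two 1D convolutions instead of one 2D convolution.

#include <vector>

// src - grayscale image of width x height; edge pixels are clamped.
void SobelSeparable(const std::vector<float> &src, int width, int height,
                    std::vector<float> &dirX, std::vector<float> &dirY)
{
    auto at = [&](int x, int y) {
        x = x < 0 ? 0 : (x >= width ? width - 1 : x);
        y = y < 0 ? 0 : (y >= height ? height - 1 : y);
        return src[y * width + x];
    };

    // first pass: 1D vertical filters
    std::vector<float> smoothV(width * height), diffV(width * height);
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++) {
            smoothV[y * width + x] = at(x, y - 1) + 2.0f * at(x, y) + at(x, y + 1);
            diffV[y * width + x] = at(x, y + 1) - at(x, y - 1);
        }

    // second pass: 1D horizontal filters over the intermediate results
    auto clampX = [&](int x) { return x < 0 ? 0 : (x >= width ? width - 1 : x); };
    dirX.assign(width * height, 0.0f);
    dirY.assign(width * height, 0.0f);
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++) {
            int row = y * width;
            dirX[row + x] = smoothV[row + clampX(x + 1)] - smoothV[row + clampX(x - 1)];
            dirY[row + x] = diffV[row + clampX(x - 1)] + 2.0f * diffV[row + x] + diffV[row + clampX(x + 1)];
        }
}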

The difficult part is over - it remains to write the X and Y components of the gradient direction into the R and G channels of the normal map pixels. For the Z coordinate I use one. I also use a three-component vector of coefficients to adjust these values. Here is an example of generating a normal map, with comments:

// ImageProcessing::ImageData Image - the original image. The structure contains the pixel format and the image data

ImageProcessing::SobelFilter sobelFilter;

sobelFilter.Init(Image);
sobelFilter.NormaliseDirections() = false;

sobelFilter.Process();

const auto &resX =sobelFilter.GetFilteredData(ImageProcessing::SobelFilter::SOBEL_DIR_X);
const auto &resY =sobelFilter.GetFilteredData(ImageProcessing::SobelFilter::SOBEL_DIR_Y);

ImageProcessing::ImageData destImage = {DXGI_FORMAT_R8G8B8A8_UNORM, Image.size};

size_t dstImageSize = Image.size.width * Image.size.height * destImage.pixelSize;

std::vector<uint8_t> dstImgPixels(dstImageSize);

for(int32_t d = 0 ; d < resX.size(); d++){

    // apply the vector of tuning coefficients. In my case it equals (0.03, 0.03, 1.0)
Vector3 norm = Vector3::Normalize({resX[d] * NormalScalling.x,
resY[d] * NormalScalling.y,
1.0f * NormalScalling.z});

Point2 coords(d % Image.size.width, d / Image.size.width);

int32_t offset = (coords.y * Image.size.width + coords.x) * destImage.pixelSize;

uint8_t *pixel = &dstImgPixels[offset];

    // translate the values from [-1.0, 1.0] to [0.0, 1.0] and then to [0, 255]
pixel[0] = (0.5f + norm.x * 0.5f) * 255.999f;
pixel[1] = (0.5f + norm.y * 0.5f) * 255.999f;
pixel[2] = (0.5f + norm.z * 0.5f) * 255.999f;
}

destImage.pixels = &dstImgPixels[0];

SaveImage(destImage, OutFilePath);
Now I will give an example of using the normal map in the shader:

// texCoords - texture coordinates obtained in the way described in section 4
// normalL - vertex normal
// lightDir - vector towards the light source
// Ld - the color of the light source
// Kd - the color of the material of the illuminated surface

float4 normColor = mainNormalTex.SampleLevel(mainNormalTexSampler, texCoords, 0);

// translate the value from [0.0, 1.0] to [-1.0, 1.0] and normalize the result
float3 normalT = normalize(2.0f * normColor.rgb - 1.0f);

// translate the texture coordinate X from the area [0.0, 1.0] to [0.0, Pi * 2.0]
float ang = texCoords.x * 3.141592f * 2.0f;

float3 tangent;
tangent.x = -sin(ang);
tangent.y = 0.0f;
tangent.z = cos(ang);

float3 bitangent = normalize(cross(normalL, tangent));

float3x3 tbn = float3x3(tangent, bitangent, normalL);

float3 resNormal = mul(normalT, tbn);

float diff = saturate(dot(resNormal, lightDir.xyz));

float4 resColor = Ld * Kd * diff;
6. Level Of Detail
Well, now our landscape is lit! You can fly to the Moon: load a height map, set the material color, load the sectors, set the grid size to {16, 16} and... Hmm, something is missing - let me set it to {256, 256} - oh, now everything slows down, and why do the far sectors need such high detail anyway? Besides, the closer the observer is to the planet, the fewer sectors he can see. Yes... we still have a lot of work to do! Let's first figure out how to cull unnecessary sectors. The determining value here is the height of the observer above the surface of the planet - the higher it is, the more sectors he can see (Fig. 12).

image
Fig. 12 Dependence of the number of processed sectors on the observer's height

We find the height in the following way: we build a vector from the observer's position to the center of the sphere, calculate its length and subtract the sphere radius from it. Earlier I said that if the scalar product of the vector to the observer and the vector to the center of the sector is less than zero, then this sector does not interest us - now, instead of zero, we will use a value that depends linearly on the height. First, let's define the variables: we have the minimum and maximum values of the scalar product and the minimum and maximum values of the height. We construct the following system of equations:

$\begin{cases} A + B \cdot H_{min} = D_{max} \\ A + B \cdot H_{max} = D_{min} \end{cases}$

Now we express A in the second equation

$A = D_{min} - B \cdot H_{max}$

substitute A from the second equation into the first

$D_{min} - B \cdot H_{max} + B \cdot H_{min} = D_{max}$

we express B from the first equation

$B = \frac{D_{max} - D_{min}}{H_{min} - H_{max}}$

substitute B from the first equation into the second

$A = D_{min} - \frac{D_{max} - D_{min}}{H_{min} - H_{max}} \cdot H_{max}$

Now substitute the variables in the function

$D(H) = A + B \cdot H$

and we get

$D(H) = D_{min} + (D_{max} - D_{min}) \cdot \frac{H - H_{max}}{H_{min} - H_{max}}$

Where Hmin and Hmax are the minimum and maximum height values, Dmin and Dmax are the minimum and maximum values ​​of the scalar product. This problem can be solved differently - see Appendix 4.
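
A small C++ sketch of evaluating this threshold (by analogy with the tessellation factor code given later in the article; the names are mine):

// D(Hmax) = Dmin (more sectors pass the test), D(Hmin) = Dmax (fewer sectors pass).
float DotThreshold(float height, float hMin, float hMax, float dMin, float dMax)
{
    float t = (height - hMax) / (hMin - hMax); // 0 at hMax, 1 at hMin
    float d = dMin + (dMax - dMin) * t;

    // clamp to the valid range
    if (d < dMin) d = dMin;
    if (d > dMax) d = dMax;
    return d;
}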

Now we need to deal with the levels of detail. Each of them determines a range of scalar product values. In pseudo-code, the process of determining which level a sector belongs to looks like this:

loop over all sectors

    calculate the scalar product of the vector to the observer and the vector to the center of the sector

    if the scalar product is less than the minimum threshold calculated earlier
        go to the next sector

    loop over the levels of detail
        if the scalar product is within the limits defined for this level
            add the sector to this level

    end of loop over the levels of detail

end of loop over all sectors
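
A rough C++ translation of this pseudo-code (the structures and names are illustrative; lod.dotRange corresponds to the ranges computed below):

#include <vector>

struct DotRange { float minVal, maxVal; };
struct Lod { DotRange dotRange; std::vector<int> sectors; };

// viewDots[i] - scalar product of the normalized vector to the observer and
// the normalized vector to the center of sector i.
void AssignSectorsToLods(const std::vector<float> &viewDots, float minThreshold,
                         std::vector<Lod> &lods)
{
    for (Lod &lod : lods)
        lod.sectors.clear();

    for (int s = 0; s < (int)viewDots.size(); s++) {
        float d = viewDots[s];

        if (d < minThreshold) // below the threshold derived from the observer's height
            continue;

        for (Lod &lod : lods) {
            if (d >= lod.dotRange.minVal && d <= lod.dotRange.maxVal) {
                lod.sectors.push_back(s);
                break;
            }
        }
    }
}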
We need to calculate the value range for each level. First, we construct a system of two equations

$\begin{cases} A \cdot B^{0} = R_{max} \\ A \cdot B^{N-1} = R_{min} \end{cases}$ (N is the number of detail levels)

solving it, we get

$A = R_{max}, \quad B = \left(\frac{R_{min}}{R_{max}}\right)^{\frac{1}{N - 1}}$

Using these coefficients, we define the function

$F(x) = A \cdot B^{x}$

where Rmax is the range of the scalar product values (D(H) - Dmin), and Rmin is the minimum range allotted to a level. I use the value 0.01. Now we need to subtract the result from Dmax:

$D(x) = D_{max} - A \cdot B^{x}$

With this function, we get the areas for all levels. Here is an example:

const float dotArea = dotRange.maxVal - dotRange.minVal;
const float Rmax = dotArea, Rmin = 0.01f;
float lodsCnt = lods.size();

float A = Rmax;
float B = powf(Rmin / Rmax, 1.0f / (lodsCnt - 1.0f));

for(size_t g = 0; g < lods.size(); g++){

lods[g].dotRange.minVal = dotRange.maxVal - A * powf(B, g);
lods[g].dotRange.maxVal = dotRange.maxVal - A * powf(B, g + 1);
}

lods[lods.size() - 1].dotRange.maxVal = 1.0f;
Now we can determine to what level of detail the sector belongs (Fig. 13).

image
Fig.13 Color differentiation of sectors according to the level of detail

Next, we need to deal with the size of the grid. Keeping a separate grid for each level would be very expensive - it is much more efficient to change the detail of a single grid on the fly using tessellation. To do this we need, in addition to the vertex and pixel shaders, to implement hull and domain shaders. The main task of the hull shader is to prepare the control points. It consists of two parts - the main function and the function that calculates the parameters of a control point. You must specify values for the following attributes:
domain
partitioning
outputtopology
outputcontrolpoints
patchconstantfunc
Here is an example of a hull shader for triangle tessellation:

struct PatchData
{
float edges[3] : SV_TessFactor;
float inside : SV_InsideTessFactor;
};

PatchData GetPatchData(InputPatch<VIn, 3> Patch, uint PatchId : SV_PrimitiveID)
{
PatchData output;

float tessFactor = 2.0f;

output.edges[0] = tessFactor;
output.edges[1] = tessFactor;
output.edges[2] = tessFactor;

output.inside = tessFactor;

return output;
}

[domain("tri")]
[partitioning("integer")]
[outputtopology("triangle_cw")]
[outputcontrolpoints(3)]
[patchconstantfunc("GetPatchData")]

VIn ProcessHull(InputPatch<VIn, 3> Patch,
uint PointId : SV_OutputControlPointID,
uint PatchId : SV_PrimitiveID)
{
return Patch[PointId];
}
As you can see, the main work is done in GetPatchData(). Its task is to set the tessellation factor. We will talk about it later; now let's move on to the domain shader. It receives control points from the hull shader and coordinates from the tessellator. In the case of triangle tessellation, the new value of the position or texture coordinates is calculated using the following formula:

N = C1 * F.x + C2 * F.y + C3 * F.z

where C1, C2 and C3 are the control points and F is the coordinate from the tessellator. Also, in the domain shader you need to set the domain attribute, whose value must correspond to the one specified in the hull shader. Here is an example of a domain shader:

cbuffer buff0 : register(b0)
{
matrix worldViewProj;
}

struct PatchData
{
float edges[3] : SV_TessFactor;
float inside : SV_InsideTessFactor;
};

[domain("quad")]
PIn ProcessDomain(PatchData Patch,
float3 Coord : SV_DomainLocation,
const OutputPatch<VIn, 3> Tri)
{

float3 posL = Tri[0].posL * Coord.x +
Tri[1].posL * Coord.y +
Tri[2].posL * Coord.z;

float2 texCoords = Tri[0].texCoords * Coord.x +
Tri[1].texCoords * Coord.y +
Tri[2].texCoords * Coord.z;

PIn output;
output.posH = mul(float4(posL, 1.0f), worldViewProj);
output.normalW = Tri[0].normalW;
output.texCoords = texCoords;

return output;
}
The role of the vertex shader in this case is minimized - for me it simply "pushes" the data to the next stage.

Now we need to implement something along these lines. Our primary task is to calculate the tessellation factor, or more precisely, to express its dependence on the height of the observer. We again construct a system of equations:

$\begin{cases} A + B \cdot H_{max} = T_{min} \\ A + B \cdot H_{min} = T_{max} \end{cases}$

solving it in the same way as before, we get

$T(H) = T_{min} + (T_{max} - T_{min}) \cdot \frac{H - H_{max}}{H_{min} - H_{max}}$

where Tmin and Tmax are the minimum and maximum tessellation coefficients, Hmin and Hmax are the minimum and maximum values ​​of the observer's height. The minimum tessellation coefficient for me is one. The maximum is set separately for each level
(e.g. 1, 2, 4, 16).

Later we will need the growth of the factor to be clamped to the nearest power of two: for values from 2 to 3 we set the value to 2, for values from 4 to 7 we set 4, for values from 8 to 15 the factor will be 8, and so on. Let's solve this problem for the factor 6. First we solve the following equation:

$2^{x} = 6$

let's take the decimal logarithm of both sides of the equation

$\log(2^{x}) = \log 6$

according to the property of logarithms, we can rewrite the equation as follows

$x \cdot \log 2 = \log 6$

Now it remains to divide both sides by log(2):

$x = \frac{\log 6}{\log 2}$

But that is not all. X is approximately 2.58. Next, we need to discard the fractional part and raise two to the power of the resulting number. Here is the code for calculating the tessellation factors for the detail levels:

float h = camera->GetHeight();
const RangeF &hR = heightRange;

for(LodsStorage::Lod &lod : lods){

//derived from the system
//A + B * Hmax = Tmin
//A + B * Hmin = Tmax
//by expressing A and then substituting it to find B

float mTf = (float)lod.GetMaxTessFactor();

float tessFactor = 1.0f + (mTf - 1.0f) * ((h - hR.maxVal) / (hR.minVal - hR.maxVal));
tessFactor = Math::Saturate(tessFactor, RangeF(1.0f, mTf));

float nearPowOfTwo = pow(2.0f, floor(log(tessFactor) / log(2)));
lod.SetTessFactor(nearPowOfTwo);
}
7. Noise
Let's see how we can increase the detail of the terrain without changing the size of the height map. The following comes to mind: add to the height a value obtained from a gradient noise texture. The coordinates at which we sample will be N times the main ones. The sampler will use the mirror addressing mode (D3D11_TEXTURE_ADDRESS_MIRROR) (see Fig. 14).

image
Fig. 14 A sphere with a height map + a sphere with a noise map = a sphere with the summed heights

In this case, the height will be calculated as follows:

// float2 tc1 - texture coordinates obtained from the normalized point, as described
// earlier
// texCoordsScale - the texture coordinate multiplier. In my case it is equal to 300
// mainHeightTex, mainHeightTexSampler - height map texture
// distHeightTex, distHeightTexSampler - gradient noise texture
// maxTerrainHeight - maximum height of the terrain. In my case, 0.03

float2 tc2 = tc1 * texCoordsScale;

float4 mainHeighTexColor = mainHeightTex.SampleLevel(mainHeightTexSampler, tc1, 0);
float4 distHeighTexColor = distHeightTex.SampleLevel(distHeightTexSampler, tc2, 0);

float height = (mainHeighTexColor.x + distHeighTexColor.x) * maxTerrainHeight;
For now the periodic pattern is quite noticeable, but with the addition of lighting and texturing the situation will change for the better. So what is a gradient noise texture? Roughly speaking, it is a lattice of random values. Let's figure out how to match the dimensions of the lattice to the size of the texture. Suppose we want to create a noise texture of 256 by 256 pixels. If the dimensions of the lattice match the size of the texture, we get something like the white noise on a TV. But what if our lattice has dimensions of, say, 2 by 2? The answer is simple - use interpolation. One formulation of linear interpolation looks like this:

$L(a, b, t) = a + (b - a) \cdot t$

This is the fastest, but at the same time, the least suitable option for us. It is better to use interpolation based on the cosine:

$t' = \frac{1 - \cos(\pi t)}{2}, \quad C(a, b, t) = a + (b - a) \cdot t'$
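
Both variants in C++, as a sketch (my illustration): cosine interpolation simply remaps the linear parameter t with (1 - cos(pi * t)) / 2 before blending, which removes the creases at the lattice nodes.

#include <cmath>

// linear interpolation between a and b, t in [0, 1]
float LerpLinear(float a, float b, float t)
{
    return a + (b - a) * t;
}

// cosine interpolation: the remapped parameter has zero derivative at t = 0 and t = 1
float LerpCosine(float a, float b, float t)
{
    float t2 = (1.0f - std::cos(t * 3.141592f)) * 0.5f;
    return a + (b - a) * t2;
}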

But we can not just
KlauS 6 october 2017, 11:22