UV Interpolation for Procedural Mesh

When you create your own procedural mesh that shrinks towards one edge, the UVs get distorted (see the image below) because the UV coordinate interpolation has no information about depth.

HLSL, C#

This is the problem!

The UV coordinates are linearly interpolated across the triangle.

All modern hardware implements perspective-correct mapping, which takes the depth of the actual vertex position into account.

This of course only works if the actual edges have the same length in 3D space and one just looks smaller because it is further away in perspective.

If you actually make the geometry shrink at one end, it is not possible for the hardware to calculate properly corrected coordinates.
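For reference, here is the difference between plain affine interpolation and perspective-correct interpolation of an attribute u between two vertices with clip-space depths w0 and w1, at interpolation parameter t:

affine:              u(t) = (1 - t) * u0 + t * u1
perspective-correct: u(t) = ((1 - t) * u0 / w0 + t * u1 / w1) / ((1 - t) / w0 + t / w1)

When w0 equals w1 both formulas give the same result, which is why the distortion shows up exactly when the mesh shrinks geometrically instead of being foreshortened by perspective.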


Resource

https://bitlush.com/blog/arbitrary-quadrilaterals-in-opengl-es-2-0

The affine mapping does not know the depth of the two further-away points, so the interpolation happens per triangle in screen space. Since the lower-left triangle has a longer physical edge, the texture appears larger across it and is interpolated linearly, while the upper-right triangle has a shorter edge.
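To make the setup concrete, here is a minimal sketch that reproduces the artifact with the standard Unity Mesh API (not from the original project; the TrapezoidQuad component name and the dimensions are made up). With plain two-component UVs, the texture shows a visible kink along the shared diagonal of the two triangles:

// Minimal trapezoid quad with ordinary 2-component UVs.
// Attach to a GameObject that already has a MeshFilter and MeshRenderer.
using UnityEngine;

public class TrapezoidQuad : MonoBehaviour
{
    void Start()
    {
        var mesh = new Mesh();

        // The bottom edge is twice as long as the top edge, so the quad
        // shrinks towards the top while all vertices share the same depth.
        mesh.vertices = new[]
        {
            new Vector3(-1.0f, 0f, 0f), // 0: bottom left
            new Vector3( 1.0f, 0f, 0f), // 1: bottom right
            new Vector3(-0.5f, 1f, 0f), // 2: top left
            new Vector3( 0.5f, 1f, 0f), // 3: top right
        };

        mesh.uv = new[]
        {
            new Vector2(0f, 0f),
            new Vector2(1f, 0f),
            new Vector2(0f, 1f),
            new Vector2(1f, 1f),
        };

        // Two triangles; each one is interpolated independently in screen
        // space, which is where the diagonal seam in the texture comes from.
        mesh.triangles = new[] { 0, 2, 1, 2, 3, 1 };
        mesh.RecalculateNormals();

        GetComponent<MeshFilter>().mesh = mesh;
    }
}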

If you want to provide geometry that is distorted like this, you may need to provide UV coordinates with 4 components ("strq" or "stpq") instead of two ("uv" or "st"). The 4-component UV coordinate basically lets you specify a scale factor for each UV direction at each vertex. This allows you to specify a shrinking factor that is taken into account during interpolation. Keep in mind that all vertex attributes are linearly interpolated across the triangle during rasterization.

To specify the right UVs you just have to think about specifying the "length" of the edges in the 3rd and 4th coordinate. In general it looks something like this:
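A hedged sketch of that idea, continuing the trapezoid example above (requires using System.Collections.Generic; the names q and uvs and the value 0.5f are just illustrative):

// q = this vertex's edge length / longest edge length of the mesh.
// Multiply the regular UV by q and store q in the matching divisor component;
// the fragment shader divides it back out after interpolation.
float q = 0.5f; // the top edge is half as long as the bottom edge
var uvs = new List<Vector4>
{
    new Vector4(0f,     0f, 1f, 1f), // bottom left  (longest edge, q = 1)
    new Vector4(1f,     0f, 1f, 1f), // bottom right (longest edge, q = 1)
    new Vector4(0f * q, 1f, q,  1f), // top left     (short edge)
    new Vector4(1f * q, 1f, q,  1f), // top right    (short edge)
};
mesh.SetUVs(0, uvs); // replaces the plain 2-component UVs on channel 0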

The graphics hardware pipeline actually supports a separate 4x4 texture matrix just for processing texture coordinates.

However, those aren't really used nowadays. Traditionally, the texture offset and scale would be encoded in the texture matrix.

However, since Unity doesn't support texture rotation, it encodes the scale and offset into a single Vector4 and just applies the scaling and offset "manually".

In most cases that's actually enough and we don't need a matrix multiplication for this.
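As a small illustration of that (standard Unity Material API; the values are arbitrary):

// Tiling (scale) and offset set from C#. In the shader they show up as the
// float4 _MainTex_ST (xy = tiling, zw = offset), which is applied to the UVs
// directly instead of going through a texture matrix.
var material = GetComponent<MeshRenderer>().material;
material.mainTextureScale = new Vector2(2f, 2f);    // tiling
material.mainTextureOffset = new Vector2(0.5f, 0f); // offset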

OpenGL actually has 4 separate matrices ("modelview", "projection", "texture" and "color").

See glMatrixMode. In OpenGL the modelview matrix is the combination of the local-to-world (model) matrix and the camera (view) matrix.

Store two more components in the UV data. Unity now accepts a 3rd and 4th UV coordinate in the vertex attributes.

...
private readonly List<Vector4> UV4s = new List<Vector4>();
... // loop as many times as there are vertices

// "ratio" is the current edge's length / the longest edge length of the mesh,
// i.e. a normalized edge length.
UV4s.Add(new Vector4(uv_val, 0f * ratio, 1f, ratio));
UV4s.Add(new Vector4(uv_val, 1f * ratio, 1f, ratio));
...

mesh.SetUVs(0, UV4s);
// Mesh.SetUVs(int, List<Vector4>) - Unity now has overloads of SetUVs that accept different UV coordinate types.

Unfortunately none of the built-in shaders actually use homogeneous texture coordinates, so this requires a custom shader.

struct appdata_t {
    ...
    float4 texcoord : TEXCOORD0; // four components instead of two
    ...
};

struct v2f {
    ...
    float4 texcoord : TEXCOORD0; // four components instead of two
    ...
};

// In the fragment shader: divide the interpolated UV by the interpolated
// scale factors to undo the per-vertex scaling.
if (i.texcoord.z != 0 && i.texcoord.w != 0)
    i.texcoord.xy = i.texcoord.xy / i.texcoord.zw;
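On the C# side, the renderer then needs a material built from that custom shader (the shader name "Custom/HomogeneousUV" is just a placeholder for whatever the shader is called):

// Assign the custom shader so the 3rd/4th UV components are actually used.
// "Custom/HomogeneousUV" is a hypothetical name - replace it with your shader's name.
var renderer = GetComponent<MeshRenderer>();
renderer.material = new Material(Shader.Find("Custom/HomogeneousUV"));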