Vladimir Chavdarov
Tools and Gameplay Programmer
My original blog post from 2023: link to Hashnode
I have always had a soft spot for RTS games and in the past 6 weeks, I had the opportunity to develop a tool similar to WarCraft III's World Editor from 2002. The video above shows the main features of my Map Maker. I managed to make the main terraforming brushes needed for every terrain editor as well as brushes for a secondary texture, usually serving as highlight for gameplay or aesthetic reasons. In this blog post, I will explain the algorithms used in the creation of the tool's features as well as some of the challenges I went through while developing it. All of this is built on Bee, an engine developed by the CMGT lecturers at BUas.
One of the first things I had to do was set up the camera controls. I wanted something that gives a lot of movement freedom and is also intuitive to use in the context of terrain editing. Since I'm not a UX designer, I took inspiration from WarCraft III's World Editor (as I did with many other things). Translation is done by holding down the Right Mouse Button, while rotation requires holding down both the Right Mouse Button and the Ctrl key.
EnTT (a popular ECS) is implemented in Bee and the engine provides us with neat Camera and Transform components ready to use. First, I initialize an entity to represent my camera like so:
{ // Camera
const auto entity = Engine.ECS().CreateEntity();
auto& transform = Engine.ECS().CreateComponent<Transform>(entity);
transform.Name = "Camera";
auto projection = glm::perspective(glm::radians(60.0f), 1.77f, 0.2f, 500.0f);
static float rotate = glm::radians(-90.0f);
glm::vec3 eye = vec3(0.0f, -0.00001f, 4.0f);
glm::vec3 center = vec3(0.0f, 0.0f, 0.0f);
glm::vec3 up = vec3(0.0f, 0.0f, 1.0f);
auto view = glm::lookAt(eye, center, up);
auto& camera = Engine.ECS().CreateComponent<Camera>(entity);
camera.Projection = projection;
camera.Yaw = -90.0f;
camera.Pitch = 0.0f;
const auto viewInv = inverse(view);
Decompose(viewInv, transform.Translation, transform.Scale, transform.Rotation);
}
Before we start translating, we need to make sure that the camera's Front and Right vectors are properly calculated. LearnOpenGL explains in detail how these calculations are done and why; I definitely did not reinvent the wheel.
If we skip this step and try to move the camera "forward", it would simply travel along the Y axis (or to the "North"). We recalculate these vectors whenever the camera's rotation changes.
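That recalculation can be sketched roughly like this, in the LearnOpenGL style. Note that the types, function names, and the Z-up convention here are my assumptions for illustration, not Bee's actual API:

```cpp
#include <cmath>

// Minimal stand-in vector type for the sketch
struct Vec3 { float x, y, z; };

static Vec3 Normalize(const Vec3& v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

static Vec3 Cross(const Vec3& a, const Vec3& b)
{
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Recompute Front/Right from yaw and pitch (in radians), assuming a Z-up world
void UpdateCameraVectors(float yaw, float pitch, Vec3& front, Vec3& right)
{
    front.x = std::cos(yaw) * std::cos(pitch);
    front.y = std::sin(yaw) * std::cos(pitch);
    front.z = std::sin(pitch);
    front = Normalize(front);
    const Vec3 world_up = {0.0f, 0.0f, 1.0f};
    right = Normalize(Cross(front, world_up));
}
```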
For the rotation itself, I decided to go with Euler angles on the user's side. Then, I do an Euler-to-quaternion conversion to avoid Gimbal Lock.
vec3 rotation_euler = glm::eulerAngles(transform.Rotation);
if (input.GetMouseButton(Input::MouseButton::Right))
{
vec2 mouse_offset = mouse_pos_clicked - input.GetMousePosition();
rotation_euler.x += mouse_offset.y * rotate_step; // pitch
rotation_euler.z += mouse_offset.x * rotate_step; // yaw
}
transform.Rotation = glm::quat(rotation_euler);
I convert the quaternion to an Euler angles vector using glm::eulerAngles. Then I simply calculate the difference in mouse position between the last frame and the current one and add it to the pitch and yaw respectively. Finally, I convert back to a quaternion using glm::quat's Euler angles constructor. This is what the algorithm gives us:
The translation follows a similar pattern but the direction is tied to the Front and Right vectors, as explained earlier.
vec2 mouse_offset = mouse_pos_clicked - input.GetMousePosition();
float velocity_x = mouse_offset.x * move_step;
float velocity_y = mouse_offset.y * move_step;
transform.Translation += camera.Front * velocity_x;
transform.Translation += camera.Right * velocity_y;
Combined with the rotation, this yields the following result:
In order to perform any terraforming on the map, we need to know where our mouse is relative to the mesh. This is done in two main steps:
Projecting the 2D screen coordinates of the mouse into 3D world coordinates and casting a ray from the camera to this point.
Checking for a point of intersection between the ray and the terrain mesh, if there is one.
The raycasting in my scene follows this article by Anton Gerdelan. Usually, the rendering pipeline converts models from 3D space into 2D space - that is how we get something on our screens! In this case, however, we are doing the exact opposite:
glm::vec3 Map_Editor::GetRayFromScreenToWorld(const glm::vec2& screen_pos, const bee::Camera& camera, const bee::Transform& cam_transform)
{
// convert to game window coordinates and sizes
auto scr_width = Engine.Inspector().GetGameSize().x;
auto scr_height = Engine.Inspector().GetGameSize().y;
vec2 g_screen_pos = vec2(0.0f, 0.0f); // g for game
g_screen_pos.x = screen_pos.x - Engine.Inspector().GetGamePos().x;
g_screen_pos.y = screen_pos.y - Engine.Inspector().GetGamePos().y;
// do ray casting
glm::mat4 inverted = glm::inverse(camera.Projection * GetCameraViewMatrix(cam_transform));
glm::vec4 a_near = glm::vec4((g_screen_pos.x - (scr_width / 2)) / (scr_width / 2), -1 * (g_screen_pos.y - (scr_height / 2)) / (scr_height / 2), -1.0, 1.0);
glm::vec4 a_far = glm::vec4((g_screen_pos.x - (scr_width / 2)) / (scr_width / 2), -1 * (g_screen_pos.y - (scr_height / 2)) / (scr_height / 2), 1.0, 1.0);
glm::vec4 nearResult = inverted * a_near;
glm::vec4 farResult = inverted * a_far;
nearResult /= nearResult.w;
farResult /= farResult.w;
glm::vec3 direction = glm::vec3(farResult - nearResult);
glm::vec3 rayDirection = glm::normalize(direction);
return rayDirection;
}
This gives us a normalized direction vector originating from the camera's position. Now we need to find the point of intersection between this ray and the terrain mesh.
There are several ways to do this, but after some research I decided to follow the Möller-Trumbore intersection algorithm. Fortunately, glm has a function called glm::intersectLineTriangle which is based on the paper. Checking every triangle in the mesh can be cumbersome, but it is acceptable for the current scope of the project, and I didn't want to sacrifice accuracy. After experimenting with the function, I realized its documentation was a little misleading. Instead of giving us the point of intersection, the output argument glm::vec3 position returns barycentric coordinates as well as the height difference between the triangle and the start of the intersecting ray. Using this information, I can calculate the point of intersection in world space like so:
bool Terrain::FindRayMeshIntersection(const glm::vec3& ray_start, const glm::vec3& ray_dir, glm::vec3& result)
{
    for (size_t i = 0; i < m_indices.size(); i += 3)
    {
        const int a = m_indices[i];
        const int b = m_indices[i + 1];
        const int c = m_indices[i + 2];
        glm::vec3 bary = vec3(0.0f, 0.0f, 0.0f);
        if (glm::intersectLineTriangle(
                ray_start,
                ray_dir,
                m_vertices[a].position,
                m_vertices[b].position,
                m_vertices[c].position,
                bary))
        {
            // reconstruct the world-space point from the barycentric coordinates
            const float v = bary.y;
            const float w = bary.z;
            const float u = 1.0f - (v + w);
            result.x = u * m_vertices[a].position.x + v * m_vertices[b].position.x + w * m_vertices[c].position.x;
            result.y = u * m_vertices[a].position.y + v * m_vertices[b].position.y + w * m_vertices[c].position.y;
            result.z = u * m_vertices[a].position.z + v * m_vertices[b].position.z + w * m_vertices[c].position.z;
            return true;
        }
    }
    return false;
}
After I have the point of intersection, I simply snap the tool to the nearest point in the grid.
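That snapping step boils down to rounding each coordinate to the nearest grid point. A minimal sketch, assuming a uniform cell size (the parameter is my assumption; the editor may store it differently):

```cpp
#include <cmath>

// Round a world-space coordinate to the nearest grid point.
// cell_size is a hypothetical parameter for the grid spacing.
float SnapToGrid(float world_coord, float cell_size)
{
    return std::round(world_coord / cell_size) * cell_size;
}
```

Applying this per axis snaps the brush cursor onto the grid regardless of where exactly the ray hit the triangle.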
With that, we have the basic means of interaction with the map!
At the beginning of the project, I decided that all mesh modifications would be made by editing the vertex data. This meant avoiding shader programming, which might not have been the wisest decision, but it was a decision nonetheless.
Each vertex stores the position, UVs and normal at its location on the grid. All of this is wrapped in a simple struct:
struct vertex
{
glm::vec3 position = glm::vec3(0.0f);
glm::vec3 normal = glm::vec3(0.0f);
glm::vec2 uv = glm::vec2(0.0f);
int grid_index = 0;
int type = ground_dirt;
};
Vertices vs Grid Points
If we look at the mesh, it appears that the quads (or cells) share vertices. However, the mesh is structured so that each cell has its own unique vertices. This is very important for assigning correct UVs, and we will come back to it in the following sections. At the same time, I also want to represent the mesh as a simple grid. This is where the grid points come into play.
struct grid_point
{
int type;
std::vector<std::reference_wrapper<vertex>> vertices;
const vertex& GetFirst() const { return vertices[0]; }
};
In order to get a better understanding of these definitions, let's look at this picture:
The black color represents the vertices while the red represents the grid points. Each grid point has access to all of the vertices that occupy the respective place. For example, Gp4 has reference access to V2, V7, V9 and V12, Gp1 has access to V1 and V4 and so on. This structure gives us the ability to treat the mesh as a simple grid while the vertices of each quad can still have unique properties, like UVs.
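The payoff of this layout is that a brush only ever touches grid points, and the change fans out to every coincident vertex. A stripped-down sketch of that mechanic, using simplified versions of the structs above:

```cpp
#include <functional>
#include <vector>

// Simplified mirror of the article's structs: only the height matters here.
struct vertex { float z = 0.0f; };

// A grid point holds references to the per-quad vertices that coincide at its location.
struct grid_point
{
    std::vector<std::reference_wrapper<vertex>> vertices;
};

// Raising a grid point propagates the change to every coincident vertex.
void RaiseGridPoint(grid_point& gp, float amount)
{
    for (auto& v : gp.vertices) v.get().z += amount;
}
```

For a point like Gp4 above, the `vertices` vector would hold references to V2, V7, V9 and V12, so one call moves all four together.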
Since the mesh will constantly be changed, we need to make sure the vertex buffers get updated every time this occurs. Bee engine provides us with bee::Mesh::SetAttribute - a very convenient templated function which allows us to do just that. It needs to be called for each vertex attribute:
m_mesh->SetIndices(m_indices);
m_mesh->SetAttribute(Mesh::Attribute::Position, m_positions);
m_mesh->SetAttribute(Mesh::Attribute::Normal, m_normals);
m_mesh->SetAttribute(Mesh::Attribute::Texture, m_uvs);
It is important to note that this puts a noticeable strain on the app's performance, so it should be called as sparingly as possible. After all, we re-upload the whole mesh every time we use a brush!
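Since the vertex data lives in an array of structs but SetAttribute takes one array per attribute, the structs have to be flattened before upload. A rough sketch of that repacking step (the type names and the function are my stand-ins, not Bee's API):

```cpp
#include <vector>

// Illustrative stand-ins for glm types and the vertex struct from earlier
struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };
struct vertex { Vec3 position; Vec3 normal; Vec2 uv; };

// Flatten array-of-structs vertex data into the struct-of-arrays layout
// that per-attribute upload functions expect.
void RepackAttributes(const std::vector<vertex>& vertices,
                      std::vector<Vec3>& positions,
                      std::vector<Vec3>& normals,
                      std::vector<Vec2>& uvs)
{
    positions.clear();
    normals.clear();
    uvs.clear();
    positions.reserve(vertices.size());
    normals.reserve(vertices.size());
    uvs.reserve(vertices.size());
    for (const auto& v : vertices)
    {
        positions.push_back(v.position);
        normals.push_back(v.normal);
        uvs.push_back(v.uv);
    }
}
```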
In order for Level Designers to achieve quality results in map making, they need the proper tools. There are a couple of terraforming brushes that almost any editor has: Raise, Lower, Smooth and Plateau.
I decided to follow the design style of Dawn of War's Mission Editor - their tools provide a lot of creative freedom to Level Designers when it comes to terraforming, the brushes' algorithms are intuitive, and a user can more or less spot the logic behind them.
Raise and Lower
Usually, these types of brushes use some sort of Gaussian (normal) distribution, or a multivariate normal distribution in the context of a 3D world.
In order to apply the correct amount of "height change", we need to know the distance between the center of the distribution and the current point we are calculating for. After that, we apply the Gaussian function. The algorithm looks like so:
float spread = static_cast<float>(m_radius) / 2.4f;
for (auto& index : m_point_indices)
{
    // grid coordinates of the current point (each row holds m_width + 1 points)
    vec2 index_coord = vec2(static_cast<int>(index % (terrain->m_width + 1)), static_cast<int>(index / (terrain->m_width + 1)));
    // grid coordinates of the center point of the brush (peak of the Gaussian distribution)
    vec2 center_index_coord = vec2(static_cast<int>(m_hovered_gridpoint_index % (terrain->m_width + 1)), static_cast<int>(m_hovered_gridpoint_index / (terrain->m_width + 1)));
    // Pythagorean theorem
    double distance_squared = (index_coord.x - center_index_coord.x) * (index_coord.x - center_index_coord.x) + (index_coord.y - center_index_coord.y) * (index_coord.y - center_index_coord.y);
    // Gaussian distribution formula
    double height_change = std::exp(-0.5 * distance_squared / (spread * spread));
    // apply to every vertex that shares this grid point
    for (size_t i = 0; i < terrain->m_grid_points[index].vertices.size(); i++)
        terrain->m_grid_points[index].vertices[i].get().position.z += height_change * m_intensity * deltaTime;
}
Plateau
This is a simple function which reads the height of the central grid point and aligns all other hovered grid points to it.
float center_height = terrain->m_grid_points[m_hovered_gridpoint_index].GetFirst().position.z;
for (auto& index : m_point_indices)
{
for (int i = 0; i < terrain->m_grid_points[index].vertices.size(); i++)
terrain->m_grid_points[index].vertices[i].get().position.z = center_height;
}
Smooth
For Smoothing, I initially used a simple averaging algorithm to apply a constant offset to each vertex in the brush's radius. However, that didn't make for very appealing shapes so I opted for a Laplacian Distribution later down the line.
The algorithm takes the average height of the current point and all of its neighbors. It then interpolates between the current point's height and the neighbor average we just calculated. The interpolation is controlled by the brush's intensity which is an exposed parameter. Here is my implementation:
int v_width = terrain->m_width + 1;
for (const auto index : m_point_indices)
{
    auto grid_point_pos = terrain->m_grid_points[index].vertices[0].get().position;
    float smoothed_height = grid_point_pos.z;
    int neighbor_indices[4] = {index + v_width, index - v_width, index - 1, index + 1}; // up, down, left, right
    // sum the heights of the valid neighbors
    for (const auto neighbor_index : neighbor_indices)
    {
        ivec2 neighbor_gridpoint_coord = ivec2(neighbor_index % v_width, neighbor_index / v_width);
        // check if the neighbor is within the grid
        if (neighbor_gridpoint_coord.x >= 0 && neighbor_gridpoint_coord.x < v_width &&
            neighbor_gridpoint_coord.y >= 0 && neighbor_gridpoint_coord.y < terrain->m_height + 1)
        {
            smoothed_height += terrain->m_grid_points[neighbor_index].vertices[0].get().position.z;
        }
    }
    // average the current height with the four neighbors
    smoothed_height /= 5.0f;
    for (size_t i = 0; i < terrain->m_grid_points[index].vertices.size(); i++)
        terrain->m_grid_points[index].vertices[i].get().position.z =
            glm::mix(grid_point_pos.z, smoothed_height, Remap(0.0f, m_max_intensity * deltaTime, 0.0f, 1.0f, m_intensity * deltaTime));
}
Overall, it is a fairly simple algorithm, but it achieves solid results. In the end, we get a look very similar to Dawn of War's, which was our goal for this feature. This is a win in my book!
The texture varies across the terrain, and as the video shows, we need to be able to apply and remove this "highlight" texture (in this case, it's just grass). All of this is done by calculating the UVs so that every cell on the grid matches a tile from the texture atlas.
This 512x512 atlas is the only thing we need to texture an infinitely large terrain.
But how do we know which grass tile to draw? Every vertex has a type. When we use the Highlight brush, we essentially change the vertices' type.
enum VertexType
{
ground_dirt = 0,
ground_grass = 1,
cliff_dirt = 2, //TODO
cliff_grass = 3, //TODO
};
Then we take the tiles affected by the change and update their UVs by looking up the vertices' types.
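The core of that lookup is mapping a tile index to a UV rectangle in the atlas. A sketch of the principle, assuming a square layout of tiles per row (the 4x4 layout here is my assumption about the 512x512 atlas, not its actual arrangement):

```cpp
#include <array>

struct Vec2 { float x, y; };

// Return the UV corners (min, max) of a tile in a square
// tiles_per_row x tiles_per_row texture atlas.
std::array<Vec2, 2> TileToUV(int tile_index, int tiles_per_row)
{
    const float tile_size = 1.0f / static_cast<float>(tiles_per_row);
    const int col = tile_index % tiles_per_row;
    const int row = tile_index / tiles_per_row;
    const Vec2 uv_min = {col * tile_size, row * tile_size};
    const Vec2 uv_max = {uv_min.x + tile_size, uv_min.y + tile_size};
    return {uv_min, uv_max};
}
```

A cell's four unique vertices then receive the four corners of that rectangle, which is exactly why each quad owns its vertices instead of sharing them.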
Since I am using atlases identical to WarCraft III's, I must also use relatively similar logic. Their whole editor is based on vertices/grid points rather than cells, which is why I built my map maker the same way.
Alongside the main editing toolset, I also developed some quality-of-life features.
Reload Terrain
The user has the ability to create a map of any size they wish. In order to do this, I simply recalculate all the vertex data.
Atlas Swapping
In order to do this, I simply import all atlases into Bee's Resource Manager and keep an active reference to each so they won't be tagged as unused, which would unload them at the end of the frame. It is a simple but convenient feature, should the user decide to switch the "biome" on the fly.
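The "keep an active reference" idea can be sketched with shared ownership. Everything here - the Texture type, the cache class, the loading stub - is a hypothetical stand-in for Bee's actual Resource Manager:

```cpp
#include <memory>
#include <string>
#include <unordered_map>

// Hypothetical texture resource
struct Texture { std::string name; };

// Holds a strong reference to every imported atlas so a resource system
// that unloads unreferenced assets never sees them as unused.
class AtlasCache
{
public:
    std::shared_ptr<Texture> GetOrLoad(const std::string& name)
    {
        auto it = m_atlases.find(name);
        if (it != m_atlases.end()) return it->second;
        auto tex = std::make_shared<Texture>(Texture{name});  // real loading would happen here
        m_atlases[name] = tex;  // keep the reference alive for the app's lifetime
        return tex;
    }

private:
    std::unordered_map<std::string, std::shared_ptr<Texture>> m_atlases;
};
```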
I laid the foundations for a great tool but there are still many features I want to implement.
A Cliffing (or Layering) tool which adds low and high ground, similar to WarCraft III and other Blizzard RTS games.
NOTE: In my original blog, this list was bigger. However, some of the features, like the more advanced terraforming algorithms, were implemented so now only cliffing remains! :)
...I can confidently say that this project has come a long way. And with it, I had great opportunities to tackle new challenges and improve my skills. The most important quality of this Map Maker is that it can be used by designers in a real development scenario! The journey is far from over, and I plan to continue development, either as part of my course or in my free time. Mesh manipulation, especially in the context of game environments, is a fascinating field, and the skills I obtained by just scratching the surface will prove invaluable for any future project I work on.