Last time, we talked about IndexBuffers, and how they can provide a more efficient way to render objects. They are also instrumental in understanding the Mesh object (remember Meshes?), which often plays a big role in scene generation. In particular, you must understand IndexBuffers if you want to generate your own Meshes.
But why would I want to generate my own Mesh? Isn't it easier just to load one from a .x file? Aren't there thousands of Meshes already available for free and for purchase, and can't I get converters that export from my favorite rendering tool?
Of course, it is easier to simply load an existing Mesh from a file. But there are times when what you're trying to display can't be recorded statically like that. For example, if you're rendering some sort of 3D visualization plugin for an MP3 player, you may want to generate shapes based on the music that's playing. Or for scientific applications, you might want to draw a particular type of curve that has been fit to some measured data. In each of these cases, it's usually easier to generate a Mesh object on the fly than it is to create a corresponding .x file and load that.
Another question you might ask is, "OK – since the data is dynamic, why don't I just render it myself using DrawIndexedPrimitives rather than relying on a Mesh?" Again, you'd have a good point: there's nothing you can do with a Mesh that you can't do yourself working with VertexBuffers and IndexBuffers directly. In fact, this is true of all the functionality in the Microsoft.DirectX.Direct3DX (D3DX) namespace – all of it is built up from the core functionality in Microsoft.DirectX.Direct3D, and adds no “magic”. Of course, it does provide a whole lot of code that we would otherwise have to write ourselves, and that's a Good Thing.
So, now that we've convinced ourselves that it might be desirable to generate a Mesh programmatically, let's take a look at what's involved.
The first thing to do is actually create the Mesh object. As you might expect, we do this by invoking the Mesh constructor, like so:
// Create new cube mesh with:
mesh = new Mesh(
    12,                      // 12 faces
    8,                       // 8 vertices
    0,                       // no flags
    VertexFormats.Position,  // Position information only
    device);
The parameters here are fairly self-explanatory – the number of faces, the number of vertices that make up those faces, a bunch of flags that we'll cover at a later date, a VertexFormat (remember VertexFormats?), and of course the ubiquitous Device. In this case, we're including only position data to keep it simple, but we could have included data about normals, textures, materials, or other attributes of our shape.
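For example, if we wanted lighting to work properly, we'd need normals, which means changing both the vertex format flags and the vertex type. Here's a minimal sketch of what that constructor call might look like (not something we use in this article's example):

// Sketch: a mesh that stores normals as well as positions, so that
// directional lighting could be calculated (not used in our example)
mesh = new Mesh(
    12,                                             // 12 faces
    8,                                              // 8 vertices
    0,                                              // no flags
    VertexFormats.Position | VertexFormats.Normal,  // position + normal
    device);
// The vertices would then be CustomVertex.PositionNormal instead of
// CustomVertex.PositionOnly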
Next we're going to need an IndexBuffer and a VertexBuffer that describe the shape of our Mesh. We'll go ahead and use the cube from our example last time, which means we have eight vertices (the corners) and 36 entries in our index buffer (six faces, two triangles each, three vertices per triangle). We've already talked about how to create an indexed cube, so I won't walk through that code in detail, although it's included in the complete listing at the bottom of this article.
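As a quick reminder, though, the data has this shape (abbreviated here; the full arrays appear in the listing at the end):

// Eight corners of the cube, position only (abbreviated)
CustomVertex.PositionOnly[] vertices = new CustomVertex.PositionOnly[]
{
    new CustomVertex.PositionOnly(-1, -1, -1), // 0: left, bottom, front
    new CustomVertex.PositionOnly( 1, -1, -1), // 1: right, bottom, front
    // ... six more corners ...
};

// 36 indices: six faces, two triangles each, three vertices per triangle
short[] indices = new short[]
{
    2, 6, 4, // first left-face triangle
    4, 0, 2, // second left-face triangle
    // ... ten more triangles ...
};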
With the VertexBuffer and the IndexBuffer created, the next thing we need to do is to associate them with the Mesh that will be managing them. That code looks like this:
mesh.SetVertexBufferData(vertices, LockFlags.None);
mesh.SetIndexBufferData(indices, LockFlags.None);
Again, this code is pretty straightforward: we simply pass in the appropriate array of vertex or index data. The second parameter has to do with memory management, and doesn't really apply to us right now.
So far, so good. And in fact our Mesh object is now usable as-is. But there's one more thing we can do which is both easy and beneficial: we can optimize our Mesh. It turns out to be more efficient for Direct3D to render our Mesh object if the vertices and indices have been arranged carefully.
Allow me to explain. Each of the vertices in our cube is used to render at least three and as many as six different faces (think about it). If a vertex is used in multiple faces like this, it probably makes sense to render all of the faces that the vertex belongs to as close together as possible, because that means the data for that vertex is more likely to be in cache memory. If we were to just sort of randomly wander around the object, rendering a face here and a face there, vertices that hadn't been used in a while might fall out of cache memory, and would need to be fetched back into it. So certain rendering orders are faster than others.
The good news is, you don't have to do this potentially complicated computation yourself. The Mesh class has built-in functionality that will do it for us. Here's what the code for optimization looks like:
mesh.OptimizeInPlace(MeshFlags.OptimizeVertexCache, adjacency);
There's another variant of this method call – Mesh.Optimize – that returns a new Mesh. Here we're just optimizing the one we have in-place.
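The in-place version is what we want here, but for reference, the copying variant would look something like this (a sketch, assuming the array-based overload of Optimize):

// Sketch: produce an optimized copy, leaving the original mesh untouched
Mesh optimizedMesh = mesh.Optimize(MeshFlags.OptimizeVertexCache, adjacency);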
The two arguments to OptimizeInPlace indicate how much optimization we'd like (an advanced topic that I don't want to talk about right now – MeshFlags.OptimizeVertexCache is the documentation-recommended value), and something called adjacency information. Adjacency information simply tells the Mesh which faces share edges with which others. It takes the form of an array of ints, grouped by threes. Each group of three represents one of the triangles in the Mesh, and gives the numbers of the three triangles that this triangle shares an edge with – which ones it's adjacent to, in other words. The number -1 is used to describe an edge that's shared with no other triangle.
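To make the layout concrete, here's an illustrative fragment; the neighbor numbers below are invented, not the real values for our cube:

// Entries 0-2 describe triangle 0's neighbors, entries 3-5 triangle 1's,
// and so on. Values here are invented for illustration.
adjacency[0] = 1;  // triangle 0's first edge is shared with triangle 1
adjacency[1] = 8;  // its second edge is shared with triangle 8
adjacency[2] = 5;  // its third edge is shared with triangle 5
// An entry of -1 would mean that edge borders no other triangle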
If it sounds like it might be a little tough to calculate adjacency, don't worry: it's not. There's a handy helper method – Mesh.GenerateAdjacency – that does it for you. You call it like this:
int[] adjacency = new int[mesh.NumberFaces * 3];
mesh.GenerateAdjacency(0.01F, adjacency);
The array we use needs to be big enough to hold three numbers for each face. Fortunately, the Mesh.NumberFaces property makes this easy to calculate. The call to GenerateAdjacency takes this array and populates it with the appropriate adjacency information.
Note that GenerateAdjacency takes an additional parameter, too. This is the epsilon for the adjacency information, and it's there to take into account that floating point numbers are notoriously inaccurate when dealing with small differences. Therefore, instead of using exact equality when comparing the positions of two vertices, GenerateAdjacency checks whether the values are within the epsilon that you provide. If they are, the two vertices are considered to be the same. This is convenient when you calculate positions based on some formula or external data, where two vertices might wind up with very slightly different positions because of rounding error, even though they're supposed to be in the same place.
Here I've chosen 0.01F for my epsilon, even though because I set up my cube by hand I know that I don't have two vertices that represent the same location. You should choose a value for epsilon based on your knowledge of your data.
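To illustrate the kind of situation epsilon handles (the values here are invented):

// Hypothetical: two corners that are supposed to coincide, but have been
// nudged apart by rounding somewhere upstream
CustomVertex.PositionOnly a = new CustomVertex.PositionOnly(1.0F,       0, 0);
CustomVertex.PositionOnly b = new CustomVertex.PositionOnly(1.0000001F, 0, 0);

// Since the difference (0.0000001) is well under our epsilon of 0.01F,
// GenerateAdjacency treats a and b as the same vertex when deciding
// which triangles share edges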
At this point, we're basically done. We've generated our Mesh and optimized it. About the only thing left to do is to figure out how many subsets it has. Remember, subsets are parts of a Mesh that are all drawn together because they have common information, like a texture or a material. Getting the number of subsets our mesh has is easy:
numSubSets = mesh.GetAttributeTable().Length;
We'll use this number in our Render method when we loop over the subsets in the mesh, rendering each one.
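Here's that loop, exactly as it appears in the Render method of the full listing:

// Draw each subset of the mesh in turn
for (int i = 0; i < numSubSets; ++i)
{
    mesh.DrawSubset(i);
}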
The code to render the Mesh that we've created is essentially the same as the code to render the Mesh that we loaded from a file. That is, we use Mesh.DrawSubset to get the triangles on the screen. The only thing I've done substantially differently is to add some code to also draw a wireframe version of the cube, so you can see the triangles it's made up of. That's just a matter of rendering the cube twice, once with
device.RenderState.FillMode = FillMode.WireFrame;
And once with
device.RenderState.FillMode = FillMode.Solid;
which is the normal, default setting. Additionally, when rendering the solid version of the cube, I've given it a z-bias using this code:
device.RenderState.DepthBias = 0.1F;
A z-bias is just a small number that's added to the view space z value of every pixel. Remember, in a left-handed coordinate system, a bigger value for z in view space means “farther away from the viewer.” It's needed in our case because both the wireframe and the solid version of our cube are being drawn in exactly the same place. Since we have z-buffering enabled (remember z-buffers?), and because floating point math can result in small variations between numbers that are supposed to be the same, without a z-bias, parts of the wireframe would disappear “behind” the solid cube. Remove the depth bias code and you'll see what I mean.
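Condensed from the full listing below, the two passes look like this:

// Pass 1: the white wireframe, drawn at the true depth
device.RenderState.FillMode = FillMode.WireFrame;
SetupMaterials(Color.White);
for (int i = 0; i < numSubSets; ++i)
{
    mesh.DrawSubset(i);
}

// Pass 2: the solid blue cube, pushed slightly "away" by the depth bias
// so the wireframe always wins ties in the z-buffer
device.RenderState.FillMode = FillMode.Solid;
SetupMaterials(Color.Blue);
device.RenderState.DepthBias = 0.1F;
for (int i = 0; i < numSubSets; ++i)
{
    mesh.DrawSubset(i);
}
device.RenderState.DepthBias = 0F; // back to the default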
As you can see, creating your own Mesh objects dynamically isn't much harder than loading them from disk. In fact, the hardest part by far lies in creating the vertex data, a task you'd need to do anyway if you weren't using a Mesh. As usual, I've included a complete program at the end of this article. The identical code can be downloaded as a convenient zip file here.
Next time, we'll talk about how to work with text: both 2D and 3D rendering of fonts.
using System;
using System.Drawing;
using System.Windows.Forms;
using System.Diagnostics;
using Microsoft.DirectX;
using Microsoft.DirectX.Direct3D;

namespace Craig.Direct3D
{
    public class Game : System.Windows.Forms.Form
    {
        static void Main()
        {
            Game app = new Game();
            app.Text = "Creating a Mesh";
            app.InitializeGraphics();
            app.Show();

            while (app.Created)
            {
                app.Render();
                Application.DoEvents();
            }
        }

        private Device device;
        private Mesh mesh;
        private int numSubSets;

        // Has the device been lost and not reset?
        private bool deviceLost;

        // We'll need these to Reset successfully, so hold them here
        private PresentParameters pres = new PresentParameters();

        protected bool InitializeGraphics()
        {
            // Set up our presentation parameters as usual
            pres.Windowed = true;
            pres.SwapEffect = SwapEffect.Discard;
            pres.AutoDepthStencilFormat = DepthFormat.D16;
            pres.EnableAutoDepthStencil = true;

            device = new Device(0, DeviceType.Hardware, this,
                CreateFlags.SoftwareVertexProcessing, pres);

            // Hook the DeviceReset event so OnDeviceReset will get called every
            // time we call device.Reset()
            device.DeviceReset += new EventHandler(this.OnDeviceReset);

            // Similarly, OnDeviceLost will get called every time we call
            // device.Reset(). The difference is that DeviceLost gets called
            // earlier, giving us a chance to do the cleanup that needs to
            // occur before we can call Reset() successfully
            device.DeviceLost += new EventHandler(this.OnDeviceLost);

            // Do the initial setup of our graphics objects
            SetupDevice();

            return true;
        }

        protected void OnDeviceReset(object sender, EventArgs e)
        {
            // We use the same setup code to reset as we do for initial creation
            SetupDevice();
        }

        protected void OnDeviceLost(object sender, EventArgs e)
        {
        }

        protected void SetupDevice()
        {
            SetupLights();

            device.RenderState.ZBufferEnable = true;

            // And create the graphical objects
            CreateObjects(device);
        }

        protected void SetupLights()
        {
            device.RenderState.Lighting = true;
            device.RenderState.Ambient = Color.White;
        }

        protected void SetupMaterials(Color color)
        {
            Material mat = new Material();

            // Since we haven't set up any normals for the object,
            // we're stuck with ambient lighting
            mat.Ambient = color;
            device.Material = mat;
        }

        protected void CreateObjects(Device device)
        {
            // Create new cube mesh with:
            mesh = new Mesh(
                12,                      // 12 faces
                8,                       // 8 vertices
                0,                       // no flags
                VertexFormats.Position,  // Position information only
                device);

            // Set up the 8 corners of the cube
            float front  = -1;
            float back   =  1;
            float left   = -1;
            float right  =  1;
            float top    =  1;
            float bottom = -1;

            CustomVertex.PositionOnly[] vertices = new CustomVertex.PositionOnly[]
            {
                new CustomVertex.PositionOnly(left , bottom, front), // 0
                new CustomVertex.PositionOnly(right, bottom, front), // 1
                new CustomVertex.PositionOnly(left , top   , front), // 2
                new CustomVertex.PositionOnly(right, top   , front), // 3
                new CustomVertex.PositionOnly(left , bottom, back ), // 4
                new CustomVertex.PositionOnly(right, bottom, back ), // 5
                new CustomVertex.PositionOnly(left , top   , back ), // 6
                new CustomVertex.PositionOnly(right, top   , back )  // 7
            };

            short leftbottomfront  = 0;
            short rightbottomfront = 1;
            short lefttopfront     = 2;
            short righttopfront    = 3;
            short leftbottomback   = 4;
            short rightbottomback  = 5;
            short lefttopback      = 6;
            short righttopback     = 7;

            // Set up the index information for the 12 faces
            short[] indices = new short[]
            {
                // Left faces
                lefttopfront, lefttopback, leftbottomback,          // 0
                leftbottomback, leftbottomfront, lefttopfront,      // 1
                // Front faces
                lefttopfront, leftbottomfront, rightbottomfront,    // 2
                rightbottomfront, righttopfront, lefttopfront,      // 3
                // Right faces
                righttopback, righttopfront, rightbottomfront,      // 4
                rightbottomfront, rightbottomback, righttopback,    // 5
                // Back faces
                leftbottomback, lefttopback, righttopback,          // 6
                righttopback, rightbottomback, leftbottomback,      // 7
                // Top faces
                righttopfront, righttopback, lefttopback,           // 8
                lefttopback, lefttopfront, righttopfront,           // 9
                // Bottom faces
                leftbottomfront, leftbottomback, rightbottomback,   // 10
                rightbottomback, rightbottomfront, leftbottomfront  // 11
            };

            mesh.SetVertexBufferData(vertices, LockFlags.None);
            mesh.SetIndexBufferData(indices, LockFlags.None);

            int[] adjacency = new int[mesh.NumberFaces * 3];
            mesh.GenerateAdjacency(0.01F, adjacency);
            mesh.OptimizeInPlace(MeshFlags.OptimizeVertexCache, adjacency);

            numSubSets = mesh.GetAttributeTable().Length;
        }

        protected void SetupMatrices()
        {
            float angle = Environment.TickCount / 500.0F;
            device.Transform.World =
                Matrix.RotationYawPitchRoll(angle, angle / 3.0F, 0);
            device.Transform.View = Matrix.LookAtLH(
                new Vector3(0, 0.5F, -1000),
                new Vector3(0, 0.5F, 0),
                new Vector3(0, 1, 0));
            device.Transform.Projection = Matrix.PerspectiveFovLH(
                (float)Math.PI / 400.0F, 1.0F, 900.0F, 1100.0F);
        }

        protected void Render()
        {
            if (deviceLost)
            {
                // Try to get the device back
                AttemptRecovery();
            }

            // If we couldn't get the device back, don't try to render
            if (deviceLost)
            {
                return;
            }

            // Clear the back buffer
            device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.Black, 1.0F, 0);

            // Ready Direct3D to begin drawing
            device.BeginScene();

            // Set the Matrices
            SetupMatrices();

            // Draw it again in wireframe, in white
            device.RenderState.FillMode = FillMode.WireFrame;
            SetupMaterials(Color.White);

            // Draw the wireframe cube
            for (int i = 0; i < numSubSets; ++i)
            {
                mesh.DrawSubset(i);
            }

            // Draw the cube in blue
            device.RenderState.FillMode = FillMode.Solid;
            SetupMaterials(Color.Blue);

            // Since the cube and the wireframe cube are at exactly
            // the same z depth, it's not clear which one will be
            // allowed to render - so we cheat by adding a little bit
            // to the z-value of the cube, ensuring that the wireframe
            // will always render over the cube instead of vice-versa.
            device.RenderState.DepthBias = 0.1F;

            // Draw the cube
            for (int i = 0; i < numSubSets; ++i)
            {
                mesh.DrawSubset(i);
            }

            device.RenderState.DepthBias = 0F;

            // Indicate to Direct3D that we're done drawing
            device.EndScene();

            try
            {
                // Copy the back buffer to the display
                device.Present();
            }
            catch (DeviceLostException)
            {
                // Indicate that the device has been lost
                deviceLost = true;
            }
        }

        protected void AttemptRecovery()
        {
            int res;
            device.CheckCooperativeLevel(out res);
            ResultCode rc = (ResultCode)res;

            if (rc == ResultCode.DeviceLost)
            {
            }
            else if (rc == ResultCode.DeviceNotReset)
            {
                try
                {
                    device.Reset(pres);
                    deviceLost = false;
                }
                catch (DeviceLostException)
                {
                    // If it's still lost or lost again, just do
                    // nothing
                }
            }
        }
    }
}