Introduction to OpenGL 3.1 - Tutorial 01

In the beginning there was only emptiness...

This is my little donation to fill it up. :o)

It is hard to write a tutorial about new and difficult stuff that is still simple enough for newcomers. In order to keep this tutorial as short as possible, I will assume that you have already programmed in OpenGL, and I will emphasize only the differences between OpenGL 2.1 and OpenGL 3.0/3.1. If more details are needed, just ask for them! :o)

This tutorial is based on Rosario Leonardi's forum post. The problem with Leonardi's code is that it cannot be executed "out of the box": it crashes when glDrawArrays() is called, because the vertex attribute arrays are not properly bound to VBOs. Although the code is clear and logical, the drivers I had to deal with simply didn't share that viewpoint. So, the code has been changed a little bit and organized into appropriate classes.

Dealing with OpenGL 3.1 is hard enough, so I'll skip the gymnastics with OpenGL extensions and use the OpenGL Extension Wrangler Library (GLEW). GLEW is a cross-platform, open-source C/C++ extension loading library that can be downloaded freely from its web site. The following snippet of code includes support for GLEW, and should be placed somewhere in your code. If you are building a Visual Studio MFC application, which I recommend, the best place for it is somewhere at the end of the stdafx.h file.

//--- OpenGL---
#include "glew.h"
#include "wglew.h"
#pragma comment(lib, "glew32.lib")
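
GLEW also exposes a flag for every extension it knows about, so once glewInit() has been called (it is called in CreateGLContext() below) you can check whether the entry point we rely on is really available before using it. A minimal sketch of such a check (the message text is mine):

     // After glewInit() has succeeded:
     if (!WGLEW_ARB_create_context || wglCreateContextAttribsARB == NULL)
     {
          // The driver cannot create an OpenGL 3.x context the "new" way,
          // so only the classic wglCreateContext() path is available.
          AfxMessageBox(_T("WGL_ARB_create_context is not supported!"));
     }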

We will start with the creation of the class CGLRenderer. This class gathers together all OpenGL-related code. My students will recognize the functions I insisted on during the lectures. The header file is the same as in good old OpenGL 2.1, but the implementation is severely changed.

class CGLRenderer
{
public:
     CGLRenderer(void);
     virtual ~CGLRenderer(void);
     bool CreateGLContext(CDC* pDC);       // Creates OpenGL Rendering Context
     void PrepareScene(CDC* pDC);          // Scene preparation stuff
     void Reshape(CDC* pDC, int w, int h); // Changing viewport
     void DrawScene(CDC* pDC);             // Draws the scene
     void DestroyScene(CDC* pDC);          // Cleanup
protected:
     void SetData();                       // Creates VAOs and VBOs and fills them with data
     HGLRC m_hrc;                          // OpenGL Rendering Context
     CGLProgram* m_pProgram;               // program
     CGLShader* m_pVertSh;                 // vertex shader
     CGLShader* m_pFragSh;                 // fragment shader
     unsigned int m_vaoID[2];              // two vertex array objects, one for each drawn object
     unsigned int m_vboID[3];              // three VBOs
};
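
For orientation, this is how these functions are typically driven from an MFC view class. The sketch below is only an illustration under my assumptions; the view class CMyView and its m_pRenderer member are hypothetical and not part of this tutorial's code:

     // In the view's OnCreate(): create the rendering context and prepare the scene once
     int CMyView::OnCreate(LPCREATESTRUCT lpCreateStruct)
     {
          if (CView::OnCreate(lpCreateStruct) == -1) return -1;
          CDC* pDC = GetDC();
          m_pRenderer->CreateGLContext(pDC);    // m_pRenderer is a CGLRenderer*
          m_pRenderer->PrepareScene(pDC);
          ReleaseDC(pDC);
          return 0;
     }
     // OnSize() then calls Reshape(), OnDraw()/OnPaint() calls DrawScene(),
     // and OnDestroy() calls DestroyScene().
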
First we have to create an OpenGL Rendering Context. This is the task of the CreateGLContext() function.
bool CGLRenderer::CreateGLContext(CDC* pDC)
{
     PIXELFORMATDESCRIPTOR pfd;
     memset(&pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
     pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
     pfd.nVersion = 1;
     pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL;
     pfd.iPixelType = PFD_TYPE_RGBA;
     pfd.cColorBits = 32;
     pfd.cDepthBits = 32;
     pfd.iLayerType = PFD_MAIN_PLANE;
     int nPixelFormat = ChoosePixelFormat(pDC->m_hDC, &pfd);
     if (nPixelFormat == 0) return false;
     BOOL bResult = SetPixelFormat(pDC->m_hDC, nPixelFormat, &pfd);
     if (!bResult) return false;
     // Create a temporary (OpenGL 2.1) rendering context and make it current
     HGLRC tempContext = wglCreateContext(pDC->m_hDC);
     wglMakeCurrent(pDC->m_hDC, tempContext);
     // Initialize GLEW while the temporary context is active
     GLenum err = glewInit();
     if (GLEW_OK != err)
     {
          AfxMessageBox(_T("GLEW is not initialized!"));
     }
     // Attributes for the new OpenGL 3.1 rendering context
     int attribs[] =
     {
          WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
          WGL_CONTEXT_MINOR_VERSION_ARB, 1,
          WGL_CONTEXT_FLAGS_ARB, 0,
          0
     };
     m_hrc = wglCreateContextAttribsARB(pDC->m_hDC, 0, attribs);
     // The temporary context is no longer needed
     wglMakeCurrent(NULL, NULL);
     wglDeleteContext(tempContext);
     if (!m_hrc) return false;
     return true;
}
Choosing and setting the pixel format are the same as in previous versions of OpenGL. The new tricks that have to be done are:
 - Create a standard OpenGL (2.1) rendering context, which will be used only temporarily (tempContext), and make it current:
 HGLRC tempContext = wglCreateContext(pDC->m_hDC);
 wglMakeCurrent(pDC->m_hDC, tempContext);
 - Initialize GLEW:
 GLenum err = glewInit();
 - Set up the attributes for a brand new OpenGL 3.1 rendering context:
 int attribs[] =
 {
      WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
      WGL_CONTEXT_MINOR_VERSION_ARB, 1,
      WGL_CONTEXT_FLAGS_ARB, 0,
      0
 };
 - Create the new rendering context:
 m_hrc = wglCreateContextAttribsARB(pDC->m_hDC, 0, attribs);
 - Deactivate and delete tempContext:
 wglMakeCurrent(NULL, NULL);
 wglDeleteContext(tempContext);

Have you noticed something odd in this initialization? In order to create a new OpenGL rendering context you have to call the function wglCreateContextAttribsARB(), which is itself an OpenGL (WGL) extension function and requires an active OpenGL rendering context at the moment it is called. How can we fulfill this when we are just at the beginning of creating an OpenGL rendering context? The only way is to create an old-style context, activate it, and, while it is active, create the new one. Very inconsistent, but we have to live with it!

In this example, we've created an OpenGL 3.1 rendering context (the major version is set to 3, and the minor to 1). Currently, it requires NVidia's ForceWare 182.52 or 190.38 drivers (oops, since version 190.38 the new name of the NVidia display drivers is GeForce/ION). If you don't have new drivers, or if creation fails for any other reason, try to change the minor version to 0. An OpenGL 3.0 rendering context can be created with NVidia's ForceWare 181.00 drivers or newer, or with ATI Catalyst 9.1 drivers or newer.
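
The fallback to OpenGL 3.0 mentioned above can also be automated. A minimal sketch, assuming the attribs array from CreateGLContext(), where attribs[3] holds the value of WGL_CONTEXT_MINOR_VERSION_ARB:

     // Try to create a 3.1 context first; if that fails, retry with 3.0
     m_hrc = wglCreateContextAttribsARB(pDC->m_hDC, 0, attribs);
     if (!m_hrc)
     {
          attribs[3] = 0;   // request minor version 0 instead of 1
          m_hrc = wglCreateContextAttribsARB(pDC->m_hDC, 0, attribs);
     }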

After we have created the rendering context, the next step is to prepare the scene. In the function PrepareScene() we do whatever has to be done just once, before the scene is drawn for the first time.

void CGLRenderer::PrepareScene(CDC *pDC)
{
     wglMakeCurrent(pDC->m_hDC, m_hrc);
     glClearColor (1.0, 1.0, 1.0, 0.0);
     m_pProgram = new CGLProgram();
     m_pVertSh = new CGLShader(GL_VERTEX_SHADER);
     m_pFragSh = new CGLShader(GL_FRAGMENT_SHADER);
     // ... load and compile the two shaders and attach them to the program,
     // using the CGLProgram/CGLShader wrapper classes (omitted here) ...
     m_pProgram->BindAttribLocation(0, "in_Position");
     m_pProgram->BindAttribLocation(1, "in_Color");
     // ... link and activate the program, and call SetData() (omitted here) ...
     wglMakeCurrent(NULL, NULL);
}

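The CGLProgram and CGLShader wrapper classes are not listed in this tutorial. If you want to see what they have to do underneath, here is a minimal sketch of the equivalent raw OpenGL calls; the variable names and the LoadFile() helper are mine, not part of the wrappers:

     // Hypothetical raw-GL equivalent of the shader setup done in PrepareScene()
     GLuint vs = glCreateShader(GL_VERTEX_SHADER);
     GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
     const char* vsSrc = LoadFile("minimal.vert");   // LoadFile() is a hypothetical helper
     const char* fsSrc = LoadFile("minimal.frag");
     glShaderSource(vs, 1, &vsSrc, NULL);
     glShaderSource(fs, 1, &fsSrc, NULL);
     glCompileShader(vs);
     glCompileShader(fs);
     GLuint prog = glCreateProgram();
     glAttachShader(prog, vs);
     glAttachShader(prog, fs);
     glBindAttribLocation(prog, 0, "in_Position");   // must happen before linking
     glBindAttribLocation(prog, 1, "in_Color");
     glLinkProgram(prog);
     glUseProgram(prog);
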
The vertex shader is very simple. It just passes the input values to the output, and converts a vec3 to a vec4. Constructors are the same as in previous versions of GLSL. The main difference with regard to GLSL 1.2 is that there are no more attribute and varying qualifiers for variables inside shaders. Attribute variables are now in(put) variables, and varying variables are out(put) variables of the vertex shader. Uniforms stay the same.

// Vertex Shader – file "minimal.vert"
#version 140
in vec3 in_Position;
in vec3 in_Color;
out vec3 ex_Color;
void main(void)
{
     gl_Position = vec4(in_Position, 1.0);
     ex_Color = in_Color;
}

The fragment shader is even simpler. Varying variables in fragment shaders are now declared as in variables. Take care that the name of the in(put) variable in the fragment shader must be the same as the name of the out(put) variable in the vertex shader.

// Fragment Shader – file "minimal.frag"
#version 140
precision highp float;
in vec3 ex_Color;
out vec4 out_Color;
void main(void)
{
     out_Color = vec4(ex_Color, 1.0);
}

If you have a problem compiling the shaders' code (because OpenGL 3.1 is not supported), just change the version number. Instead of 140, put 130. These shaders are so simple that the code is the same in GLSL version 1.3 and version 1.4.
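
If the compilation still fails, the driver's info log usually tells you why (for example, an unsupported #version). A minimal sketch of how to retrieve it, assuming shader holds an object created with glCreateShader():

     GLint status = GL_FALSE;
     glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
     if (status != GL_TRUE)
     {
          char log[1024];
          GLsizei len = 0;
          glGetShaderInfoLog(shader, sizeof(log), &len, log);
          // log now contains the compiler's error messages
     }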

|> Stéphane Denis from realtech VR has reported that the precision must be defined
|> in order to compile the fragment shader with ATI Catalyst drivers (with GL 3.0 support).
|> Thank you, Stéphane!

Technical details: Let's see what "precision" means. Precision qualifiers are a feature taken over from OpenGL ES; they enable the shader author to specify the precision with which computations on a shader variable are performed. Variables can be declared to have either low (lowp), medium (mediump), or high (highp) precision. For example (OpenGL ES 2.0):

      varying highp vec3 ex_Color;

According to the latest GLSL specification (ver. 1.40.07, section 4.5, page 35 - Precision and Precision Qualifiers):

"Precision qualifiers are added for code portability with OpenGL ES, not for functionality. They have the same syntax as in OpenGL ES, as described below, but they have no semantic meaning, which includes no effect on the precision used to store or operate on variables."

Section 4.5.3 - Default Precision Qualifiers:

"The precision statement precision precision-qualifier type;can be used to establish a default precision qualifier. The type field can be either int or float, and the precision-qualifier can be lowp, mediump, or highp."

"The vertex language has the following predeclared globally scoped default precision statements: precision highp float; precision highp int;The fragment language has the following predeclared globally scoped default precision statements: precision mediump int; precision highp float;"

So, according to the GLSL 1.40 spec, we don't need to define a default precision, especially since it has no meaning for the GLSL compiler.

The function SetData() creates the VAOs and VBOs and fills them with data.

void CGLRenderer::SetData()
{
     // First simple object
     float* vert = new float[9]; // vertex array
     float* col = new float[9];  // color array
     vert[0] =-0.3; vert[1] = 0.5; vert[2] =-1.0;
     vert[3] =-0.8; vert[4] =-0.5; vert[5] =-1.0;
     vert[6] = 0.2; vert[7] =-0.5; vert[8] =-1.0;
     col[0] = 1.0; col[1] = 0.0; col[2] = 0.0;
     col[3] = 0.0; col[4] = 1.0; col[5] = 0.0;
     col[6] = 0.0; col[7] = 0.0; col[8] = 1.0;
     // Second simple object
     float* vert2 = new float[9]; // vertex array
     vert2[0] =-0.2; vert2[1] = 0.5; vert2[2] =-1.0;
     vert2[3] = 0.3; vert2[4] =-0.5; vert2[5] =-1.0;
     vert2[6] = 0.8; vert2[7] = 0.5; vert2[8] =-1.0;
     // Two VAOs allocation
     glGenVertexArrays(2, &m_vaoID[0]);
     // First VAO setup
     glBindVertexArray(m_vaoID[0]);
     glGenBuffers(2, m_vboID);
     glBindBuffer(GL_ARRAY_BUFFER, m_vboID[0]);
     glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), vert, GL_STATIC_DRAW);
     glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0);
     glEnableVertexAttribArray(0);
     glBindBuffer(GL_ARRAY_BUFFER, m_vboID[1]);
     glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), col, GL_STATIC_DRAW);
     glVertexAttribPointer((GLuint)1, 3, GL_FLOAT, GL_FALSE, 0, 0);
     glEnableVertexAttribArray(1);
     // Second VAO setup
     glBindVertexArray(m_vaoID[1]);
     glGenBuffers(1, &m_vboID[2]);
     glBindBuffer(GL_ARRAY_BUFFER, m_vboID[2]);
     glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), vert2, GL_STATIC_DRAW);
     glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0);
     glEnableVertexAttribArray(0);
     glBindVertexArray(0);
     delete [] vert;
     delete [] vert2;
     delete [] col;
}

Vertex buffer objects (VBOs) have been a familiar item since OpenGL version 1.5, but vertex array objects require more explanation. Vertex array objects (VAOs) encapsulate vertex array state on the client side. These objects allow applications to rapidly switch between large sets of array state. In addition, layered libraries can return to the default array state by simply creating and binding a new vertex array object. More about VAOs can be found in the ARB_vertex_array_object specification posted in the OpenGL Extension Registry.

A VAO saves the state of up to 16 attribute arrays (whether each of them is enabled, their sizes, strides, types, whether they are normalized, whether they contain unconverted integers, the vertex attribute array pointers, the element array buffer binding, and the attribute array buffer bindings). In order to test how it works, we will create two separate (simple) objects with different VAOs.
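
If you want to convince yourself that these settings really are stored per VAO, you can query the attribute state after binding each VAO. A minimal sketch (a debugging aid only, not part of the renderer):

     GLint boundVBO = 0;
     glBindVertexArray(m_vaoID[0]);
     glGetVertexAttribiv(0, GL_VERTEX_ATTRIB_ARRAY_BUFFER_BINDING, &boundVBO);
     // boundVBO is now m_vboID[0] (set in SetData() while the first VAO was bound)
     glBindVertexArray(m_vaoID[1]);
     glGetVertexAttribiv(0, GL_VERTEX_ATTRIB_ARRAY_BUFFER_BINDING, &boundVBO);
     // boundVBO is now m_vboID[2], because the second VAO keeps its own binding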

The Reshape() function just sets the viewport.

void CGLRenderer::Reshape(CDC *pDC, int w, int h)
{
     wglMakeCurrent(pDC->m_hDC, m_hrc);
     glViewport(0, 0, (GLsizei)w, (GLsizei)h);
     wglMakeCurrent(NULL, NULL);
}
DrawScene(), as its name implies, draws the scene.
void CGLRenderer::DrawScene(CDC *pDC)
{
     wglMakeCurrent(pDC->m_hDC, m_hrc);
     glClear(GL_COLOR_BUFFER_BIT);                // clear to the color set in PrepareScene()
     glBindVertexArray(m_vaoID[0]);               // select first VAO
     glDrawArrays(GL_TRIANGLES, 0, 3);            // draw first object
     glBindVertexArray(m_vaoID[1]);               // select second VAO
     glVertexAttrib3f((GLuint)1, 1.0, 0.0, 0.0);  // set constant color attribute
     glDrawArrays(GL_TRIANGLES, 0, 3);            // draw second object
     glBindVertexArray(0);
     glFlush();
     wglMakeCurrent(NULL, NULL);
}

As we can see, binding a VAO changes all vertex attribute array settings at once. But be very careful! If any vertex attribute array is disabled, the VAO loses its binding to the corresponding VBO. In that case, we have to call glBindBuffer() and glVertexAttribPointer() again. The specification says nothing about this behavior, but it is what we have to do with the current version of NVidia's drivers.
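
In other words, if an attribute array belonging to a VAO has been disabled, re-enabling it is not enough on the drivers described above. A minimal sketch of the workaround, using the first VAO and attribute 0 from SetData():

     glBindVertexArray(m_vaoID[0]);
     glEnableVertexAttribArray(0);                  // re-enable the attribute array...
     glBindBuffer(GL_ARRAY_BUFFER, m_vboID[0]);     // ...and restore its VBO binding
     glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0);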

And, at the end, we have to clean up the whole mess...

void CGLRenderer::DestroyScene(CDC *pDC)
{
     wglMakeCurrent(pDC->m_hDC, m_hrc);
     glDeleteBuffers(3, m_vboID);        // delete the three VBOs
     glDeleteVertexArrays(2, m_vaoID);   // delete the two VAOs
     delete m_pProgram; m_pProgram = NULL;
     delete m_pVertSh; m_pVertSh = NULL;
     delete m_pFragSh; m_pFragSh = NULL;
     wglMakeCurrent(NULL, NULL);
     if (m_hrc) { wglDeleteContext(m_hrc); m_hrc = NULL; }
}

All comments and questions can be sent to my e-mail.

Aleksandar Dimitrijević


