Head Slapper

Post date: Oct 02, 2008 6:24:37 PM

So I've got a lot of wrapper functions for OpenGL in my gl namespace, which lives inside my render namespace. A simple one might look like this:

inline static void scale(const float &x, const float &y, const float &z)
{
    glScalef(x, y, z);
}

So, instead of typing glScalef(x, y, z), I can just type gl::scale(x, y, z). That doesn't really save me any keystrokes, other than not having to hit the shift key, but I do get the benefit of IntelliSense auto-completion in Visual Studio, and if I do a using namespace gl, I can drop the gl:: entirely. Of course, I would never be satisfied with just one function, so really I end up writing a bunch of similar functions, in this case for scaling:

/////////

//Scale//

/////////

inline static void scale(const float &x, const float &y, const float &z) { glScalef(x, y, z); }

inline static void scale(const double &x, const double &y, const double &z) { glScaled(x, y, z); }

inline static void scale(const float &f) { glScalef(f, f, f); }

inline static void scale(const double &d) { glScaled(d, d, d); }

inline static void scale(const vec3<float> &v) { glScalef(v.x, v.y, v.z); }

inline static void scale(const vec3<double> &v) { glScaled(v.x, v.y, v.z); }

inline static void scale_x(const float &x) { glScalef(x, 1, 1); }

inline static void scale_x(const double &x) { glScaled(x, 1, 1); }

inline static void scale_y(const float &y) { glScalef(1, y, 1); }

inline static void scale_y(const double &y) { glScaled(1, y, 1); }

inline static void scale_z(const float &z) { glScalef(1, 1, z); }

inline static void scale_z(const double &z) { glScaled(1, 1, z); }

And this is already kinda dumb, because I should be calling gl::scale from all the other functions, so that when glScale gets deprecated and I have to implement my own version, I only have to change those first two overloads.

/////////

//Scale//

/////////

inline static void scale(const float &x, const float &y, const float &z) { glScalef(x, y, z); }

inline static void scale(const double &x, const double &y, const double &z) { glScaled(x, y, z); }

inline static void scale(const float &f) { gl::scale(f, f, f); }

inline static void scale(const double &d) { gl::scale(d, d, d); }

inline static void scale(const vec3<float> &v) { gl::scale(v.x, v.y, v.z); }

inline static void scale(const vec3<double> &v) { gl::scale(v.x, v.y, v.z); }

inline static void scale_x(const float &x) { gl::scale(x, 1, 1); }

inline static void scale_x(const double &x) { gl::scale(x, 1, 1); }

inline static void scale_y(const float &y) { gl::scale(1, y, 1); }

inline static void scale_y(const double &y) { gl::scale(1, y, 1); }

inline static void scale_z(const float &z) { gl::scale(1, 1, z); }

inline static void scale_z(const double &z) { gl::scale(1, 1, z); }

I'd actually already done this for many of my functions, and it's pretty nice because I have to worry less about types: floats vs. doubles vs. whatever. When I switch types elsewhere in my program I don't get a million warnings/errors about type conversions, because the right function is picked based on the types of the parameters passed in.

But this is still stupid, because I'm still doing too much work: the compiler can write half of those functions for me if I use templates. If you're confused, hopefully this final version of the code above will make things clear.

/////////

//Scale//

/////////

inline static void scale(const float &x, const float &y, const float &z) { glScalef(x, y, z); }

inline static void scale(const double &x, const double &y, const double &z) { glScaled(x, y, z); }

//Scale Uniformly

template<class T> inline static void scale(const T &t) { gl::scale(t, t, t); }

//Scale by vector

template<class T> inline static void scale(const vec3<T> &v) { gl::scale(v.x, v.y, v.z); }

//Scale along 1-axis

template<class T> inline static void scale_x(const T &x) { gl::scale(x, 1, 1); }

template<class T> inline static void scale_y(const T &y) { gl::scale(1, y, 1); }

template<class T> inline static void scale_z(const T &z) { gl::scale(1, 1, z); }

Not that this is anything exciting or revolutionary. I just can't believe I'd been doing things in such a stupid manner, what with all the far more bizarre things I've been using templates for.

Note: all the 1's and 0's above used to be 1.0's and 0.0's, but the compiler would get confused and throw a

error C2666: 'render::gl::translate' : 2 overloads have similar conversions

Omitting the decimals seems to be the simplest fix. I believe I remember reading something about this and the difference between the compiler up-converting vs. down-converting ints, floats, and doubles, but I'm not going to get into that here.