Minimal OpenGL example

OpenGL is a powerful cross-platform standard for 3D visualisation. OpenGL libraries use a domain-specific language (shader language) to describe element-wise operations on vertices (vertex shader) and pixel values (fragment shader). More recent OpenGL versions also support geometry shaders and tessellation shaders (see the OpenGL article on Wikipedia).

The learning curve for OpenGL is quite steep at the beginning. The reason is that a program to draw a single triangle is almost as complex as a program drawing thousands of triangles. It is also important to add code for retrieving error messages in order to be able to debug problems during development.

I haven't found many minimal examples to help in understanding OpenGL, so I am posting one here. The example draws a triangle on the screen, coloured using a small 2x2 texture.

#include <math.h>
#include <stdio.h>
#include <GL/glew.h>
#include <GL/glut.h>

const char *vertexSource = "#version 130\n\
in mediump vec3 point;\n\
in mediump vec2 texcoord;\n\
out mediump vec2 UV;\n\
void main()\n\
{\n\
  gl_Position = vec4(point, 1);\n\
  UV = texcoord;\n\
}";

const char *fragmentSource = "#version 130\n\
in mediump vec2 UV;\n\
out mediump vec3 fragColor;\n\
uniform sampler2D tex;\n\
void main()\n\
{\n\
  fragColor = texture(tex, UV).rgb;\n\
}";

GLuint vao;
GLuint vbo;
GLuint idx;
GLuint tex;
GLuint program;
int width = 320;
int height = 240;

void onDisplay(void)
{
  glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
  glClear(GL_COLOR_BUFFER_BIT);
  glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, (void *)0);
  glutSwapBuffers();
}

void onResize(int w, int h)
{
  width = w; height = h;
  glViewport(0, 0, (GLsizei)w, (GLsizei)h);
}

void printError(const char *context)
{
  GLenum error = glGetError();
  if (error != GL_NO_ERROR)
    fprintf(stderr, "%s: %s\n", context, gluErrorString(error));
}

void printStatus(const char *step, GLuint context, GLuint status)
{
  GLint result = GL_FALSE;
  if (status == GL_COMPILE_STATUS)
    glGetShaderiv(context, status, &result);
  else
    glGetProgramiv(context, status, &result);
  if (result == GL_FALSE) {
    char buffer[1024];
    if (status == GL_COMPILE_STATUS)
      glGetShaderInfoLog(context, 1024, NULL, buffer);
    else
      glGetProgramInfoLog(context, 1024, NULL, buffer);
    if (buffer[0])
      fprintf(stderr, "%s: %s\n", step, buffer);
  }
}

void printCompileStatus(const char *step, GLuint context)
{
  printStatus(step, context, GL_COMPILE_STATUS);
}

void printLinkStatus(const char *step, GLuint context)
{
  printStatus(step, context, GL_LINK_STATUS);
}

GLfloat vertices[] = {
   0.5f,  0.5f,  0.0f, 1.0f, 1.0f,
  -0.5f,  0.5f,  0.0f, 0.0f, 1.0f,
  -0.5f, -0.5f,  0.0f, 0.0f, 0.0f
};

unsigned int indices[] = { 0, 1, 2 };

float pixels[] = {
  0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f,
  1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f
};

int main(int argc, char** argv)
{
  glutInit(&argc, argv);
  glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
  glutInitWindowSize(width, height);
  glutCreateWindow("Minimal OpenGL example");
  glutDisplayFunc(onDisplay);
  glutReshapeFunc(onResize);

  glewExperimental = GL_TRUE;
  glewInit();

  GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
  glShaderSource(vertexShader, 1, &vertexSource, NULL);
  glCompileShader(vertexShader);
  printCompileStatus("Vertex shader", vertexShader);

  GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
  glShaderSource(fragmentShader, 1, &fragmentSource, NULL);
  glCompileShader(fragmentShader);
  printCompileStatus("Fragment shader", fragmentShader);

  program = glCreateProgram();
  glAttachShader(program, vertexShader);
  glAttachShader(program, fragmentShader);
  glLinkProgram(program);
  printLinkStatus("Shader program", program);
  glUseProgram(program);

  glGenVertexArrays(1, &vao);
  glBindVertexArray(vao);

  glGenBuffers(1, &vbo);
  glBindBuffer(GL_ARRAY_BUFFER, vbo);
  glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

  glGenBuffers(1, &idx);
  glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, idx);
  glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

  glVertexAttribPointer(glGetAttribLocation(program, "point"), 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void *)0);
  glVertexAttribPointer(glGetAttribLocation(program, "texcoord"), 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void *)(3 * sizeof(float)));
  glEnableVertexAttribArray(glGetAttribLocation(program, "point"));
  glEnableVertexAttribArray(glGetAttribLocation(program, "texcoord"));

  glGenTextures(1, &tex);
  glBindTexture(GL_TEXTURE_2D, tex);
  glUniform1i(glGetUniformLocation(program, "tex"), 0);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 2, 2, 0, GL_BGR, GL_FLOAT, pixels);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  printError("Texture setup");

  glutMainLoop();

  glBindTexture(GL_TEXTURE_2D, 0);
  glDeleteTextures(1, &tex);

  glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
  glDeleteBuffers(1, &idx);

  glBindBuffer(GL_ARRAY_BUFFER, 0);
  glDeleteBuffers(1, &vbo);

  glBindVertexArray(0);
  glDeleteVertexArrays(1, &vao);

  glDetachShader(program, vertexShader);
  glDetachShader(program, fragmentShader);
  glDeleteProgram(program);
  glDeleteShader(vertexShader);
  glDeleteShader(fragmentShader);
  return 0;
}

The example uses the widely supported OpenGL version 3.0 (its shading language carries the version tag 130). You can download, compile, and run the example as follows:

gcc -o raw-opengl raw-opengl.c -lGL -lGLEW -lGLU -lglut


Any feedback, comments, and suggestions are welcome.


Steps towards a space simulator

I am quite interested in how simulators such as the Orbiter space simulator are implemented. A spacecraft can be seen as a rigid object with a moment of inertia tensor. Without any external torques acting on the object, its angular momentum does not change. In general the inertia tensor causes the direction of the rotation (angular velocity) vector to be different at each point in time even though the angular momentum is constant. This motion can be numerically simulated using a higher-order integration method such as 4th-order Runge-Kutta. Here is a video showing the resulting simulation of a cuboid tumbling in space:
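The torque-free rotation can be sketched with Euler's rigid-body equations integrated by a classical 4th-order Runge-Kutta step. This is a minimal sketch, not code from an actual simulator; the principal moments of inertia and the function names are illustrative choices.

```c
/* Torque-free rigid-body rotation in the body frame (principal axes),
 * integrated with classical 4th-order Runge-Kutta.
 * The inertia values below are illustrative. */
typedef struct { double x, y, z; } Vec3;

static const double I1 = 1.0, I2 = 2.0, I3 = 3.0;  /* principal moments */

/* dw/dt from Euler's equations: I1 w1' = (I2 - I3) w2 w3, etc. */
static Vec3 deriv(Vec3 w)
{
  Vec3 d;
  d.x = (I2 - I3) * w.y * w.z / I1;
  d.y = (I3 - I1) * w.z * w.x / I2;
  d.z = (I1 - I2) * w.x * w.y / I3;
  return d;
}

static Vec3 add_scaled(Vec3 a, Vec3 b, double s)
{
  Vec3 r = { a.x + s * b.x, a.y + s * b.y, a.z + s * b.z };
  return r;
}

/* one Runge-Kutta step of size h for the angular velocity */
Vec3 rk4_step(Vec3 w, double h)
{
  Vec3 k1 = deriv(w);
  Vec3 k2 = deriv(add_scaled(w, k1, h / 2.0));
  Vec3 k3 = deriv(add_scaled(w, k2, h / 2.0));
  Vec3 k4 = deriv(add_scaled(w, k3, h));
  Vec3 r = add_scaled(w, k1, h / 6.0);
  r = add_scaled(r, k2, h / 3.0);
  r = add_scaled(r, k3, h / 3.0);
  return add_scaled(r, k4, h / 6.0);
}
```

Starting from an angular velocity with components about all three axes and repeatedly calling rk4_step produces the tumbling motion: the rotation vector keeps changing direction while the magnitude of the angular momentum stays (numerically almost) constant.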

Brian Vincent Mirtich's thesis demonstrates how to simulate collisions of two convex polyhedra. Furthermore, micro-collisions are used as a simple but powerful method to simulate resting contacts. If the micro-collisions are sufficiently small, a resting object can be approximated with sufficient accuracy:

One still needs to implement friction (also shown in Mirtich's thesis), which requires a numerical integral to compute the friction occurring during a micro-collision. Collisions of polyhedra are demonstrated in Mirtich's thesis as well; however, it might be simpler to make use of the GJK algorithm. Planetary bodies, spacecraft, and other non-convex objects could be handled by dividing them into multiple convex objects. It would also be interesting to integrate soft-body physics as shown in Rigs of Rods. However, the accuracy of Rigs of Rods is not sufficiently high for space simulation. For example, an object tumbling in space would not preserve its angular momentum.
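The GJK algorithm needs only one primitive from each convex shape: a support function returning the extreme point in a given direction. A sketch, assuming the polyhedron is given as a plain vertex list (this layout is an assumption for illustration, not from Mirtich's code):

```c
/* Support function used by the GJK algorithm: for a convex polyhedron given
 * as an array of n vertices, return the index of the vertex furthest along
 * direction d (the vertex maximising the dot product with d). */
int support(const double (*verts)[3], int n, const double d[3])
{
  int best = 0;
  double best_dot = verts[0][0] * d[0] + verts[0][1] * d[1] + verts[0][2] * d[2];
  for (int i = 1; i < n; i++) {
    double dot = verts[i][0] * d[0] + verts[i][1] * d[1] + verts[i][2] * d[2];
    if (dot > best_dot) {
      best_dot = dot;
      best = i;
    }
  }
  return best;
}
```

GJK itself then iterates on the support function of the Minkowski difference of the two shapes to decide whether they intersect.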


In the following examples, dynamic Coulomb friction with the ground is simulated.
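The core of dynamic Coulomb friction is simple: a force of magnitude μN opposing the sliding direction. A sketch along a single tangential axis (the function name and the one-dimensional treatment are illustrative simplifications of the actual contact handling):

```c
/* Dynamic Coulomb friction along one tangential axis: the friction force
 * has magnitude mu * N and opposes the sliding velocity. */
double coulomb_friction(double mu, double normal_force, double v_tangential)
{
  if (v_tangential > 0.0)
    return -mu * normal_force;  /* sliding forward -> friction acts backward */
  if (v_tangential < 0.0)
    return mu * normal_force;   /* sliding backward -> friction acts forward */
  return 0.0;                   /* no sliding -> no dynamic friction */
}
```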

This website is now based on Jekyll

Hi, I decided to update the design of my homepage to a responsive HTML design. This makes the web page look nicer on mobile phones. I ended up using the Hyde theme which is based on Poole which in turn is based on Jekyll. I hope you will enjoy the new layout. Here is a demonstration of what Jekyll can do (also to remind myself). See Hyde example for more.

Heading 1

Heading 2

Heading 3

Heading 4

This shows the message style.

The text can be italic or bold.

Quoted text looks like this.

strike through text inserted text

Text with subscript and superscript

Horizontal line


Inline code looks like this.

def highlight
  # This is Ruby code
  some.source code

Here is a Gist from Github:


  • Bullet
  • list
    • of
    • things
  1. Enumerated
  2. List
    1. of
    2. things
Name      Upvotes  Downvotes
Totals         21         23
Alice          10         11
Bob             4          3
Charlie         7          9

Raspberry Pi Zumo robot

Pimoroni sells a Zumo chassis kit which is a low-cost tracked mobile platform. To build a robot, only the following additional components are required (see Github repository for more details):

  • micro metal gear motors
  • Raspberry Pi Zero W and Micro SD card
  • portable power bank
  • 4 AA rechargeable batteries
  • H bridge motor driver
  • Raspberry Pi camera module
  • wires

In his seminal publication Neural Network Vision for Robot Driving, Dean A. Pomerleau shows that one can train a neural network to drive using low-resolution camera images (also see video). In a similar fashion, the Zumo robot can first be controlled using an Xbox controller. Ten times a second, a video frame together with the current motor settings is recorded. The images are downsampled to 32x24 pixels. A neural network with two hidden layers of 20 units each is trained. The output of the neural network consists of two vectors (one for the left and one for the right motor) with 11 units each. The first unit corresponds to driving backwards at full speed. The last unit corresponds to driving forward at full speed. Initially the robot frequently loses track and a manual override is required. Each time the manual override is used, new training data is recorded. After several iterations the robot finally achieves the desired behaviour, and it can patrol the kitchen as shown in the video below.

Regularisation was used to reduce the variance. The bias of the network is quite high but this is probably due to conflicting training data (i.e. over time different drive speeds are used at the same position in the kitchen). Note that the experiment works best in controlled lighting conditions. Otherwise much more training data is required to cope with changes in illumination.

See also:

Sainsmart 6-dof robot arm

Sainsmart is selling a 6-dof robot arm. One can control the robot arm using an Arduino controller and a DFRobot I/O expansion shield. I mounted a Sunfounder Rollpaw gripper on the robot (note that the servo shaft is not compatible with the Sainsmart servos, and I had to replace the wrist servo of the Sainsmart robot). I developed some software to perform smooth and synchronised motion with the robot drives. The robot can be controlled using a serial terminal. Alternatively, one can use an Xbox controller. The software performing inverse kinematics is demonstrated in the following video.

See also: