Triangulation, meshes and the like

So instead of working on my GUI app, I’ve been messing around with OpenGL in Processing. It’s what drew me to that language in the first place, and it still does. I had already managed to turn an image (from a Kinect depth sensor, btw) into a point cloud, which wasn’t too hard, so I decided to look into surface meshes and the like.

The most common way to build a surface mesh is Delaunay triangulation, as known from FEM. It’s quite popular (in fact, my coworker and I had been talking about it on Monday in a very different context, which is why I remembered it). There are refinements, variations and other methods for creating a mesh, but I settled for this one.

I learned how to import libraries into Processing (umm… I’ll need to consider the absolute path, btw, because I want to set it up in Google Drive for multiple platforms…), because tomc and antiplastic wrote a nice implementation of Delaunay. The examples are in 2D, but it’s no hassle at all to translate them to 3D and OpenGL.

In fact, the only things I changed from my old program were these:

I moved the for loop that runs through the image into the setup() function, which is more suitable anyway. Then I added:

int z = (int)brightness(img.pixels[loc]); // use the pixel's brightness as depth
points.add(new PVector(x, y, z));
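For context, the whole loop in setup() ends up looking roughly like this (a sketch; `img` and `points` are my variable names, and I’m assuming the depth image has already been loaded):

```processing
// Sketch: build one PVector per depth pixel in setup().
// Assumes: PImage img (depth image), ArrayList points.
img.loadPixels();
for (int y = 0; y < img.height; y++) {
  for (int x = 0; x < img.width; x++) {
    int loc = x + y * img.width;                   // index into the pixel array
    int z = (int)brightness(img.pixels[loc]);      // brightness encodes depth
    points.add(new PVector(x, y, z));
  }
}
```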

I did the triangulation like in the example and removed the point drawing (ellipse() seems to be 2D-only; its third parameter is width, not z, so no points. Well, you could draw them in OpenGL, but why bother?). Then I added the z coordinate as a new parameter to vertex():

stroke(0, 40);   // faint black edges
fill(100, 255);  // opaque gray faces
beginShape(TRIANGLES);
for (int i = 0; i < triangles.size(); i++) {
  Triangle t = (Triangle)triangles.get(i);
  vertex(t.p1.x, t.p1.y, t.p1.z);
  vertex(t.p2.x, t.p2.y, t.p2.z);
  vertex(t.p3.x, t.p3.y, t.p3.z);
}
endShape();

Easy, right?

It is. I also added this piece of code (inspired by the OpenGL light examples):

void useLight() {
  spotLight(255, 255, 255,  // color (white)
            200, 200, 500,  // position
            0, -1, -10,     // direction
            PI, 1);         // angle, concentration
}

to get one of those nice OpenGL spotlights. I called it in draw().
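Since lights have to be set every frame, the draw() function ends up looking roughly like this (a sketch; useLight() is just what I called the function above, and the rendering part stands in for the TRIANGLES code):

```processing
void draw() {
  background(0);
  useLight();   // the spotLight() call from above, applied every frame
  // ... then triangulate and render the TRIANGLES shape as shown earlier ...
}
```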

Later, I used the fill() function and mapped it with some data from a color picture showing the same scene as the depth images. The result was quite pretty, but unfortunately it’s work data, so no post of that picture. Theoretically, I could even build template data from that one, but at this kind of resolution, I’m not aiming for hyperrealism anyway.
But: smoothing things out would be nice.
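A minimal sketch of that color mapping, assuming a color image `colorImg` that is pixel-aligned with the depth image (the variable name is hypothetical; here each triangle simply gets the color under its first vertex):

```processing
// Sketch: color each triangle from a pixel-aligned color image.
colorImg.loadPixels();
beginShape(TRIANGLES);
for (int i = 0; i < triangles.size(); i++) {
  Triangle t = (Triangle)triangles.get(i);
  // Look up the color at the triangle's first vertex.
  int loc = (int)t.p1.x + (int)t.p1.y * colorImg.width;
  fill(colorImg.pixels[loc]);
  vertex(t.p1.x, t.p1.y, t.p1.z);
  vertex(t.p2.x, t.p2.y, t.p2.z);
  vertex(t.p3.x, t.p3.y, t.p3.z);
}
endShape();
```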

As a final note, I will use sort of a point/line cloud with my application – without OpenGL and with regular Processing 3D code. It’s significantly faster, something you can measure with this snippet:

fill(255);
text(frameRate, 20, 20); // draw the current frame rate on screen

and you can use 100% of your data, which gives quite nice and realistic 3D models.
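Drawing the raw cloud without any triangulation is just a few lines (a sketch, assuming the same `points` list as above):

```processing
// Sketch: draw every depth sample as a single point in 3D.
stroke(255);
for (int i = 0; i < points.size(); i++) {
  PVector p = (PVector)points.get(i);
  point(p.x, p.y, p.z);
}
```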

During my research, I stumbled upon http://www.openprocessing.org. A lot of the snippets there aren’t very complex, and it’s a great way to find out about new functions and capabilities I’m not too aware of. Often, it’s a “Hey, how did he/she do that?” that leads to something new…
