I ported Blobeler to the iPad:
Sorry for the crappy quality
Last week I wrote about the mesh generation algorithm in my game. This week I have focused on a 3d renderer that is compatible with OpenGL ES. I have also had more of a social life than is good for me if I want to meet my deadline for the first demo.

Creating my own renderer was a good call. As I have said, I am not going to need fancy subdivision data structures, and thanks to this decision I have learned a lot of stuff that is relevant to the core mechanic of the game. I feel like I don’t need to say this, but it has also been almost too much fun.
The design is probably going to change after I get feedback from the first demo, but one thing is likely set in stone: the core game mechanic is to generate atomic 3d objects out of finger gestures. This means that when the player touches the screen, the game code needs to infer a 3d position from a 2d screen coordinate.
This was a very fun problem to solve. You can’t recover a 3d point from a 2d one just by inverting the projection matrix (a screen point doesn’t determine a depth), and you should avoid inverting matrices anyway. The solution is a simple function that, given a point on the screen and a depth value, returns a point inside the view frustum at ‘depth’ distance from the near plane. When this point is projected to the screen, you get the original screen point back.

After that, converting to object space is a matter of doing every step that the shader does for the camera transformation, only backwards, and on the CPU.
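Here is a minimal sketch of the unprojection idea, assuming a symmetric frustum with the near plane at z = 1 (both properties come up below) and screen coordinates already normalized to [-1, 1]; the names are illustrative, not the game’s actual code:

```cpp
struct Vec3 { float x, y, z; };

// Pick a depth, then scale the normalized screen point by how wide and
// tall the frustum is at that depth. The frustum grows linearly with z,
// so no matrix inversion is needed; half_w and half_h are the frustum's
// half extents at the near plane (z = 1).
Vec3 unproject(float ndc_x, float ndc_y, float depth,
               float half_w, float half_h) {
    float z = 1.0f + depth;          // 'depth' units past the near plane
    return { ndc_x * half_w * z,
             ndc_y * half_h * z,
             z };
}
```

Projecting the returned point lands right back on (ndc_x, ndc_y), which is exactly the property the gesture code needs.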
I started with a standard FPS camera, but then realized that another type of camera is more appropriate for this kind of application. The current camera is defined by a “center point”, a “distance from that point”, and a rotation vector. The purpose is to keep the camera always focused on an object, which makes creating new objects easier. An advantage of this is that the distance from the center is a pretty decent value for the “frustum depth” when inferring 3d points from 2d gestures. This means that when the demo is running, new objects will appear very close to the currently focused object.
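A rough sketch of such an orbit camera, assuming the rotation is stored as yaw/pitch angles (that representation is my assumption, not necessarily what the game uses):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

struct OrbitCamera {
    Vec3  center;    // the currently focused object
    float distance;  // also a good frustum depth for 2d-to-3d inference
    float yaw, pitch;

    // The eye orbits the center at a fixed distance and always looks at it.
    Vec3 eye() const {
        return { center.x + distance * std::cos(pitch) * std::sin(yaw),
                 center.y + distance * std::sin(pitch),
                 center.z + distance * std::cos(pitch) * std::cos(yaw) };
    }
};
```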
All this object-space-to-screen-space-and-back code was easy to write because I had all the concepts fresh in my mind. If I hadn’t written the rendering code, I would have had to do a lot of research anyway. I also believe everything is going to end up being more efficient, because I am allowed to make a lot of assumptions that, say, OpenSceneGraph is not allowed to make. Things like knowing that the near plane is z=1 and that the frustum is symmetrical give you some small performance benefits. A more significant factor for performance is the complete control I have over every part of the path from user input to OpenGL calls. Nothing is computed that doesn’t need to be computed; that would not be true for a more general-purpose pipeline.
Another cool thing: the other day I was thinking “how the hell am I going to do the texture mapping after I have a 3d mesh?”. The answer was obvious: I already have it! I am not creating anything 3d until the last minute. Until then, what I have is a 2d triangulation and a per-vertex height map. The 2d triangulation is a pretty good UV mapping of the final mesh =)
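In other words, the pre-lift 2d vertex positions only need to be normalized to become texture coordinates. A minimal sketch (my own illustration, not the game’s code):

```cpp
#include <algorithm>
#include <vector>

struct Vec2 { float x, y; };

// Normalize each 2d vertex of the triangulation by the stroke's bounding
// box; the result is a per-vertex UV in [0, 1] x [0, 1] that survives the
// lift to 3d unchanged.
std::vector<Vec2> uvsFromTriangulation(const std::vector<Vec2>& verts) {
    Vec2 lo = verts[0], hi = verts[0];
    for (const Vec2& v : verts) {
        lo.x = std::min(lo.x, v.x); lo.y = std::min(lo.y, v.y);
        hi.x = std::max(hi.x, v.x); hi.y = std::max(hi.y, v.y);
    }
    std::vector<Vec2> uvs;
    for (const Vec2& v : verts)
        uvs.push_back({ (v.x - lo.x) / (hi.x - lo.x),
                        (v.y - lo.y) / (hi.y - lo.y) });
    return uvs;
}
```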
As for the actual texturing, I think I will just use Perlin Noise for now. Perlin Noise is very good for Wood materials and extremely good for Cloud materials. Dirt is going to be easy. A simple bump map (also Perlin Noise), a brown texture, and high specularity is going to make it look awesome.
I think basically everything in this game is going to have Perlin Noise…
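I haven’t settled on an implementation yet, but even a hash-based value noise summed over a few octaves (a common cheap stand-in for true Perlin noise; every constant below is invented) would be enough to prototype the cloudy and woody looks:

```cpp
#include <cmath>
#include <cstdint>

// Cheap deterministic noise: hash the lattice corners, smooth-interpolate
// between them, then sum a few octaves for a fractal, Perlin-like look.
static float hash2(int x, int y) {
    std::uint32_t h = static_cast<std::uint32_t>(x) * 374761393u
                    + static_cast<std::uint32_t>(y) * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (h & 0xffffff) / float(0x1000000);   // in [0, 1)
}

static float smoothNoise(float x, float y) {
    int xi = int(std::floor(x)), yi = int(std::floor(y));
    float tx = x - xi, ty = y - yi;
    tx = tx * tx * (3 - 2 * tx);                // smoothstep weights
    ty = ty * ty * (3 - 2 * ty);
    float a = hash2(xi, yi),     b = hash2(xi + 1, yi);
    float c = hash2(xi, yi + 1), d = hash2(xi + 1, yi + 1);
    return a + (b - a) * tx + (c - a) * ty + (a - b - c + d) * tx * ty;
}

// Fractal sum: each octave doubles the frequency and halves the amplitude.
float fbm(float x, float y, int octaves = 4) {
    float sum = 0.0f, amp = 0.5f, freq = 1.0f;
    for (int i = 0; i < octaves; ++i) {
        sum  += amp * smoothNoise(x * freq, y * freq);
        amp  *= 0.5f;
        freq *= 2.0f;
    }
    return sum;
}
```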
By next week I think I will have a pretty cool YouTube video full of mesh generation goodness. If not, I’ll just write about Emacs 24 or something.
I have been writing a program for a couple of months now. It started as something else but now it’s turning into a game.
First: Video time:
When I used to do 3D modeling and animation, I remember being fascinated when I stumbled upon Takeo Igarashi’s Teddy. Years later, after I decided I wanted to be a programmer, I attempted to write an implementation. It wasn’t that long ago, but I didn’t have the programming chops to pull it off and I failed miserably.
My second attempt started in November of 2011, about two months before my internship at Google (about which I intend to write a huge blog post once I know what I can say without burning any bridges, especially since I intend to compare my experiences at Microsoft and Google). This series of posts will be solely a devlog of the most fun project I have ever worked on.

I wanted to make a “Modeler for the Masses” based on the Teddy paper (freely available and excellent). I don’t even know if Takeo’s paper was the first, but I later found out that this idea has already been explored; it is called sketch modeling, and I assume it hasn’t been very successful. I haven’t looked into it further.

My idea is to bring automatic 3d mesh generation from 2d shapes to tablets and smartphones as a core game mechanic. I didn’t want to make a game, especially since I intend to sell this, but this is what I want to work on, so this is what I’m working on. It’s too damn fun.

The initial, very mutable, idea is a terraforming sandbox game. You start with nothing but a horizon (and maybe an ocean), and you create your world using “elements” like cloud, earth or wood. Think of Inception’s limbo: unconstructed dream space.
But before I speak of the fun bits, I want to say something about the money part.
First off, I chose the name for my “studio” (aka me): devolve_inc. It sounded cool. I bought a domain and everything. I even attempted to make a website, but I soon realized I was just stroking my ego…
I bought “The Lean Startup”, a really hip business book. I’m not a business guy, but I have to become one since this is a one-man operation (even though I tried my best to recruit friends). I liked the idea of extremely fast iteration, though even for such a short book I feel like the main ideas, like pivoting and learning, could be put in a smaller package.

At first I didn’t think this development method could be applied to games. Games, to me, were works of art made in seclusion by a team or individual and then released into a violent environment where they might not succeed. That is why I was afraid of working on a game for profit. Then I thought of Minecraft. Minecraft was developed in an open and continuous fashion; the user base was huge by the time v1.0 was available, and even now I think it is always changing. (I quit after I actually forgot to eat for a whole day while playing it and realized it would eventually kill me.) I think games without stories can use the methods in the book easily. Not as easily as a web app, but easily enough in an environment like the Android market, where apps update automatically. This is why I chose Android as my first target: there is no app review process, so I can launch an unfinished product and iterate with feedback.

I also chose C++ as the implementation language. I didn’t want to at first, but it really is the only sensible choice for this kind of project. Python + Kivy was another consideration, but speed was a concern: a couple of days ago I met a game programmer on Google+ who is rewriting his hobby game from Java to C++ because of performance problems, and I don’t want that to happen to me. C++ is also the language I know best, so it’s not so bad. I have a love-hate relationship with C++. Right now I’m tending toward “love”, so I won’t rant; eventually I will hate it again and dedicate a post to C++ hatred. Like any long-term relationship, there are problems and it takes work to make it work.
The fun stuff.
Mesh generation is almost done. It’s essentially finished, but it doesn’t look like it because it still uses an orthographic projection. I’m working on a 3D renderer right now. I debated with myself about using a middleware engine, but there were enough reasons against it that I chose to roll my own renderer:
*) Portability: I am targeting Android, iOS and development is being done on a linux box.
*) Requirements/Features: A third-party engine has to accommodate an FPS, a flight simulator, or anything else; the needs of my game are very basic. For instance, I don’t think I need fancy space subdivision data structures for the first version of the game. Also, the nature of the game calls for code that moves between eye coordinates, world coordinates and screen coordinates with ease. I feel more comfortable knowing that I have complete control.
*) Fun: It’s just fun. I’m on a self-imposed deadline but I am a hedonistic programmer.
OpenSceneGraph is still an option I haven’t ruled out, in case writing a custom scene manager and renderer proves too much of a burden.
I am using the Armadillo library for Matrix math but I have my own unnecessarily-optimized-with-SIMD-extensions classes for 2d and 3d vectors for the mesh generation code.
Mesh generation closely follows the Teddy paper, though the paper could be more detailed in the mesh creation section. For example, searching for “chordal axis” got me topology and biology results, and as far as I recall it wasn’t well defined in the paper.

The idea is to take a 2d stroke that is not self-intersecting, as an array of mostly-equidistant points, create a Delaunay triangulation, and then subdivide it using a skeleton that is very easily derived from the triangulation (that’s the chordal axis).
My implementation is a simple incremental Delaunay algorithm. In theory this algorithm is O(n^2), but in this case the guarantee that contiguous points are next to each other makes point location really, really efficient (thanks to my friend Jason Bandlow for realizing this; I was too dumb to see it). Thanks to practically-constant point location, the algorithm is O(n) in the average case. This is awesome. Without excessive optimization, my algorithm’s bottleneck is now the STL’s hash table implementation. I could optimize further, but I won’t until there is a real need.
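The point-location trick is the key. A hedged sketch of the idea (the triangle structure and the walk are illustrative, not my actual code): instead of searching the whole triangulation, each insertion starts from the triangle found for the previous point and walks toward the new one, so consecutive nearby points make each walk practically constant time.

```cpp
#include <array>

struct Vec2 { double x, y; };

// Positive when (a, b, c) turns counter-clockwise.
static double orient(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

struct Tri {
    std::array<Vec2, 3> v;    // vertices in CCW order
    std::array<Tri*, 3> nbr;  // nbr[i] shares edge (v[i], v[(i+1)%3])
};

// Straight-walk point location: step across whichever edge has p on its
// far side until no edge does, i.e. until the triangle contains p.
Tri* locate(Tri* hint, const Vec2& p) {
    Tri* t = hint;            // triangle of the previous insertion
    for (bool moved = true; moved; ) {
        moved = false;
        for (int i = 0; i < 3; ++i) {
            if (orient(t->v[i], t->v[(i + 1) % 3], p) < 0 && t->nbr[i]) {
                t = t->nbr[i];
                moved = true;
                break;
            }
        }
    }
    return t;
}
```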
The subdivision process consists of adding points along the inner edges of the triangulation. The z-values of the new points are calculated with an ellipse function that depends on the length of the edge. This means that wider parts of the shape become fatter and tight parts become thinner.
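Something like this, for instance; making both ellipse axes proportional to the edge length is my assumption, and the actual function may scale them differently:

```cpp
#include <cmath>

// A new point at parameter t in [0, 1] along an interior edge of length
// 'len' gets a z-value from an ellipse spanning the edge, so longer edges
// (the wide parts of the stroke) produce taller geometry.
float elevation(float t, float len) {
    float a = 0.5f * len;          // semi-axis along the edge
    float x = (t - 0.5f) * len;    // signed distance from the midpoint
    return a * std::sqrt(1.0f - (x * x) / (a * a));
}
```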
I didn’t follow Teddy’s paper to the letter: I am not subdividing manually. After computing the locations of the new points, the algorithm just re-triangulates with the original boundary points and the new interior points. This is all 2d, but the process keeps a height map that is used to create a 3d mesh really quickly after re-triangulating.
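The final lift could then be as small as this sketch, assuming one height per vertex with boundary vertices at height zero (the triangle indices would need to be rebuilt for the mirrored side as well):

```cpp
#include <vector>

struct Vec2f { float x, y; };
struct Vec3f { float x, y, z; };

// Lift the flat triangulation into a closed blob: every vertex gets its
// height as +z, and interior vertices get a mirrored -z copy.
std::vector<Vec3f> liftMesh(const std::vector<Vec2f>& verts,
                            const std::vector<float>& heights) {
    std::vector<Vec3f> out;
    for (std::size_t i = 0; i < verts.size(); ++i) {
        out.push_back({ verts[i].x, verts[i].y,  heights[i] });
        if (heights[i] > 0.0f)   // interior vertex: add the bottom copy
            out.push_back({ verts[i].x, verts[i].y, -heights[i] });
    }
    return out;
}
```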
The subdivision is not always clean. There are some inputs it likes and some that it really hates. Sometimes some triangles are too big, sometimes some are too small, and sometimes both. I know how to make it better; in fact, it’s in the design, it just hasn’t been implemented. Right now the subdivision is good enough that I believe the end user won’t notice when the triangulation isn’t beautiful or uniform. If the differences in triangle size cause rasterization artifacts, then I will go ahead and implement the mesh-refining part of the generation algorithm.
This is what I have. I have a long way to go but I have some very smart people that I might convince to help me =)
A couple of days ago I got 3 LIS3LV02DQ accelerometers from SparkFun Electronics.
Today I finished a working demo showing a cube that rotates as you rotate the device.
This accelerometer has some good documentation to learn from, and there is also a blog post somewhere with a specific example for the Arduino. That example didn’t work for me, but it did help to take a peek at it when I wasn’t sure about something. Reading the datasheet carefully is extremely important.
The Arduino has built-in SPI support, which helps a lot since the accelerometer uses SPI (or I²C) to communicate. However, hardware SPI is bound to 4 specific pins on the board, and since using the 3 accelerometers at the same time without any additional hardware means using 12 pins, I am going to have to write a software implementation, along the lines of the sketch below.
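A hedged sketch of what bit-banged SPI might look like; the pin numbers are made up, and the clock polarity/phase should be checked against the LIS3LV02DQ datasheet:

```cpp
// Hypothetical pin assignment; each accelerometer would get its own set.
const int PIN_CS = 2, PIN_SCK = 3, PIN_MOSI = 4, PIN_MISO = 5;

void setup() {
    pinMode(PIN_CS, OUTPUT);  digitalWrite(PIN_CS, HIGH);
    pinMode(PIN_SCK, OUTPUT);
    pinMode(PIN_MOSI, OUTPUT);
    pinMode(PIN_MISO, INPUT);
}

// Clock one byte out on MOSI while clocking one byte in from MISO.
byte softSpiTransfer(byte out) {
    byte in = 0;
    for (int i = 7; i >= 0; --i) {
        digitalWrite(PIN_SCK, LOW);
        digitalWrite(PIN_MOSI, (out >> i) & 1);   // MSB first
        digitalWrite(PIN_SCK, HIGH);
        in = (in << 1) | digitalRead(PIN_MISO);
    }
    return in;
}

byte readRegister(byte addr) {
    digitalWrite(PIN_CS, LOW);
    softSpiTransfer(0x80 | addr);   // MSB set marks a read request
    byte value = softSpiTransfer(0x00);
    digitalWrite(PIN_CS, HIGH);
    return value;
}

void loop() {
    // e.g. read the output registers from the datasheet's register map
}
```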
The interface to the Arduino was written in C++. It uses libSerial, which makes communicating with the Arduino almost as trivial as it is in Python.
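Roughly like this, assuming the classic SerialStream API and a hypothetical device path:

```cpp
#include <SerialStream.h>
#include <iostream>

int main() {
    LibSerial::SerialStream port;
    port.Open("/dev/ttyUSB0");
    port.SetBaudRate(LibSerial::SerialStreamBuf::BAUD_9600);

    char c;
    while (port.get(c))   // the stream behaves like an iostream
        std::cout << c;   // e.g. forward bytes to the cube demo
    return 0;
}
```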
Last night I didn’t sleep because I was at a LAN party, so I made a lot of mistakes this morning while working on my little game. I wrote some sprite code to show smoke, after creating a sprite with some photo editing in GIMP. Then I made a separate display list for the tunnel obstacles so they can cast shadows on the tunnel.
I think it is looking really cool now, so I uploaded a video to YouTube, here it is:
I am still going to put the video on pyCave’s website.
It’s only been a couple of hours since the last post… but this is cool:
The problem was that the Python Imaging Library needed integers for the color values. Now it’s using OpenGL (just to draw points), which has a color function that takes r, g, b float parameters.
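Something as small as this, using legacy immediate-mode OpenGL:

```cpp
#include <GL/gl.h>

// Plot one ray-traced pixel as a point; glColor3f takes floats in [0, 1]
// directly, so no integer conversion is needed anymore.
void drawPixel(int x, int y, float r, float g, float b) {
    glBegin(GL_POINTS);
    glColor3f(r, g, b);
    glVertex2i(x, y);
    glEnd();
}
```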
It’s starting to look nice! =) And adding reflections is going to be pretty trivial..
I forgot to mention that I managed to make it quite a bit faster by removing a lot of calls to the Python C API (i.e., defining C structures and using them instead of Python tuples).

Now it runs at 1.08× the runtime of the original C module. That means I traded 8% performance for making the code less than half the size and much easier to maintain… a decent trade-off.
I had to spend a couple of hours today rewriting some code after reading about Cython.
I rewrote the C module as a Cython file. It runs at about 1.33× the runtime of the C module, but I believe that’s mainly because I’m using tuples all over the place instead of C structs. My guess is that the slowdown will become very moderate after optimizing. The new module also got rid of a bug in the C module where objects would disappear when too far from the viewing plane.

Even if there isn’t a huge performance gain after optimizing, I think the penalty would be well worth it. The code is less than half the length and didn’t take long to write.
Here is an image:
Yesterday I had free time for the first time in weeks!! So I worked a lot on the ray tracer…
All data structures are Python objects; however, all the ray calculation is currently done by a module written in C.

Something that is really cool is that at the innermost loop of the ray calculation, I have a call to a Python function named “shader”. That call isn’t really that expensive, and it allows me to run Python code in the main loop!! Not only that, the objects (for now only spheres) actually hold a reference to the shader they want to use, so I can call basically any Python function by plugging it into one of the spheres =)

For now, there is only one loop. The shaders will be in charge of running new ray loops to calculate lighting, shadows, reflections, etc. For now they only fill the pixel with a fixed color.
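From the C side, the callback might look something like this sketch (the names and the argument convention are my assumptions; the real module surely differs):

```cpp
#include <Python.h>

// Call the sphere's Python "shader" with the hit point and unpack the
// (r, g, b) tuple of floats it returns. Returns -1 if the shader raised.
static int shade(PyObject* shader, double hx, double hy, double hz,
                 double rgb[3]) {
    PyObject* result = PyObject_CallFunction(shader, "ddd", hx, hy, hz);
    if (result == NULL)
        return -1;
    for (int i = 0; i < 3; ++i)
        rgb[i] = PyFloat_AsDouble(PyTuple_GetItem(result, i));  // borrowed ref
    Py_DECREF(result);
    return 0;
}
```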
Here is a 600×300 image that took only 0.079 seconds to draw. It is noteworthy that most of that time was spent drawing pixels with the Python Imaging Library; a faster drawing path would improve it a lot, and adding support for multiple processors will be easy.
I haven’t really had time to do anything significant, but I have written an extremely primitive ray tracer in python.
It may seem like a stupid idea to write such a computationally intensive program in Python, but that’s part of the point: to optimize everything and get some decent speeds. The plan is to use numpy arrays and some inlined C code (with scipy.weave) to optimize the bottlenecks, but to really know where the bottlenecks are I first have to write a working ray tracer. To do that I need free time, which is something I don’t have right now (says he who is writing in his journal), and the free time I have had, I have spent stupidly deliberating on language choice…
Before 23:59 tonight I have to turn in a little programming assignment: an image blur filter. It’s a pretty interesting assignment for a first programming course, and we have had projects like that all semester.
I am taking a seminar on mathematics for computer graphics, and for the final project I’ll be writing a curve (splines) and surface (NURBS) renderer from scratch. So I will be posting that here =)