I have moved the Box2D repository to GitHub: here.

This is the third site to host Box2D: SourceForge -> Google Code -> GitHub.

Thanks!

You can download the tutorials here: downloads.

Some of the talks have a Keynote version where you can see the embedded animations and videos. The tutorial was recorded on video and should be viewable on the GDC Vault in the next couple weeks.

I’m working on a 3D dynamic AABB tree based on the concepts of Presson’s Bullet contribution. I came across a Bullet forum thread in which Dirk recommended looking at your Box2D implementation. I noticed the major difference is that Box2D uses a balanced tree with a surface area heuristic, while Bullet’s tree is unbalanced and uses a Manhattan distance heuristic.

My guess is that keeping your tree balanced increased maintenance overhead but decreased tree query times. Was there a benchmark anywhere that measured whether the tradeoff was worth it? Or was there some other motivation for implementing it this way?

Here is my reply:

A dynamic tree must be actively balanced through rotations (like AVL trees), otherwise you can easily have the tree degenerate into a very long linked list.

For example, imagine a world with many boxes in a row. Say 100 boxes extending along the x-axis. If the boxes are added to the tree at the origin and then incrementally along the x-axis, you will end up not with a tree, but a linked list with 100 elements. You do not need to run a benchmark to know this will yield horrible query performance. Not only will the performance be bad, but you will also need a large stack to process queries (ray casts or overlap tests).

This may sound like a contrived example, but it actually happens a lot in games. Imagine a side-scrolling game: as the player moves through the world, objects are added in a linear fashion.

The Bullet code attempts to keep the tree balanced through random removals and insertions. In my tests, I found situations where this would take thousands of iterations to balance the tree reasonably. So I decided to implement tree rotations like you will find in AVL trees. It is actually a little bit simpler since there is no strict left-right sorting necessary.

svn checkout http://box2d.googlecode.com/svn/tags/v2.3.1

You can view the changes here: https://code.google.com/p/box2d/source/list

The main change was the conversion of the Testbed to use GLFW and imgui instead of freeglut and GLUI. GLUI was no longer supported and I was having trouble with freeglut in newer versions of Visual Studio. If you work in Visual Studio 2013 or Xcode 5, you can open the project file and get started. On other platforms you will have to do more work to set things up. The big difference in running the Testbed is that the working path needs to be set up correctly so that the font can be loaded. In Visual Studio you can set the working path under the project settings. In Xcode you can set up a custom working directory in the Testbed scheme. The Testbed will crash if it fails to load the font file.

Since Google Code no longer hosts downloads, I am hosting new GDC files on this site. Get them here.

Next year I’d like to open up the Physics Tutorial a bit. If you’d like to present, please contact me on Twitter @erin_catto.

void ComputeBasis(const Vec& a, Vec* b, Vec* c)
{
    // Suppose vector a has all equal components and is a unit vector:
    // a = (s, s, s). Then 3*s*s = 1, so s = sqrt(1/3) = 0.57735. This means
    // that at least one component of a unit vector must be greater than or
    // equal to 0.57735.
    if (Abs(a.x) >= 0.57735f)
        b->Set(a.y, -a.x, 0.0f);
    else
        b->Set(0.0f, a.z, -a.y);

    *b = Normalize(*b);
    *c = Cross(a, *b);
}

In SSE land you can eliminate the branch using a select operation.

Here is the triangle:

p1 = (0.331916571f, 0.382771939f, -0.465963870f)

p2 = (0.378853887f, -0.246981591f, -0.440359056f)

p3 = (0.331879079f, 0.382861078f, -0.465974003f)

I knew I was getting a bad normal because some of my convex hull code was failing. So I decided to analyze the problem. First of all, this triangle has edge lengths of around 0.6320, 0.6321, and 1e-4. So this is a sliver and it is well known that slivers can be numerically challenging.

I computed a baseline normal using doubles. Doubles yielded the following normal vector (truncated to 6 digits):

normal_doubles = (0.206384, -0.0243884, -0.978167).

I got the same result computing the normal in many different ways. So I am confident that this is accurate.

Then I computed the normal in several different ways using single precision:

– 1 way using Newell’s method as stated in Christer’s book

– 3 ways using edge cross products

– 3 ways using Newell’s method on the triangle shifted to each vertex (suggested by Christer)

– 1 way using Newell’s method on the triangle shifted to the centroid

Here is the code:

#include <cstdio>
#include <cmath>

typedef float Real; // change this to double to get a high accuracy result

inline float Sqrt(float s) { return sqrtf(s); }
inline double Sqrt(double s) { return sqrt(s); }

struct Vec
{
    Vec() {}
    Vec(Real x_, Real y_, Real z_) { x = x_; y = y_; z = z_; }
    Real x, y, z;
};

inline Vec operator-(const Vec& a, const Vec& b)
{
    return Vec(a.x - b.x, a.y - b.y, a.z - b.z);
}

inline Vec operator+(const Vec& a, const Vec& b)
{
    return Vec(a.x + b.x, a.y + b.y, a.z + b.z);
}

inline Vec Cross(const Vec& a, const Vec& b)
{
    return Vec(a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x);
}

inline Vec Newell(const Vec& a, const Vec& b)
{
    return Vec((a.y - b.y) * (a.z + b.z), (a.z - b.z) * (a.x + b.x), (a.x - b.x) * (a.y + b.y));
}

inline Vec Normalize(const Vec& a)
{
    Real d = Sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return Vec(a.x / d, a.y / d, a.z / d);
}

inline Real Length(const Vec& a)
{
    return Sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
}

int main()
{
    Vec p1(0.331916571f, 0.382771939f, -0.465963870f);
    Vec p2(0.378853887f, -0.246981591f, -0.440359056f);
    Vec p3(0.331879079f, 0.382861078f, -0.465974003f);

    Vec newell = Normalize(Newell(p1, p2) + Newell(p2, p3) + Newell(p3, p1));

    Vec e1 = p2 - p1;
    Vec e2 = p3 - p2;
    Vec e3 = p1 - p3;

    Real d1 = Length(e1);
    Real d2 = Length(e2);
    Real d3 = Length(e3);

    Vec n1 = Cross(e3, e1);
    Vec n2 = Cross(e1, e2);
    Vec n3 = Cross(e2, e3);

    Real s1 = Length(n1);
    Real s2 = Length(n2);
    Real s3 = Length(n3);

    n1 = Normalize(n1);
    n2 = Normalize(n2);
    n3 = Normalize(n3);

    Vec shift1 = Normalize(Newell(p1-p1, p2-p1) + Newell(p2-p1, p3-p1) + Newell(p3-p1, p1-p1));
    Vec shift2 = Normalize(Newell(p1-p2, p2-p2) + Newell(p2-p2, p3-p2) + Newell(p3-p2, p1-p2));
    Vec shift3 = Normalize(Newell(p1-p3, p2-p3) + Newell(p2-p3, p3-p3) + Newell(p3-p3, p1-p3));

    Vec c = p1 + p2 + p3;
    c.x = c.x / 3.0f;
    c.y = c.y / 3.0f;
    c.z = c.z / 3.0f;

    Vec shiftc = Normalize(Newell(p1-c, p2-c) + Newell(p2-c, p3-c) + Newell(p3-c, p1-c));

    printf("%g %g %g\n", newell.x, newell.y, newell.z);
    printf("%g %g %g\n", n1.x, n1.y, n1.z);
    printf("%g %g %g\n", n2.x, n2.y, n2.z);
    printf("%g %g %g\n", n3.x, n3.y, n3.z);
    printf("%g %g %g\n", shift1.x, shift1.y, shift1.z);
    printf("%g %g %g\n", shift2.x, shift2.y, shift2.z);
    printf("%g %g %g\n", shift3.x, shift3.y, shift3.z);
    printf("%g %g %g\n", shiftc.x, shiftc.y, shiftc.z);
}

And here are the results, labeled in print order:

newell: 0.205007 -0.0244907 -0.978454

n1: 0.206384 -0.0243884 -0.978167

n2: 0.206417 -0.0243837 -0.97816

n3: 0.206384 -0.0243884 -0.978167

shift1: 0.206377 -0.0243887 -0.978168

shift2: 0.206453 -0.024386 -0.978153

shift3: 0.206315 -0.0243899 -0.978182

shiftc: 0.206375 -0.0243882 -0.978169

Exact (doubles):

0.206384 -0.0243884 -0.978167

As you can see, plain Newell’s method is only giving 2 digits of accuracy. Every other method is giving close to 4 digits of accuracy or more. The best results come from n1 and n3. Both of these are cross products that include the shortest edge, e3. This agrees with Gino van den Bergen’s comments on Twitter.

Shifting the triangle origin to one of the vertices or to the centroid substantially improves Newell’s method. I’m a little disappointed that Newell’s method is still not as good as _any_ of the cross products. The price of generality?

Bottom line:

1. Shift a vertex to the origin before applying Newell’s method.

2. Cross products can produce triangle normals that are superior to the results of Newell’s method.

3. Switching to doubles can solve numerical problems, but it is better to first understand the underlying issue.

This replaces the old framework that used freeglut and GLUI. GLUI is no longer supported, and freeglut has been having problems with newer versions of Visual Studio. Maybe freeglut 3.0 will be better. GLFW is well maintained and very small. Hopefully it works well on OS X.

Anyhow, I’m excited by the results, so I decided to post a screenshot. I’m hoping to commit this in the next few days. I need to repair text rendering and get the Xcode project fixed up.

Here are the changes:

- Polygon creation now computes the convex hull. Vertices no longer need to be ordered.
- The convex hull code will merge vertices closer than b2_linearSlop. This may lead to failure for very small polygons.
- Added b2MotorJoint.
- Bug fixes.

Next year, Google Code will no longer allow new downloads. So I am now using SVN tags for releases. If you want to switch over to using tags now, you can use this:

svn checkout http://box2d.googlecode.com/svn/tags/v2.3.0

I plan to switch to this method for the next release of Box2D. Hopefully Google will make it easier to download a tagged version.
