svn checkout http://box2d.googlecode.com/svn/tags/v2.3.1

You can view the changes here: https://code.google.com/p/box2d/source/list

The main change was the conversion of the Testbed to use GLFW and imgui instead of freeglut and GLUI. GLUI is no longer supported and I was having trouble with freeglut in newer versions of Visual Studio. If you work in Visual Studio 2013 or Xcode 5, you can open the project file and get started. On other platforms you will have to do more work to set things up. The big difference in running the Testbed is that the working path needs to be set up correctly so that the font can be loaded. In Visual Studio you can set the working path under the project settings. In Xcode you can set up a custom working directory in the Testbed scheme. The Testbed will crash if it fails to load the font file.

Since Google Projects no longer hosts files, I am hosting new GDC files on this site. Get them here.

Next year I’d like to open up the Physics Tutorial a bit. If you’d like to present, please contact me on Twitter @erin_catto.

void ComputeBasis(const Vec& a, Vec* b, Vec* c)
{
    // Suppose vector a has all equal components and is a unit vector: a = (s, s, s)
    // Then 3*s*s = 1, s = sqrt(1/3) = 0.57735. This means that at least one component
    // of a unit vector must be greater or equal to 0.57735.
    if (a.x >= 0.57735f)
        b->Set(a.y, -a.x, 0.0f);
    else
        b->Set(0.0f, a.z, -a.y);

    *b = Normalize(*b);
    *c = Cross(a, *b);
}

In SSE land you can eliminate the branch using a select operation.

Here is the triangle:

p1 = (0.331916571f, 0.382771939f, -0.465963870f)

p2 = (0.378853887f, -0.246981591f, -0.440359056f)

p3 = (0.331879079f, 0.382861078f, -0.465974003f)

I knew I was getting a bad normal because some of my convex hull code was failing. So I decided to analyze the problem. First of all, this triangle has edge lengths of around 0.6320, 0.6321, and 1e-4. So this is a sliver and it is well known that slivers can be numerically challenging.

I computed a baseline normal using doubles. Doubles yielded the following normal vector (truncated to 6 digits):

normal_doubles = (0.206384 -0.0243884 -0.978167).

I got the same result computing the normal in many different ways. So I am confident that this is accurate.

Then I computed the normal several different ways using single precision:

- 1 way using Newell’s method as stated in Christer’s book

- 3 ways using edge cross products

- 3 ways using Newell’s method on the triangle shifted to each vertex (suggested by Christer)

- 1 way using Newell’s method on the triangle shifted to the centroid

Here is the code:

#include <stdio.h>
#include <math.h>

typedef float Real; // change this to double to get a high accuracy result

inline float Sqrt(float s) { return sqrtf(s); }
inline double Sqrt(double s) { return sqrt(s); }

struct Vec
{
    Vec() {}
    Vec(Real x_, Real y_, Real z_) { x = x_; y = y_; z = z_; }
    Real x, y, z;
};

inline Vec operator-(const Vec& a, const Vec& b) { return Vec(a.x - b.x, a.y - b.y, a.z - b.z); }
inline Vec operator+(const Vec& a, const Vec& b) { return Vec(a.x + b.x, a.y + b.y, a.z + b.z); }

inline Vec Cross(const Vec& a, const Vec& b)
{
    return Vec(a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x);
}

inline Vec Newell(const Vec& a, const Vec& b)
{
    return Vec((a.y - b.y) * (a.z + b.z), (a.z - b.z) * (a.x + b.x), (a.x - b.x) * (a.y + b.y));
}

inline Vec Normalize(const Vec& a)
{
    Real d = Sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return Vec(a.x / d, a.y / d, a.z / d);
}

inline Real Length(const Vec& a) { return Sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

int main()
{
    Vec p1(0.331916571f, 0.382771939f, -0.465963870f);
    Vec p2(0.378853887f, -0.246981591f, -0.440359056f);
    Vec p3(0.331879079f, 0.382861078f, -0.465974003f);

    // Newell's method as stated in Christer's book
    Vec newell = Normalize(Newell(p1, p2) + Newell(p2, p3) + Newell(p3, p1));

    Vec e1 = p2 - p1;
    Vec e2 = p3 - p2;
    Vec e3 = p1 - p3;

    // Edge lengths (unused below; for inspection)
    Real d1 = Length(e1);
    Real d2 = Length(e2);
    Real d3 = Length(e3);

    // Edge cross products
    Vec n1 = Normalize(Cross(e3, e1));
    Vec n2 = Normalize(Cross(e1, e2));
    Vec n3 = Normalize(Cross(e2, e3));

    // Newell's method on the triangle shifted to each vertex
    Vec shift1 = Normalize(Newell(p1 - p1, p2 - p1) + Newell(p2 - p1, p3 - p1) + Newell(p3 - p1, p1 - p1));
    Vec shift2 = Normalize(Newell(p1 - p2, p2 - p2) + Newell(p2 - p2, p3 - p2) + Newell(p3 - p2, p1 - p2));
    Vec shift3 = Normalize(Newell(p1 - p3, p2 - p3) + Newell(p2 - p3, p3 - p3) + Newell(p3 - p3, p1 - p3));

    // Newell's method on the triangle shifted to the centroid
    Vec c = p1 + p2 + p3;
    c.x = c.x / 3.0f; c.y = c.y / 3.0f; c.z = c.z / 3.0f;
    Vec shiftc = Normalize(Newell(p1 - c, p2 - c) + Newell(p2 - c, p3 - c) + Newell(p3 - c, p1 - c));

    printf("%g %g %g\n", newell.x, newell.y, newell.z);
    printf("%g %g %g\n", n1.x, n1.y, n1.z);
    printf("%g %g %g\n", n2.x, n2.y, n2.z);
    printf("%g %g %g\n", n3.x, n3.y, n3.z);
    printf("%g %g %g\n", shift1.x, shift1.y, shift1.z);
    printf("%g %g %g\n", shift2.x, shift2.y, shift2.z);
    printf("%g %g %g\n", shift3.x, shift3.y, shift3.z);
    printf("%g %g %g\n", shiftc.x, shiftc.y, shiftc.z);
    return 0;
}

And here are the results:

newell: 0.205007 -0.0244907 -0.978454

n1:     0.206384 -0.0243884 -0.978167

n2:     0.206417 -0.0243837 -0.97816

n3:     0.206384 -0.0243884 -0.978167

shift1: 0.206377 -0.0243887 -0.978168

shift2: 0.206453 -0.024386 -0.978153

shift3: 0.206315 -0.0243899 -0.978182

shiftc: 0.206375 -0.0243882 -0.978169

Exact:

0.206384 -0.0243884 -0.978167

As you can see, Newell’s method is only giving 2 digits of accuracy. Every other method is giving close to 4 digits of accuracy or more. The best results are from n1 and n3. Both of these are cross products that include the shortest edge, e3. This agrees with Gino van den Bergen’s comments on Twitter.

Shifting the triangle origin to one of the vertices or to the centroid improves Newell’s method substantially. I’m a little disappointed that Newell’s method is still not as good as the best of the cross products. The price of generality?

Bottom line:

1. Shift a vertex to the origin before applying Newell’s method.

2. Cross products can produce triangle normals that are superior to the results of Newell’s method.

3. Switching to doubles can solve numerical problems, but it is better to first understand the underlying issue.

This is replacing the old framework that uses freeglut and GLUI. GLUI is no longer supported and freeglut has been having some problems with newer versions of Visual Studio. Maybe freeglut 3.0 will be better. GLFW is well maintained and very small. Hopefully it works well on OSX.

Anyhow, I’m excited by the results, so I decided to post a screenshot. I’m hoping to commit this in the next few days. I need to repair text rendering and get the Xcode project fixed up.

Here are the changes:

- Polygon creation now computes the convex hull. Vertices no longer need to be ordered.
- The convex hull code will merge vertices closer than b2_linearSlop. This may lead to failure on very small polygons.
- Added b2MotorJoint.
- Bug fixes.

Next year, Google Code will not allow new downloads. So I am now using SVN tags for releases. If you want to switch over to using tags now, then you can use this:

svn checkout http://box2d.googlecode.com/svn/tags/v2.3.0

I plan to switch to this method for the next release of Box2D. Hopefully Google will make it easier to download a tagged version.

You can get Glenn Fiedler’s presentation here: click

First comment out this line:

__m128=$BUILTIN(M128)

And add this line:

__m128=<m128_f32[0]>, <m128_f32[1]>, <m128_f32[2]>, <m128_f32[3]>

While the presentation is focused on ragdolls, there is some information that might be useful to many Box2D users, especially if you use joints.

- When a shape is **not** attached to a body, you can view its vertices as being expressed in world coordinates.
- When a shape is attached to a body, you can view its vertices as being expressed in local coordinates.