Building a (Mini) 3D Flutter Game Engine - Part 1
I've been working on flutter_filament for some time now, a package that enables cross-platform 3D rendering in Flutter apps with the Filament Physically Based Rendering library.
I still haven't managed to write a blog post on the package itself, but if you're interested, in the meantime you can check out a presentation I gave at the Singapore Flutter meetup here and this GitHub issue for a high-level overview.
From Renderer To Game Engine
I'm not a game developer or designer (aside from a few toy projects in my university days), but I'd been itching to extend flutter_filament with a basic game engine for some time now.
When Google launched a Flutter game competition earlier in the year, it gave me a convenient excuse to set aside some time to do so.
That being said, no-one was sponsoring me to write a game, let alone a game engine. With bills to pay, I couldn't take too much time away from paying work, so I needed to be very judicious and implement only the absolute bare minimum to make a game (hence the "mini" game engine).
The initial game concept was intentionally basic: paddle a canoe down a river, fish rubbish out of the water with a net. Different objects would be worth different points, and the objective would be to collect as many points as possible before reaching the end of the river.
You've probably realized that this is not the game I ended up submitting for the competition. After I went through the engine work below, I started afresh with a different concept, which I'll cover in part 2.
The 2D UI overlay would be handled by Flutter, and I already had a renderer in flutter_filament, so I was able to render/transform 3D objects, add lights/skybox, and play animations.
What I needed to add, though, was:
1. the ability to attach an over-the-shoulder camera
2. collision detection, both to stop the canoe itself clipping through the river banks and to detect when an object was caught by the net
3. keyboard/mouse controls for moving the character and triggering the animations
(1) was very straightforward, simply because I could cheat and avoid the issue by exporting the canoe/character model from Blender with a camera node as a child. This was time away from paid work, so I had no shame in taking shortcuts wherever I could!
Collision detection
Collision detection wasn't going to be as trivial as that, but I was hoping I could get away with something simple like the following pseudo-code:
```cpp
bool collides(Entity entity1, Entity entity2) {
  // implement this
}

void calculateCollisions() {
  for (auto entity1 : scene) {
    for (auto entity2 : scene) {
      if (entity1 != entity2 && collides(entity1, entity2)) {
        collisionCallback(entity1, entity2);
      }
    }
  }
}

void renderLoop() {
  while (true) {
    calculateCollisions();
    render();
  }
}
```
This O(N^2) complexity in the hot path would obviously be terrible for a real game engine, but this concept only needed a few dozen renderable entities, so I didn't expect it to be a problem.
Implementing the actual collides(...) method didn't appear too difficult at first glance either. The Filament library (and the glTF format more generally) exposes axis-aligned bounding boxes for assets, so I thought I could get away with something as simple as:
```cpp
bool collides(Entity entity1, Entity entity2) {
  auto aabb1 = worldTransform(entity1, entity1.aabb);
  auto aabb2 = worldTransform(entity2, entity2.aabb);
  for (auto vertex : aabb1.vertices) {
    if (aabb2.contains(vertex)) {
      return true;
    }
  }
  return false;
}
```
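Incidentally, checking only whether one box's corners lie inside the other can miss overlaps where neither box contains any corner of the other (two boxes crossing like a plus sign, for example). The standard AABB test compares the extents on each axis instead. A minimal sketch, using a hypothetical `Aabb` struct rather than Filament's actual types:

```cpp
// Hypothetical minimal AABB type for illustration; Filament's own
// filament::Aabb/Box types have a different layout.
struct Vec3 { float x, y, z; };
struct Aabb { Vec3 min, max; };

// Two AABBs overlap iff their extents overlap on every axis.
bool aabbOverlap(const Aabb& a, const Aabb& b) {
  return a.min.x <= b.max.x && a.max.x >= b.min.x &&
         a.min.y <= b.max.y && a.max.y >= b.min.y &&
         a.min.z <= b.max.z && a.max.z >= b.min.z;
}
```

This version is symmetric and handles the crossing case, at the same cost as the per-vertex loop.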
However, there were a few problems with this simple approach.
One is that Filament uses the rest pose of the model to calculate the bounding box when it is imported. This isn't a problem for static (i.e. non-animated) models - like determining whether the canoe hit the river bed. But for a character model with a swipe animation, the bounding box remains fixed and doesn't account for the fact that the animated limb is now "outside" this box.
Another is that these are axis-aligned bounding boxes, so the extent along each axis changes depending on the rotation of the model; the AABBs can therefore intersect even though, visually, there's no collision.
The other problem is that the bounding box of the top-level entity is (obviously) larger than the bounding box of the actual object we want to test (the end of the net). We only want to award points when the net hits the floating object in the water, not (for example) when the canoe reverses into one.
In keeping with the spirit of "do as little work as possible", I avoided the issue again by attaching a number of small hidden cubes in Blender to the collidable parts (the riverbanks, the canoe and the front of the canoe where the scoop animation would intersect with the water).
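One way to wire this up is to tag the hidden collider meshes by name in Blender and test only those entities in the collision loop. A hypothetical sketch (the `collider_` naming convention and `findColliders` helper are my own illustration, not flutter_filament's API):

```cpp
#include <string>
#include <vector>

// Hypothetical scene entity: just a name plus whatever else the engine stores.
struct Entity { std::string name; };

// Collect only the entities whose names mark them as colliders, e.g. the
// hidden cubes exported from Blender with a "collider_" prefix (an assumed
// convention for illustration).
std::vector<Entity> findColliders(const std::vector<Entity>& scene) {
  std::vector<Entity> out;
  for (const auto& e : scene) {
    if (e.name.rfind("collider_", 0) == 0) { // name starts with the prefix
      out.push_back(e);
    }
  }
  return out;
}
```

Restricting the O(N^2) loop to this filtered list also keeps the number of pairwise tests small.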
Keyboard/mouse control
At this stage I was mostly working on the desktop (macOS) version, so I wanted to be able to use conventional FPS controls to move the character (WASD keys for forward/back/strafe, mouse movement for look, and the mouse button for the "swing net" action).
In a normal game engine, you'd expect to be able to collect/process user input inside the main loop:
```cpp
void main() {
  while (true) {
    processInput();
    calculateCollisions();
    waitForVsync();
    render();
  }
}
```
This doesn't quite fit the way that flutter_filament is structured, though, where the Flutter UI loop runs on the main thread and a separate render loop runs on a background thread.
There's no inherent reason why we couldn't process keyboard and mouse events in both loops. But with the Flutter framework providing tools to handle user input across all supported platforms, why reinvent the wheel?
I had already implemented basic manipulation via Flutter GestureDetector widgets for the main 'scene' camera, so it was relatively straightforward to extend this to manipulating the camera attached to the model.
To maintain consistent movement speed (and to stop the transform updates when a collision is detected), though, I needed to queue up the user input so it was only processed once per (render) frame.
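The queueing itself can be sketched as a thread-safe event buffer that the UI thread fills and the render loop drains exactly once per frame (hypothetical types and names, not the actual flutter_filament code):

```cpp
#include <mutex>
#include <vector>

// Hypothetical input event; in practice this would carry the Flutter
// gesture/keyboard data forwarded over the platform channel.
struct InputEvent { float dx, dy; bool swingNet; };

class InputQueue {
public:
  // Called from the Flutter/UI thread on each gesture or key callback.
  void push(const InputEvent& e) {
    std::lock_guard<std::mutex> lock(mutex_);
    pending_.push_back(e);
  }

  // Called once per frame from the render thread; returns and clears
  // everything queued since the last frame.
  std::vector<InputEvent> drain() {
    std::lock_guard<std::mutex> lock(mutex_);
    std::vector<InputEvent> out;
    out.swap(pending_);
    return out;
  }

private:
  std::mutex mutex_;
  std::vector<InputEvent> pending_;
};
```

Draining once per frame means movement speed is tied to the render frame rate rather than to however fast Flutter delivers pointer events, and a collision detected mid-frame can simply discard the remaining queued transforms.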
As a side note, I don't think there's any inherent reason why I couldn't restructure flutter_filament to run more like a conventional game engine loop:

```cpp
void main() {
  while (true) {
    processInput();
    calculateCollisions();
    waitForVsync();
    flutterEngine.tick();
    render();
  }
}
```
In this structure, the Flutter engine would render into an offscreen render target, and the game engine would then be responsible for compositing this on top of the scene view. This strikes me as functionally similar to the Flutter "add-to-app" scenario, so it's probably feasible - I just haven't had any compelling reason to do so yet.
In the next post, I'll go into some more detail on the engine work needed for the second iteration (GPU instancing, menu callbacks on entity mouseover).