Grid Collision Geometry

Triverse presents some interesting collision problems. On the one hand, having a shape defined in a grid makes partitioning, and thus queries, simpler to implement. On the other hand, both terrain and ships are potentially large grids, which can make for complex collisions. I want to go through some of the basic parts of the collision system that enable efficient intersection tests in the game.

Consider a binary image consisting of black and white pixels to represent an object. Here’s a blob/asteroid I generated:

The asteroid is a black region of pixels in a regular square grid. We want to query this grid space for collisions with objects, which is a fairly common scenario given the explosion of voxel-based games. If the colliding objects are close to the size of the pixels/voxels, it’s trivial to look up the surrounding cells. For larger objects, or for general proximity queries within a space such as a circle/sphere, we need to do a bit more work.
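
For the larger-object case, a minimal sketch (in plain C#, with illustrative names rather than anything from the actual game code) might look like the following: walk only the cells inside the circle's bounding box and keep the occupied ones whose nearest point lies within the radius.

using System;
using System.Collections.Generic;

// Illustrative sketch, not the game's code: query a boolean occupancy grid for
// all occupied cells a circle could touch. Coordinates are in cell units, so
// each cell spans [x, x+1] x [y, y+1].
public struct Cell
{
    public int X, Y;
    public Cell(int x, int y) { X = x; Y = y; }
}

public static class GridQueries
{
    public static List<Cell> OccupiedCellsInCircle(bool[,] occupied, float cx, float cy, float radius)
    {
        var result = new List<Cell>();

        // Walk only the cells covered by the circle's bounding box, clamped to the grid.
        int minX = Math.Max(0, (int)Math.Floor(cx - radius));
        int minY = Math.Max(0, (int)Math.Floor(cy - radius));
        int maxX = Math.Min(occupied.GetLength(0) - 1, (int)Math.Floor(cx + radius));
        int maxY = Math.Min(occupied.GetLength(1) - 1, (int)Math.Floor(cy + radius));

        for (int x = minX; x <= maxX; x++)
        for (int y = minY; y <= maxY; y++)
        {
            if (!occupied[x, y]) continue;

            // Closest point of the cell's square to the circle centre; reject if too far.
            float nx = Math.Max(x, Math.Min(cx, x + 1f));
            float ny = Math.Max(y, Math.Min(cy, y + 1f));
            float dx = cx - nx, dy = cy - ny;
            if (dx * dx + dy * dy <= radius * radius)
                result.Add(new Cell(x, y));
        }

        return result;
    }
}

For an object about the size of a cell, the bounding box collapses to the handful of cells around it, which is the trivial lookup mentioned above.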

The full problem I’m facing is colliding potentially large, changing grids with other objects, which may in fact be grids themselves. Each of the objects reacts dynamically according to its physical properties and the collision. The grids may be at arbitrary positions and orientations. As for scale, they can be 1k x 1k cells and change at a rate of 1k cells per second. For real-time scenarios, this is large enough that an efficient strategy is needed.

At the lowest level, we need to represent the underlying geometry of the object defined by occupied cells within the grid. By considering the occupancy of a given cell along with its neighbors, we can come up with a reasonably natural asteroid surface:
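
The surface construction itself isn’t shown here, but the core test can be sketched in a few lines: a cell contributes to the surface when it is occupied and at least one of its 4-connected neighbors is empty or off the grid. This is only an illustration of the neighbor test, not the game’s actual surface pass:

// Illustrative only: flags boundary cells. A real surface pass would also use
// the neighbour pattern to choose edge shapes, not just mark cells.
public static class SurfaceCells
{
    public static bool IsSurfaceCell(bool[,] occupied, int x, int y)
    {
        if (!occupied[x, y]) return false;

        int w = occupied.GetLength(0), h = occupied.GetLength(1);
        int[,] offsets = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };

        for (int i = 0; i < 4; i++)
        {
            int nx = x + offsets[i, 0], ny = y + offsets[i, 1];
            bool neighbourFilled = nx >= 0 && nx < w && ny >= 0 && ny < h && occupied[nx, ny];
            if (!neighbourFilled)
                return true; // at least one exposed side, so this cell is on the surface
        }
        return false;
    }
}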

Now we have to store this in a form that is efficient to modify and can be passed to a physics system. Here are some choices:

  • Build a single mesh of vertices and edges/triangles: Because we haven’t made any restrictions on shape, it could be concave, which would probably require the physics or collision system to decompose it into convex pieces.
  • Use simple convex meshes for parts of the object: These parts could be individual cells or clusters of cells. This way we take advantage of locality information obtained from the grid rather than forcing another system to decompose the object.
  • Use primitives such as spheres or rectangles for individual cells or cell clusters. Intersections are fast to calculate, but we may impose a greater burden on broad phase collision detection if many primitives are produced.

The first way is a general solution that could be implemented as marching squares/cubes or similar. However, with just one giant mesh, collision detection cannot take advantage of any specialized information provided by the grid. These problems could be alleviated by partitioning the grid and only generating meshes in regions of interest.

The second and third approaches are reasonable, but whether it’s more beneficial to use meshes or primitives probably depends on the specific geometry and the collision detection implementation. I’ve gone with the primitives approach, generated on the fly to drastically reduce the number of primitives pushed to the collision system. I’ll cover this in another post.
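
That post will cover the real scheme, but as a rough illustration of the idea (not the game’s implementation), one simple way to cut primitive counts is to merge horizontal runs of occupied cells within a region of interest into single rectangles, in cell units:

using System.Collections.Generic;

// Illustrative only: emits one rectangle per horizontal run of occupied cells
// inside a region of interest, instead of one box per cell.
public struct CellRect
{
    public int X, Y, Width, Height;
    public CellRect(int x, int y, int w, int h) { X = x; Y = y; Width = w; Height = h; }
}

public static class PrimitiveBuilder
{
    public static List<CellRect> BuildRects(bool[,] occupied, int minX, int minY, int maxX, int maxY)
    {
        var rects = new List<CellRect>();
        for (int y = minY; y <= maxY; y++)
        {
            int runStart = -1;
            for (int x = minX; x <= maxX + 1; x++)
            {
                bool filled = x <= maxX && occupied[x, y];
                if (filled && runStart < 0)
                {
                    runStart = x;                                   // run begins
                }
                else if (!filled && runStart >= 0)
                {
                    rects.Add(new CellRect(runStart, y, x - runStart, 1)); // flush run
                    runStart = -1;
                }
            }
        }
        return rects;
    }
}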

Triverse – Introduction

Triverse puts you in control of a fully constructible spacecraft, exploring a vast destructible 2D environment and obliterating your enemies to obtain parts for your growing fleet.

Here are some general goals:

  • Few parts: Each part should pull its own weight with behavior distinctive enough to offer varied gameplay and tradeoffs.
  • Unified parts/map: Everything should be destructible, whether it’s ships or terrain.
  • Minimal UI/controls: This is a tough one because I want to support casual players with only a mouse as well as hardcore players who want to play this as a shmup. I’ll continue to prototype to see what forms of gameplay are feasible under these limitations and evaluate how much common ground there is.
  • Avoid optimal/breaking builds: With so much flexibility, players will certainly try things I could never think of or intend. Hopefully balance is an achievable goal without watering down gameplay.

The gameplay is inspired by SubSpace. In particular, I’ve wanted to create a 2D space game incorporating a unified energy system and Newtonian physics. Constructing ships from parts comes from Master of Orion and the emergence of voxel-based games. Warning Forever and Captain Forever have had some influence on the visuals.

Triverse – Storing Parts

Triverse is a 2D space shooter with ships composed of primitive parts. Ships behave based on their composition, including physical effects of thrust, mass, and inertia. To that end, a central design question is how parts are obtained and stored. Here are some approaches I’ve considered:

  • Available parts floating in space: Players would drag and drop parts, or possibly select a type as a brush and “paint” regions of the ship with it. This solution is elegant in the sense that it requires no additional UI elements and maintains consistency in the notion that parts have physical properties. However, having many parts floating around space may present performance and usability problems at scale, and may be very challenging or infeasible to implement in multiplayer. This approach works well for Captain Forever. I tested floating parts both in the foreground (not shown), which would collide with other objects, and in the background (shown below), intended as a separate layer that doesn’t interfere with physics.
  • Omnipresent inventory: Players would obtain parts in a central inventory not bound to any particular ship. This approach requires UI support, but makes construction convenient and would not present scale or multiplayer problems. However, it raises the question of where the parts actually exist, which may or may not matter for a more arcade-style game where parts are akin to score. It’s a question of internal consistency: if parts exhibit physical properties when added to a ship, where are they? Certainly not stored “in” the ship, because then it would have greater mass. Perhaps we can explain it away with a dimensional hold of sorts (or Doraemonhold?), but then we have plenty of new questions to answer. This approach may also limit the gameplay potential that could arise from resource gathering activities, which are often a central theme of the space mining genre. Shown below is an inventory bar in a newer build, which would also need to indicate the quantity of each part type available.

So as usual, there’s no clear answer, and it really depends on what I want the game to be and what actually makes it fun. I’m leaning toward the second option and hoping it won’t limit gameplay or believability.

Unit Testing with Unity3D

First, a little background: Unity3D makes prototyping and tweaking easy, but it doesn’t offer much guidance with regard to unit testing. I typically try to practice test-driven development (TDD) for non-prototype code or where the output is easily defined in a test. Otherwise, I’ll gradually add tests as I stabilize the code and want to ensure that it doesn’t break as I refactor or add new features. I find NUnit+ReSharper to be a great combo.

I also use unit tests for utility purposes because of the convenience of selectively executing them. For example, I have a variety of procedural map generation tests that write images or other data to a test folder for manual inspection. I can also use the output as actual assets for my game. These tests are not strictly unit tests, and I often use the [Ignore] attribute so I can run them on demand rather than with my entire suite of tests. NUnit categories may also work for this. ReSharper is useful here because of its Visual Studio integration, making it much easier to run and debug tests (I think it’s better than the MSTest UI in VS 2010 as well).
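
As a sketch of what one of these utility tests can look like (the generation step below is a placeholder, not my actual generator), here is an ignored NUnit test that writes its output to a folder for inspection:

using System.IO;
using NUnit.Framework;

[TestFixture]
public class MapGenerationTests
{
    // Ignored so it doesn't run with the full suite; run it on demand from ReSharper.
    [Test]
    [Ignore("Utility test: run manually to regenerate map output.")]
    [Category("Utility")]
    public void GenerateAsteroidField_WritesOutputForInspection()
    {
        string outputDir = Path.Combine(Path.GetTempPath(), "MapTests");
        Directory.CreateDirectory(outputDir);

        // Placeholder generation step: a real test would call the actual
        // procedural generator and write an image or other asset here.
        var cells = new byte[64 * 64];
        string outputPath = Path.Combine(outputDir, "asteroid_field.raw");
        File.WriteAllBytes(outputPath, cells);

        Assert.IsTrue(File.Exists(outputPath));
    }
}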

All that said, Unity3D maintains a single Visual Studio solution and project for my project code, and of course it gets updated whenever files are added or removed. I prefer to maintain a second solution and set of projects that share common code that can be unit-tested. This arrangement also makes me less dependent on Unity3D in case I decide another platform is more suitable.

Here, I’ve got the csproj folder Simplex.Core, located within the Assets folder. In the root Simplex folder, I have a separate VS solution referencing this project:

I haven’t purchased Unity professional yet, so I don’t get certain features like text-based scene serialization and whatever else would help with version control. So I stick to keeping only my code and art/sound assets in version control, with a backup of metadata and scenes. The vast majority of changes occur in code rather than scenes/art/sound, so this arrangement works well enough for now.

To avoid dropping binaries into the Assets folder, I make sure to modify the project output paths (which doesn’t prevent VS from creating an empty obj folder):

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
    <ProductVersion>8.0.30703</ProductVersion>
    <SchemaVersion>2.0</SchemaVersion>
    <ProjectGuid>{C6A02F9D-764D-434F-A63C-418F2CAF03DF}</ProjectGuid>
    <OutputType>Library</OutputType>
    <AppDesignerFolder>Properties</AppDesignerFolder>
    <RootNamespace>Simplex.Core</RootNamespace>
    <AssemblyName>Simplex.Core</AssemblyName>
    <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
    <FileAlignment>512</FileAlignment>
    <BaseIntermediateOutputPath>..\..\obj\</BaseIntermediateOutputPath>
  </PropertyGroup>
  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
    <DebugSymbols>true</DebugSymbols>
    <DebugType>full</DebugType>
    <Optimize>false</Optimize>
    <OutputPath>..\..\bin\Debug\</OutputPath>
    <DefineConstants>DEBUG;TRACE</DefineConstants>
    <ErrorReport>prompt</ErrorReport>
    <WarningLevel>4</WarningLevel>
  </PropertyGroup>
  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
    <DebugType>pdbonly</DebugType>
    <Optimize>true</Optimize>
    <OutputPath>..\..\bin\Release\</OutputPath>
    <DefineConstants>TRACE</DefineConstants>
    <ErrorReport>prompt</ErrorReport>
    <WarningLevel>4</WarningLevel>
  </PropertyGroup>
  <!-- Compile items and the usual C# targets import are omitted here for brevity. -->
</Project>

Hexagonal Grid Visuals

Amorphi is a roguelike in which the player controls a not-unbloblike creature using simple rules of movement to evade and attack opponents on a hex grid. Creatures have no concept of hitpoints and effectively capture opponents much as pieces do in chess. Creatures gradually expand their range of motion as they advance through levels. The idea provides plenty of design challenges, which I hope to expand on in the future:

  • Visualizing available moves: How can we show what moves are available to both the player and opponents?
  • Differentiating creatures: Do color and shape distinguish creature types at a glance? What about texture?
  • Differentiating opponents: How can we distinguish friend from foe? Use of color might conflict with the first point.
  • Line-of-sight: With one strike to capture the player, is LOS appropriate? Would casual players understand it? Should we target casual players at all?

Early on, I wanted to answer questions about what the game would/could look like given my lack of artistic ability and the possibility of running the game on mobile platforms. If I’m limited in what aspects of the game I can visualize, the gameplay is limited as well for many potential players. In particular, I wanted to see whether line-of-sight made sense and whether it could coexist with other visual cues such as move highlighting and terrain with limited detail. The following screenshots show early coloring and lighting tests. Since then, the visuals have made great progress, but I want to document the path to where they are now. I considered a few ways to mark unlit regions:

  • Decreased lightness: Probably the obvious way, but we might need more than this.
  • Decreased saturation: I originally thought of this to preserve the tile type, which might be indicated by hue.
  • Different hue: A dark, bluish hue might give the indication of an unlit area, but it means we’re using up an available color.

Ultimately it comes down to having a few vague ideas like this, then using prototyping tools to adjust parameters on the fly to get enough feedback to answer these design questions. Unity3D has been great in this regard.
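
To make those knobs concrete, here’s a small sketch in plain C# (independent of any engine API, with made-up blend factors) that desaturates a tile color, pulls it toward a dark bluish tone, and darkens it:

// Rough sketch of the three options above as tweakable knobs. Colours are RGB
// in [0,1]; the factors and the bluish target tone are prototype parameters,
// not values from the game.
public struct Rgb
{
    public float R, G, B;
    public Rgb(float r, float g, float b) { R = r; G = g; B = b; }
}

public static class UnlitTint
{
    public static Rgb ApplyUnlit(Rgb c, float darken, float desaturate, float blueShift)
    {
        // Decreased saturation: blend toward the colour's own grey level.
        float grey = 0.299f * c.R + 0.587f * c.G + 0.114f * c.B;
        float r = c.R + (grey - c.R) * desaturate;
        float g = c.G + (grey - c.G) * desaturate;
        float b = c.B + (grey - c.B) * desaturate;

        // Different hue: pull slightly toward a dark bluish tone.
        r += (0.05f - r) * blueShift;
        g += (0.08f - g) * blueShift;
        b += (0.20f - b) * blueShift;

        // Decreased lightness: scale everything down.
        float keep = 1f - darken;
        return new Rgb(r * keep, g * keep, b * keep);
    }
}

Calling something like ApplyUnlit(tileColor, 0.5f, 0.6f, 0.3f) gives a dim, washed-out, slightly blue version of the lit tile, and sliding each factor independently is exactly the kind of on-the-fly parameter tweaking mentioned above.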

Amorphi is influenced by hexagonal chess and the clever roguelike ChessRogue.