Published on April 11, 2014
AI Structure
In order for the AI to make decisions, it needs information about the world around it.  There are several different ways this information could be conveyed, but we chose a method that makes it easy to have one programmer dedicated to the AI while the other works on the world.
We created an abstract PlayerAI class that can be subclassed, filling in the gaps for each kind of AI.  This base class does all of the common dirty work of gathering up the information so the subclass just needs to react to it.
For example, whenever a player is added, the PlayerAI calls an empty function, passing that player as a parameter.  Any subclass can then implement this function in any way it sees fit, or not at all.  One AI type wants to abandon its current target if a new player is a better target, so it implements the playerCreated function to make that comparison.  Another AI type may be fixated on a target and not need to do anything when new players are added, so it leaves this function blank.
The PlayerAI base class gets its information by hooking into various events from the PlayerManager, GameManager, and Map.  Early on, we compiled a list of all of the events we thought would be useful and wired them into the architecture.  Even now, events are sometimes modified, added, or removed as the codebase evolves, but having a starting list of events allows the AI programmer to handle most situations.
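Here's a rough sketch of that pattern. PlayerAI and playerCreated come from the post, but the Player and PlayerManager types, the event, and the two subclasses are assumptions made purely for illustration:

```csharp
using System;

// Sketch only: minimal stand-in types so the example compiles.
public class Player
{
    public float ThreatLevel;   // assumed stand-in for "is this a better target?"
}

public class PlayerManager
{
    public event Action<Player> PlayerAdded;
    public void AddPlayer(Player p) => PlayerAdded?.Invoke(p);
}

public abstract class PlayerAI
{
    protected PlayerAI(PlayerManager players)
    {
        // The base class does the dirty work of hooking the events once...
        players.PlayerAdded += playerCreated;
    }

    // ...and forwards them to empty functions that subclasses may override or ignore.
    protected virtual void playerCreated(Player newPlayer) { }
}

// An AI that abandons its current target if a new player is a better one.
public class OpportunistAI : PlayerAI
{
    private Player currentTarget;

    public OpportunistAI(PlayerManager players) : base(players) { }

    protected override void playerCreated(Player newPlayer)
    {
        if (currentTarget == null || newPlayer.ThreatLevel > currentTarget.ThreatLevel)
            currentTarget = newPlayer;
    }
}

// An AI fixated on its target simply leaves playerCreated alone.
public class FixatedAI : PlayerAI
{
    public FixatedAI(PlayerManager players) : base(players) { }
}
```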
Path Finding
 
There are a handful of pathfinding algorithms out there, especially for 2D games.  Since the number of possible paths from A to B grows exponentially with map size, most of these are optimized to return the first direct route.  One of the things we needed, however, was the ability for AI units to evaluate all possible paths from one location to another rather than just one.
In order to facilitate this, our pathfinding looks at our map as a collection of rooms rather than tiles.  Rooms serve as nodes rather than the individual tiles, so the pathfinder only needs to evaluate about 25 nodes or so, compared to 480 possible tiles in a 30×16 map.
One issue we ran into in the first iteration of the pathfinding was that, despite this, if two rooms were far enough apart the number of possible paths was still in the thousands, which is far too many to reasonably evaluate.  What we noticed was that there were often more than two doors connecting two rooms, and each additional door doubled the number of paths.  So instead of giving the pathfinder a separate path through each doorway, a single path is created between rooms, and the room itself stores the various routes to the other rooms it's connected to.
For example, when asking for a path from Room A to Room F, the pathfinder may return this path { A, C, D, F } and this path { A, B, D, F }.  The AI can then evaluate each path based on what’s happening in the rooms along each path and pick one.  Once one is picked, it can then evaluate how to get from A to B, and only once it reaches B does it need to figure out the best way to get from B to D.  And at this point, it may decide to abandon its current task and pick a new route altogether.
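Here's a minimal sketch of that kind of exhaustive search, treating rooms as nodes in an adjacency map. The representation, names, and signatures below are invented for illustration, not our actual code:

```csharp
using System.Collections.Generic;

// Sketch only: enumerate every simple room-to-room path over a room graph.
public static class RoomPaths
{
    public static List<List<string>> FindAll(
        Dictionary<string, List<string>> rooms, string start, string goal)
    {
        var results = new List<List<string>>();
        Walk(rooms, start, goal, new List<string> { start }, results);
        return results;
    }

    private static void Walk(
        Dictionary<string, List<string>> rooms, string current, string goal,
        List<string> path, List<List<string>> results)
    {
        if (current == goal)
        {
            results.Add(new List<string>(path));   // record a complete path
            return;
        }
        foreach (var next in rooms[current])
        {
            if (path.Contains(next)) continue;     // never revisit a room
            path.Add(next);
            Walk(rooms, next, goal, path, results);
            path.RemoveAt(path.Count - 1);         // backtrack and try the other rooms
        }
    }
}
```

With an adjacency map where A connects to B and C, both connect to D, and D connects to F, FindAll(rooms, "A", "F") returns exactly the two example paths above.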
pathfinding
Here is an example of 4 of the 28 paths generated. The green line indicates the AI's path (while moving to the center of each room). The red squares indicate the rooms that need to be entered to complete the objective, while the yellow squares are rooms that are ignored.
Our old AI test video, if you missed it, shows the AI moving; now it also fires when another player (or AI) enters the room. We also have some classes set up that should be shown in our next update/video.
Main Menu
We’ve also spent the last few weeks working on our menu. Like most UIs, it feels like making another game entirely with the amount of time and work needed for it to function properly.
Our goal is to reduce the number of button presses needed to get into the game while still providing the player with many customization options, which makes this a great balancing act. We're also coming up with a variety of systems to build the menu modularly so we can change or add menu options later. Currently the menu isn't where we want it graphically, but its functionality is there, so we've not yet included it in our game loop.
mainmenu02
The menu is independent of the background, so it can be placed on top of any level. We may have a background specifically for the main menu or have it use the previous level's background. With the look of it still heavily in development, anything is possible!
 
Published on April 7, 2014

The AI finds navigational paths and weapon collectables, then begins moving while maintaining an equal distance from the wall colliders. Once it finds a weapon it fires all of its shots immediately (since posting this video, it only fires if someone is in the same room). It doesn't currently go for weapons that are necessarily closer, but that's to be worked on in future iterations.

 

We’ve also improved how our level generation works.
Before, it generated a basic tiled map and colliders, which you can see here:

lightbound-march05

Maps are now constructed on a grid using a subdivision algorithm with some custom rules to guide the process.  Subdivision guarantees that every room is accessible to every other room.  Throughout this process, each square is marked as either a wall, a door, or a room.  Once the map is done being generated, any square with a wall has a box collider dropped in its location.
Recently, I evolved this method with two goals in mind: 1) reduce the number of colliders to increase performance and eliminate seams, and 2) build thinner walls to make rooms feel bigger and reduce wasted floor space.

To do this, I create polygon colliders using a walk-around technique.  This is basically how the algorithm works (a rough code sketch follows the figure below):

  1. Pick a wall square.  As long as the square to the left is a wall, move left.   As long as the square below is a wall, move down.  Repeat until you have found a wall square that doesn’t have walls left of or below it.  This ensures you can safely put your starting point in the bottom left corner and start by walking up.
  2. Drop a point in the bottom left corner and start moving up.
  3. If the square to your left is a wall, drop a point in the bottom left corner of your current square and turn to the left; go to step 5.
  4. If the square in front of you is empty, drop a point in the top left corner of your current square and turn to the right; go to step 5.
  5. Move forward, repeating the process in steps 3 and 4 until you end up back where you started.  As you move along, flag the squares you’ve visited so that you don’t create more than one collider for each wall.  Remember that points and wall checks are done relative to the direction you’re facing.
walk_around
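Here's a rough sketch of how that walk can be implemented. The grid representation and helper names are my own assumptions, and I re-check steps 3 and 4 after a right turn instead of moving immediately, since a one-square-wide tip needs two right turns in a row:

```csharp
using System.Collections.Generic;

// Sketch: trace the outline of one connected patch of wall squares and
// return the corner points for a polygon collider.
public static class WallOutliner
{
    // Directions: 0 = up, 1 = right, 2 = down, 3 = left.
    static readonly int[] dx = { 0, 1, 0, -1 };
    static readonly int[] dy = { 1, 0, -1, 0 };

    public static List<(int x, int y)> Trace(bool[,] wall, int startX, int startY)
    {
        // Step 1: keep sliding left and down until there's no wall to the left or below.
        while (IsWall(wall, startX - 1, startY) || IsWall(wall, startX, startY - 1))
        {
            while (IsWall(wall, startX - 1, startY)) startX--;
            while (IsWall(wall, startX, startY - 1)) startY--;
        }

        // Step 2: drop a point in the bottom-left corner and start walking up.
        var points = new List<(int x, int y)> { (startX, startY) };
        int x = startX, y = startY, dir = 0;

        while (true)
        {
            int leftDir = (dir + 3) % 4;
            bool leftIsWall = IsWall(wall, x + dx[leftDir], y + dy[leftDir]);
            bool frontEmpty = !IsWall(wall, x + dx[dir], y + dy[dir]);

            if (leftIsWall)
            {
                // Step 3: drop the bottom-left corner (relative to facing),
                // turn left, then step onto that wall square.
                if (!Add(points, Corner(x, y, dir, front: false))) return points;
                dir = leftDir;
                x += dx[dir]; y += dy[dir];
            }
            else if (frontEmpty)
            {
                // Step 4: drop the top-left corner (relative to facing) and turn
                // right.  Re-check before moving so a one-square-wide tip can
                // turn right twice (my tweak to the written steps).
                if (!Add(points, Corner(x, y, dir, front: true))) return points;
                dir = (dir + 1) % 4;
            }
            else
            {
                // Step 5: wall keeps going, just walk forward.
                x += dx[dir]; y += dy[dir];
            }
            // A full version would also flag visited squares here so the same
            // wall patch never produces a second collider.
        }
    }

    // Stop the walk once we're about to drop the starting point again.
    static bool Add(List<(int x, int y)> points, (int x, int y) p)
    {
        if (p.Equals(points[0])) return false;
        points.Add(p);
        return true;
    }

    // Corner of square (x, y) relative to the facing direction:
    // front = true gives the forward-left corner, false the backward-left one.
    static (int x, int y) Corner(int x, int y, int dir, bool front)
    {
        int s = front ? 1 : -1;
        int fx = dx[dir], fy = dy[dir];   // forward unit vector
        int lx = -fy, ly = fx;            // left unit vector
        return (x + (1 + s * fx + lx) / 2, y + (1 + s * fy + ly) / 2);
    }

    static bool IsWall(bool[,] wall, int x, int y) =>
        x >= 0 && y >= 0 && x < wall.GetLength(0) && y < wall.GetLength(1) && wall[x, y];
}
```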

 

Wall thickness is applied when dropping the points of the collider.  Rather than dropping the points in the corner of each square, an offset is used based on the thickness of the desired wall.  For example, if you want walls to be 0.5m thick instead of 1m, the bottom left point would be at (-0.25, -0.25) instead of (-0.5, -0.5).

walk_around_wall_thickness
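A tiny sketch of that offset (the helper name and signature are my own): the corner is placed at half the wall thickness from the square's centre instead of the full 0.5.

```csharp
// Sketch only: offset a dropped corner point based on the desired wall thickness.
public static class WallPoints
{
    public static (float x, float y) Corner(float centreX, float centreY,
                                            int signX, int signY, float thickness)
    {
        float half = thickness * 0.5f;           // 0.25 for a 0.5 m wall
        return (centreX + signX * half, centreY + signY * half);
    }
}

// Example from the post: with a 0.5 m wall, the bottom-left point relative to the
// square's centre becomes Corner(0, 0, -1, -1, 0.5f) == (-0.25, -0.25)
// instead of (-0.5, -0.5).
```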

 

Thanks for reading!

You can follow us on Facebook or Twitter for more updates!

 
 
Published on January 27, 2014

We attended the Toronto Global Game Jam, which is part of the larger Global Game Jam: an event where everyone comes together to make a game in a single weekend. Michael and I got together and hammered out a game we call Light Bound in less than 36 hours! We also recorded a time-lapse that we'll be posting as soon as it's available, but until then we'll be recovering from the weekend!

title_42

Gameplay Video:

Screenshots: (Watch the video! Screenshots don't do it justice!)

light_bound_2014-01-26_15-14-39-35_0

light_bound_2014-01-26_15-14-03-92_0

light_bound_2014-01-26_15-13-55-32_0

 
 
 
Published on August 2, 2013

This article is going to focus on how to get your Unity game running as fast as possible on mobile devices, specifically iPhone, but you can carry the techniques over to Android as well. This is something I find a lot of people have issues with: their game runs at terrible frame rates and they don't understand why or what they can do about it! iPhone hardware isn't that beefy, which makes optimization much more important! The challenge is squeezing out visual fidelity without gameplay suffering.

Take a load off Culling… I got this! (Backface Culling)

Since this example is a 2.5D side-scroller, we have 3D models whose unseen polys we could let backface culling take care of, but why bother? Let's just not model what isn't seen and save texture resolution at the same time! This works for buildings, walls, or any background pieces where you never see the back. Most importantly, this saves a lot of that sweet never-have-enough texture space! I wouldn't really recommend this method in a scenario where you can walk around in 3D space and manipulate objects.

nobacks

Hold my calls! (Draw Calls)

This is one of the leading causes of terrible frame-rate-itis, aside from poly limits being exceeded. A draw call is issued to the GPU each time it needs to render a texture to the screen, and it costs CPU time as well depending on how often this happens. When making a game for your desktop you can get away with a lot. Not for mobile; this is where having a really low number of draw calls means everything! If you're curious where your game sits, use the Stats button in the top right corner of your Game window.

But James! How do I reduce my draw calls? They are much too high! I’m glad you asked!

That GUI could be to blame. If you're making your GUI with multiple textures, such as a 200×75 play button (1 draw call) with another 200×75 texture for an option button: WRONG! If you're making one texture for each prop: WRONG AGAIN!

You'll want to bunch everything into a texture atlas. There is no reason to have more than one texture for your GUI elements. Using a sprite manager is the best way to get your GUI working in 1 draw call; otherwise play, options, and quit buttons would already take at least 3 draw calls. You can lump props into 1024×1024 or 2048×2048 (if hardware permits) texture atlases, so several props can share 1 draw call. You can still have animated buttons on an atlas; you would just include all the frames and reference their x, y, width, height locations. You can even color these assets and create different effects for them.

drawcalls
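As a rough illustration (the sizes, names, and helper below are invented), referencing a frame on an atlas is just a matter of converting its pixel rectangle into normalized UVs:

```csharp
using UnityEngine;

// Sketch only: turn a sub-image's pixel rect on an atlas into the UV rect
// you'd assign to a quad.
public static class AtlasUV
{
    public static Rect PixelsToUV(Rect pixels, float atlasWidth, float atlasHeight)
    {
        return new Rect(pixels.x / atlasWidth,
                        pixels.y / atlasHeight,
                        pixels.width / atlasWidth,
                        pixels.height / atlasHeight);
    }
}

// e.g. a 200x75 play button sitting at (0, 0) on a 1024x1024 atlas:
// Rect uv = AtlasUV.PixelsToUV(new Rect(0, 0, 200, 75), 1024, 1024);
// Switching button frames is then just a matter of assigning a different uv rect.
```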

GUITexture/OnGUI has never been iPhone friendly, so if you can, I suggest building your own GUI from the ground up; otherwise you're going to have a bad time. There are many alternatives out there that offer better results. For example, we're using a second camera with a separate layer that only shows GUI elements (cubes) orthographically with the texture atlas material.
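A minimal sketch of that kind of setup (the layer name and values are placeholders, not our actual configuration):

```csharp
using UnityEngine;

// Sketch: an orthographic camera that only renders the GUI layer on top of
// the main camera's image. Remember to remove the GUI layer from the main
// camera's culling mask.
public class GuiCameraSetup : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        cam.orthographic = true;
        cam.cullingMask = 1 << LayerMask.NameToLayer("GUI"); // GUI-only layer
        cam.clearFlags = CameraClearFlags.Depth;             // keep the scene visible behind it
        cam.depth = 1f;                                       // render after the main camera
    }
}
```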

Avoid using multi-texture materials. This means spec, bump, normal maps, etc. You can include most of these effects in the diffuse maps, but face it…it’s mobile, people know what they’re getting into!

Compress them textures!

Let's make it easier for the hardware; it's been through enough! You'll want to compress all textures to PVRTC. To do this, simply go to the texture (not the material!) and in the Inspector set Texture Type to Advanced, then change Format (at the bottom) to RGBA Compressed PVRTC 4 bits (or use RGB if it doesn't require alpha). Compressed textures use significantly less memory bandwidth; here is a chart to show the difference.

Compression                        Memory consumption (bytes/pixel)
RGB Compressed PVRTC 2 bits        0.25
RGBA Compressed PVRTC 2 bits       0.25
RGB Compressed PVRTC 4 bits        0.5
RGBA Compressed PVRTC 4 bits       0.5
RGB 16-bit                         2
RGB 24-bit                         3
Alpha 8-bit                        1
RGBA 16-bit                        2
RGBA 32-bit                        4

So you can see that going from RGBA 32-bit to RGBA Compressed PVRTC 2 bits is a HUGE difference: 3.75 bytes per pixel, to be exact. For a 960×640 texture, RGBA 32-bit takes 2,457,600 bytes while RGBA Compressed PVRTC 2 bits takes 153,600 bytes, a difference of 2,304,000 bytes. That's essentially 16x less memory!

What about quality? Surely you have to compromise here! Yes and no… Here is an example of the compression methods:

compressioncompare

As you can see there is a slight quality loss but if we zoom out…

compressioncompare2
It's negligible since the screen is also small; even on an iPad you probably wouldn't notice.

From my experience, though, PVRTC is usually not the best choice for UI, and in some cases not even for regular textures. This is something you'll want to experiment with; where it works, it basically means faster load times on iOS devices.

Useful things to consider:

Destroy? Nah…Recycle!
Our programmer has informed me how terrible Unity's garbage collector is, and that you should never destroy objects at runtime because it takes a huge performance hit. He implemented a clever way to get around this: when something runs off screen, it disables all of its properties and is thrown into a queue by one of his many managers, where it waits to be reused as a new object and have its properties re-enabled. We use this for enemies; we never destroy or instantiate anything while the game is running. Instantiating happens at the start of the game while it loads, and the objects wait to be called.
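A minimal sketch of that kind of pool (the class and method names below are my own, not his actual managers):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: everything is instantiated up front, then disabled and queued
// instead of destroyed.
public class EnemyPool : MonoBehaviour
{
    public GameObject enemyPrefab;
    public int poolSize = 20;

    private readonly Queue<GameObject> pool = new Queue<GameObject>();

    void Awake()
    {
        // Pay the instantiation cost once, while the level loads.
        for (int i = 0; i < poolSize; i++)
        {
            GameObject enemy = (GameObject)Instantiate(enemyPrefab);
            enemy.SetActive(false);
            pool.Enqueue(enemy);
        }
    }

    public GameObject Spawn(Vector3 position)
    {
        if (pool.Count == 0) return null;          // or grow the pool, if you prefer
        GameObject enemy = pool.Dequeue();
        enemy.transform.position = position;
        enemy.SetActive(true);                     // "re-enables all the properties"
        return enemy;
    }

    public void Recycle(GameObject enemy)
    {
        enemy.SetActive(false);                    // runs off screen -> disable
        pool.Enqueue(enemy);                       // wait to be reused
    }
}
```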

Scroll that texture!
Making a platformer? Have one background texture? Scroll that baby! There's no need to swap two copies of the same asset back and forth to loop it. Have multiple backgrounds? Create a background texture atlas so you can parallax it!

scrolling
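For the single-texture case, here's a minimal sketch of the idea in Unity (the component name and speed are placeholders): set the texture's wrap mode to Repeat and nudge its UV offset each frame. For a parallax you'd run the same thing on several layers at different speeds.

```csharp
using UnityEngine;

// Sketch: scroll one background texture by moving its UV offset each frame.
public class ScrollingBackground : MonoBehaviour
{
    public float scrollSpeed = 0.1f;   // UVs per second; placeholder value

    private Renderer rend;

    void Start()
    {
        rend = GetComponent<Renderer>();
    }

    void Update()
    {
        float offset = Time.time * scrollSpeed;
        // Offsetting x scrolls horizontally; use y (or both) as needed.
        rend.material.mainTextureOffset = new Vector2(offset % 1f, 0f);
    }
}
```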

Polycounts!
This is hardware specific: the iPhone 3GS seems capable of around 10k-15k tris, which leads me to believe the iPhone 4 should handle around 20k-30k. Anything newer should be capable of much more than this and you can be more lax, but ideally you'll want to aim as low as possible if you're trying to reach older devices. You can go over these numbers, mind you, but it depends on a lot of factors and how well you've optimized everything else, such as draw calls and texture sizes. Lower is always recommended, and even lower is a plus! The less strain on the hardware, the better battery life you're going to offer the player, and nobody wants to play a game that drains their battery when they're on the go.

Alpha
Avoid alpha whenever possible, as it can be more expensive to calculate. This includes fog, particles, and textures. This isn't to say don't use them, but if you're having a performance crisis you may want to investigate these options first. Alpha in textures is hard to avoid, especially since most of the time it saves you on your polycount… just try to be clever.