One feature Unity is frequently praised for is its strength as a prototyping tool, a natural extension of its underlying component structure. The way Unity is structurally designed makes it easy to cook up interactions quickly, and to try out different ideas and approaches without devoting too much time to experimentation. Some studios will even use Unity for initial prototyping before porting their efforts over to another engine, simply because of the time it saves.
This is where my personal story comes in. Yesterday, Thimbleweed Park was released. I've always been a massive fan of point-and-click adventure games, and the release of a title like this brought back memories. A few years ago, I experimented with creating a point-and-click adventure game in Adventure Game Studio. In particular, I experimented with using a 3D model as a basis for generating animated sprites. Animation is one of the biggest time-sinks in game development, and 2D animation in particular can take a mind-numbing amount of time. I wanted to take a shortcut by creating and animating a 3D model, and then using that to generate a sprite sheet I could use in the engine (similar to how the sprites for Donkey Kong Country were created).
My experiment was a success. I was able to create sprites that had far more detail and far more frames of animation than what had previously been possible. Most importantly, I was able to automatically render the sprites from multiple camera angles, which saved a huge amount of time. However, there were still some drawbacks to the approach, and while the sprites were more detailed, I didn't necessarily like the visual style that they presented.
When Thimbleweed Park came out, I saw the pixel-based sprites of the game, and thought back to the experiment that I had conducted. And I thought to myself, "with the tools I have available to me now, I bet I could come up with an even better solution." So I fired up Unity and took a crack at it.
Before Unity version 5 was released, some of the more advanced features of the engine were locked out of the free version. With version 5, those restrictions were lifted. One of the previously unavailable features was render textures. Render textures are very useful, and are most often used for certain special effects, most notably mirrored surfaces and in-game security camera footage. They let you take the output of a virtual camera and stretch that rendered image across an object in the game. You are generally encouraged to use them sparingly: since they require additional rendering to be performed, they can become a serious drain on a game's performance if you throw them all over the place. But that drain is directly proportional to the complexity of the scene being rendered and the size of the texture being rendered to. Reduce either factor, and you reduce the cost of using render textures.
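In script form, the core mechanism is only a few lines. This is just a sketch of the idea: `securityCamera` and `monitorScreen` are hypothetical object references, not part of my actual setup.

```csharp
// Minimal sketch of the render-texture mechanism (object names are hypothetical).
// The camera draws into the texture instead of the screen, and any material
// can then sample that texture like an ordinary image.
RenderTexture rt = new RenderTexture(256, 256, 16);
securityCamera.targetTexture = rt;        // camera output now goes into rt
monitorScreen.material.mainTexture = rt;  // the monitor's surface displays rt
```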
With access to this new and powerful tool, it occurred to me that I may be able to replicate my earlier sprite-rendering experiment, but this time I could do it in-game, and with much greater efficiency, and even a few extra features. The challenge I decided to set for myself was to simulate the style of classic 2D point-and-click adventure games, but do so with the efficiency of modern 3D modeling and animation. I wanted to create a quick prototype that would prove I could construct game assets in 3D, but render them in such a way that they would both look and move like classic 2D point-and-click sprites.
I began by cobbling together a basic human-style shape out of Unity cube objects, scaled to roughly resemble a standard human model. Then I stuffed them all under an empty game object so I could treat them as a single unit. I created a new layer and assigned all of the cubes to it. Then I created a new camera, removed its audio listener component, and set the camera's culling mask to render only objects assigned to my newly created layer. I created a render texture, set its resolution to roughly encompass the rendered shape of my human model, and set it as the render target for the new camera. Finally, I created a standard Quad object and assigned the render texture to a material that I applied to the quad.
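I did all of this in the editor, but the same setup can be sketched in code. The script below is an illustrative version under a few assumptions: a layer named "SpriteModel" already exists in the project's layer settings, the character cubes have been assigned to it, and the camera and quad references are wired up in the inspector.

```csharp
using UnityEngine;

// Sketch of the setup described above, done in code instead of the editor.
// Assumes a project layer named "SpriteModel" containing the character cubes.
public class SpriteRenderSetup : MonoBehaviour
{
    public Camera renderCamera;  // the extra camera created for the character
    public Renderer spriteQuad;  // the Quad that will display the sprite

    void Start()
    {
        // Render only objects on the character's layer.
        int layer = LayerMask.NameToLayer("SpriteModel");
        renderCamera.cullingMask = 1 << layer;

        // A low resolution simulates the chunky pixels of classic sprites.
        RenderTexture rt = new RenderTexture(64, 128, 16);
        rt.filterMode = FilterMode.Point;  // keep pixels crisp, no smoothing

        renderCamera.targetTexture = rt;        // camera draws into the texture
        spriteQuad.material.mainTexture = rt;   // quad displays the result
    }
}
```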
This part of the experiment worked. The camera I created rendered the human-cubes to the render texture, which was in turn displayed on the 2D quad. I was rendering a 3D object as a 2D sprite, and I was able to do it in real time. Thanks to the dedicated layer, I could ensure that my render camera wouldn't acknowledge anything else in the scene, rendering only the 3D "character." By adjusting the settings of the render texture, I could simulate the lower-resolution, pixel-based style of older adventure titles.
But one problem still plagued me. If you moved the camera to a different angle, the render texture would update at the same rate the game was running at. This produced much smoother animation than what was possible in older games (outside of a ridiculous budget). I needed a way to ensure that the rendering feeding the render texture could be throttled back to a more modest rate. Everything I had done up to this point was done purely in the standard Unity visual interface, but Unity has no built-in tools for this kind of specialized functionality. It was time for me to crack open the code editor.
I created a basic Unity script and fired up Visual Studio. A quick trip to the Unity API documentation showed me everything I needed to know. To limit rendering for a render texture, all you have to do is disable the camera that is doing the rendering, and then call that camera's render function manually. In the script's Start function, I accessed the camera component, disabled it, and called its render function once. Then I defined a few variables for the script: a float for my desired frame rate, a float to keep track of elapsed time, and a boolean to determine whether rendering should occur at all. In the script's Update function, I added a conditional to check whether my rendering boolean was true, and then checked whether the current delta time added to the elapsed time was greater than 1 divided by the desired frame rate. If it was, the elapsed variable would be reset to zero and the camera's render function would be called.
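Put together, the script described above looks roughly like this. The class and field names are my own; the mechanism (disabling the camera and calling `Camera.Render()` manually) is the one from the Unity API documentation.

```csharp
using UnityEngine;

// Throttles a camera that feeds a render texture down to a fixed frame rate.
public class ThrottledRenderCamera : MonoBehaviour
{
    public float frameRate = 10f;      // desired sprite frames per second
    public bool renderEnabled = false; // whether throttled rendering should run

    private Camera renderCamera;
    private float elapsed;

    void Start()
    {
        renderCamera = GetComponent<Camera>();
        renderCamera.enabled = false; // stop automatic per-frame rendering
        renderCamera.Render();        // one snapshot to populate the texture
    }

    void Update()
    {
        if (!renderEnabled)
            return;

        // Render only when enough time has passed for the next sprite frame.
        elapsed += Time.deltaTime;
        if (elapsed > 1f / frameRate)
        {
            elapsed = 0f;
            renderCamera.Render();
        }
    }
}
```

Because `frameRate` and `renderEnabled` are public fields, both show up in the inspector and can be changed at run-time, which is what makes the dynamic frame-rate switching described below possible.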
When I dropped my newly created component script onto the render camera, it worked like a charm. When I tested the scene, the render camera would start off disabled, taking a single snapshot of the human-cubes just to populate the render texture with something. If I switched on the rendering flag in the script, the render camera would start rendering again, but only at the rate set in the script (which was now editable in the visual inspector). Not only could I render my sprites at lower frame rates, I could dynamically change those frame rates at run-time (just in case a particular effect or sequence called for it).
All of this took me less than two hours, and the coding involved was minimal; I barely had to dig into the more advanced parts of the API at all. This is one of the primary strengths of Unity: it lets you cook up and test functionality very quickly.