Packaging Unity Games for the Mac App Store

The Mac App Store is slated to launch on January 6th, and Apple is already accepting application submissions.  The signing process for Xcode projects is fairly similar to Apple’s iPhone workflow.  But how do you sign an external application, one you don’t have the source code for?  Let’s take a look:

Things You Will Need

  • The latest version of Xcode
  • A Mac Dev Center account ($99/year)
  • Unity
  • Some technical savvy (this isn’t a complete handholding, just notable stuff)

Certificates

To submit an application to the Mac App Store, you need two certificates: one to sign the application and one to sign the installer package.  A web wizard will walk you through creating these certificates when you first access the Developer Certificate Utility.

Note that if you intend to submit applications from multiple computers, you should export the private key for each certificate.  Select the certificates and export them from Keychain Access (it will prompt for a password to protect the resulting .p12 file).

Application Changes

First, build from Unity.  You must select Intel-only as your architecture; this is a requirement of the Mac App Store.  Once you have your .app bundle, you need to make a few changes:

Icon

Make sure you use a custom icon for your application.  Unity should be able to do this for you now.  We set up custom icons in our build pipeline before Unity added support (via the makeicns utility, which turns a 512×512 PNG into a proper .icns file for us).

Info.plist

Every Mac application contains an Info.plist with things like version information, copyright text, and so on.  The Mac App Store requires a few changes and additions to the Info.plist after Unity builds your app.  You can edit these by hand: right-click your application and choose Show Package Contents, open Contents/Info.plist with Property List Editor, or simply edit it in any old text editor.

We use a PostprocessBuildPlayer script, written in PHP, that modifies our Info.plist here.  Here’s what we’re doing (and why)–note that the variables here are actual variables from our PHP script:

Our own icon file.  You probably won’t need to change this:

"CFBundleIconFile", "icon.icns"

Replaces the Unity version with our own; it looks like “Blush version 1.0.10.5123”.  The last version number is actually our current Asset Server revision number:

"CFBundleGetInfoString","$game version $versionExpanded"

Replaces the Unity number with ours:

"CFBundleShortVersionString", "$version"

Same replacement, different variable:

"CFBundleVersion", $version

Apple requires that you set the bundle identifier to match the app ID you set in the Mac Dev Center.  In our case, this is something like com.flashbangstudios.blush:

"CFBundleIdentifier", $identifier

This is just the tail end of our identifier, i.e. just “blush”.  You may encounter an error when uploading your app if the name includes spaces:

"CFBundleName", $shortIdentifier

New key.  Apple provides a list of genres–this plist entry must match the primary category type you set when uploading your application:

"LSApplicationCategoryType", "public.app-category.arcade-games"

Apple requires this key, which isn’t present by default in Unity builds.  Ours looks something like: “Blush v1.0.10.5123 (c) Flashbang Studios, LLC”:

"NSHumanReadableCopyright", "$game v$versionExpanded $copy"

Fixing Permissions

Unity builds have improper permissions set on the Contents/Data folder in the app bundle.  The installer packaging only looks at the everyone/other permission flag, which isn’t set.  This means the folder will be unreadable after packaging and subsequent installation, which causes a crash.

You can fix this manually by changing the Contents/Data folder to be readable by everyone, or use a command-line fix:

chmod -R a+xr "/path/to/Your Game.app"

Certificate Signing

Once you have modified your application, you’ll need to sign the .app bundle with the application certificate from the Mac Dev Center.  These tools are available in the “Application Utilities” download.

The command to sign an application looks like:

codesign -f -v -s "3rd Party Mac Developer Application: Flashbang Studios, LLC" "/path/to/Your Game.app"

Once your application is signed, you must build an installer package.  This looks like:

productbuild --component "/path/to/Your Game.app" "/Applications" --sign "3rd Party Mac Developer Installer: Flashbang Studios, LLC" "/path/to/output/game.pkg"

That’s it!  You upload only the resulting .pkg file (not the .app bundle itself).

Compliance

Apple has stringent technical requirements for applications in the Mac App Store.  It appears that Unity is compliant, with the sole exception that it writes PlayerPrefs data into ~/Library/Preferences/com.your.application.identifier.  This is a sane location, so hopefully Apple won’t reject based on this alone.

Also, we aren’t checking installation receipts–verifying Apple’s DRM–which appears to require intervention from Unity themselves (your application must check receipts and shut down before showing any UI, and any plugins/code you could write would execute after Unity has already displayed something).

Update: Two fixes are required for Unity games built with 2.6.x, as mentioned on this thread:

For Unity 2.6.x users:
a) Jonas already posted an updated Unity Player that addresses the “thread_stack_pcs” problem. You can download it here: http://files.unity3d.com/jonas/UnityPlayer.zip
b) We have also prepared a mono dynamic library update that addresses the “~/.wapi” problem. You can download it here: http://files.unity3d.com/mantas/libmono.0.dylib.zip

NOTE: The 2.6.x fixes are beta-quality code, and you need to carefully test your game for regressions.
Short update instructions, after you’ve made your final app build with the Unity Editor:
1) Extract UnityPlayer.zip. There will be a UnityPlayer binary. Copy it over <YourFinalAppName>.app/Contents/MacOS/<binary found there>
2) Extract libmono.0.dylib.zip. There will be a libmono.0.dylib binary. Copy it over <YourFinalAppName>.app/Contents/Frameworks/Mono.framework/libmono.0.dylib

There is a fixed build available for Unity 3.x, too, although it is currently in the beta cycle.  Contact Unity if you need to get a hold of it!

Good Luck!

Despite this article’s length, there really isn’t much to it:  Edit some plist keys, sign your application, build the installer package, and upload to the Mac App Store.  Best of luck to everyone submitting their games!


“High-Speed, Off-Screen Particles” in Unity

After running into fill-rate problems with dust and dirt particles in Raptor Safari, we decided to implement High-Speed, Off-Screen Particles, as outlined in NVIDIA’s GPU Gems 3. Without getting too technical, the article shows how to render particles into a smaller render target (a RenderTexture in Unity) and then blend the particles back into the screen. This works well for dust, dirt, and smoke-like particles because they have low-frequency textures. This low frequency masks the natural “blurring” that occurs from upscaling the smaller render target to the screen’s resolution.

Rendering Depth

The first thing we’ll need is a depth buffer. By rendering to a separate render texture, we can no longer take advantage of the GPU’s built-in z-testing, so having a depth buffer will allow us to do our own. Unity made it stupid easy to get a depth buffer of the scene with version 2.6.0. On the main camera, run this line somewhere in Awake:

this.camera.depthTextureMode = DepthTextureMode.Depth;

However, for our purposes the depth buffer given to us by Unity is a bit overkill. Unity will render everything that the main camera can see. We only have dust-like particles near the camera, so there’s no reason for Unity to render depth information at far distances. So instead of taking advantage of Unity’s one-liner solution, we render our own depth buffer with a different far-clip plane. This isn’t as straightforward as setting the camera’s far-clip plane and is slightly outside of this article’s scope; we’ll address it in a future article.
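
In the meantime, here is the rough shape as a minimal sketch; this is not our actual implementation, and the "Hidden/Render Depth" shader name and 50-unit far clip are placeholder assumptions:

// create a temporary depth target and grab a helper camera
var depthRT:RenderTexture = RenderTexture.GetTemporary(Screen.width, Screen.height, 24);
var depthCamera:Camera = PostProcessingHelper.GetPPCamera();

// mirror the main camera, but only as far out as the particles actually live
depthCamera.CopyFrom(this.camera);
depthCamera.farClipPlane = 50.0;
depthCamera.targetTexture = depthRT;

// render depth with a hypothetical depth-only replacement shader
depthCamera.RenderWithShader(Shader.Find("Hidden/Render Depth"), "RenderType");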

Rendering the Particles

Here comes the hard part. First we’ll outline the steps we need to take to render the particles. All of this happens in the camera’s OnRenderImage function.

  1. Create/Setup the render texture that we will render our particles into
  2. Create/Setup the camera that will render our particles
  3. Render the particles with a replacement shader
  4. Blend the particles back into the screen with a composite shader

First we’ll create and setup the render texture that will hold our particles.

// get the downsample factor
var downsampleFactor:int = this.offScreenParticlesOptions.downsampleFactor;
      
// create the off-screen particles texture
var particlesRT:RenderTexture = RenderTexture.GetTemporary(Screen.width / downsampleFactor, Screen.height / downsampleFactor, 0);

The downsampleFactor determines the quality of the off-screen particles. Higher numbers will give worse quality, but better performance.

Next, we’ll create and setup the camera.

var ppCamera:Camera = PostProcessingHelper.GetPPCamera();
ppCamera.depthTextureMode = DepthTextureMode.None;
ppCamera.cullingMask = this.offScreenParticlesOptions.layerMask.value;
ppCamera.targetTexture = particlesRT;
ppCamera.clearFlags = CameraClearFlags.SolidColor;
ppCamera.backgroundColor = Color.black;

And the PostProcessingHelper.GetPPCamera() function:

private static var ppCameraGO:GameObject;

static function GetPPCamera():Camera
{
   // Create the shader camera if it doesn’t exist yet
   if(!ppCameraGO) {
      ppCameraGO = new GameObject("Post Processing Camera", Camera);
      ppCameraGO.camera.enabled = false;
      ppCameraGO.hideFlags = HideFlags.HideAndDontSave;
   }

   return ppCameraGO.camera;
}

Notice how we are setting the layer that the particles are on. This is how the camera determines which renderers in the scene are the particles that we wish to render off-screen.

Next we render the actual particles. Telling the camera to render is easy enough.

Shader.SetGlobalVector("_CameraDepthTexture_Size", Vector4(this.camera.pixelWidth, this.camera.pixelHeight, 0.0, 0.0)); // some data about the depth buffer we need to send the shaders
ppCamera.RenderWithShader(Shader.Find("Hidden/Off-Screen Particles Replace"), "RenderType");

The replacement shader is a bit unwieldy, so here it is as a file. Don’t worry too much about what’s going on in the replacement shader. Just make sure to place this shader in a Resources folder.

Lastly, we blend the particles back into the scene.

var blendMaterial:Material = PostProcessingHelper.GetMaterial(Shader.Find("Hidden/Off-Screen Particles Composite"));
var texelOffset:Vector2 = Vector2.Scale(source.GetTexelOffset(), Vector2(source.width, source.height));
Graphics.BlitMultiTap(particlesRT, source, blendMaterial, texelOffset);

And the Composite shader (again, place this in a Resources folder):

Shader "Hidden/Off-Screen Particles Composite" {
Properties {
_MainTex ("Base (RGB)", RECT) = "white" {}
}
SubShader {
Pass {
ZTest Always Cull Off ZWrite Off Fog { Mode Off }
Blend One SrcAlpha
SetTexture[_MainTex] {combine texture}
}
}
Fallback Off
}

Don’t forget to release the particles render texture!

RenderTexture.ReleaseTemporary(particlesRT);

After you finish any other post-processing effects you may be doing, output to the destination RenderTexture:

Graphics.Blit(source, destination);

PRETTY PICTURES!

Click for bigger images. These all should be pixel perfect if you want to flip through or diff them.

Off-Screen Particles Disabled

Off-Screen Particles Enabled

Off-Screen Particles RGB

Off-Screen Particles Alpha

Notes

Separate Alpha Blend Function: As of Unity 2.6.1, there is no way to blend the alpha channel with a different blending function. So this replacement shader takes two passes over the particles: once to render the RGB channels and once to render the alpha channel. Apparently this will be fixed in a future release of Unity, and the separate pass will not be necessary; I’ll update the replacement shader if someone reminds me when that happens. Update: Looks like this can be accomplished with a single blending function if you premultiply the alpha into the RGB channels in the pixel shader. The new blending function for rendering the particles is One OneMinusSrcAlpha, and the blending function for blending the particle RT back into the screen buffer is also One OneMinusSrcAlpha. Your particle RT will also need to be cleared to (0,0,0,0) instead of (0,0,0,1).

Mixed Resolution: We decided that mixed resolution particles wasn’t necessary for us. The scene is too fast moving to notice the depth sampling artifacts. Plus the performance overhead from needing a second pass for the alpha channel made mixed resolution rendering just too expensive.

Soft Particles: Soft Particles are extremely easy to implement with the off-screen particles, but we didn’t see much of a difference in the final render. We decided to just use discard instead of soft particles in the end.

Anti-aliasing: This is untested with anti-aliasing on DirectX. I doubt it will work correctly as-is, but it shouldn’t be too difficult to get working.


Removing White Halos in Transparent Textures

Unity allows native importing of PSD textures, which is awesome for workflow and iteration speeds. However, transparent textures frequently exhibit white “halos” around objects. This issue stems from Photoshop itself–fully transparent pixels are usually white in color, which means texture sampling will pick up these white pixels when your texture is scaled down.

Here is an automated solution to this problem (you’ll probably want to watch full-size on Vimeo):

You’ll need Flaming Pear’s free Solidify plugin and this set of Photoshop Actions. Enjoy!


JPEG Encoding in Native Unity JavaScript

A few months ago someone posted to the Unity forums about how to encode a JPEG file without writing to disk.  It piqued my curiosity, so I poked at it.  It turned out that the required .NET libraries aren’t available in Unity (likely for size reasons, but maybe it’s a Mono thing). Bummer.

Native Encoding

The solution to this problem is to implement a JPEG encoder in native C# or JavaScript in Unity.  The other day I saw someone link to a browser JavaScript implementation of JPEG encoding, which was based off the as3corelib implementation from Flash.  Eureka!  It looked like an easy port, and an interesting exercise, so I went ahead and brought it over to Unity JavaScript.

Using the JPEG Encoder

Usage is quite simple:

// spin up the JPEG encoder
var encoder:JPGEncoder = new JPGEncoder(texture, 75.0);

// encoder is threaded; wait for it to finish
while(!encoder.isDone)
   yield;

// do something with bytes (like WWW upload or save to disk)
var bytes:byte[] = encoder.GetBytes();

The encoder is threaded, so the calling script needs to yield until it’s complete. Warning: We have never actually used threads in Unity; this might totally blow up in your face! If that’s the case it’d be easy to pseudo-thread it with a few yield statements (or just run it straight in circumstances where a frame hitch isn’t a big deal–capture the screen and hold on to the image bits until the player is at a good static GUI screen or something).

Possible Uses

Why would you want such a thing? Perhaps you want to allow the player to take screenshots from a standalone build, saved to disk. Or perhaps you want to upload images to your server (Unity has an EncodeToPNG function, but PNGs can be quite large for network transmission). Splume, one of our first Unity games, had a level editor that simply uploaded game screenshots as thumbnails.


We also uploaded screenshots when something big would happen, like a huge scoring combo.
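
For reference, a hedged sketch of what an upload coroutine might look like with Unity’s WWWForm; the URL and field names are placeholders, not our actual backend:

function UploadScreenshot(bytes:byte[])
{
   var form:WWWForm = new WWWForm();
   form.AddBinaryData("screenshot", bytes, "shot.jpg", "image/jpeg");

   var upload:WWW = new WWW("http://example.com/upload", form);
   yield upload;   // wait for the request to finish

   if(upload.error)
      Debug.Log("Upload failed: " + upload.error);
}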

Download the Encoder and Sample Project

Download a simple project that takes screenshots and outputs .JPG files. Just drop JPGEncoder.js into your own projects to use native JPEG encoding. Enjoy!


Unity Asset Server Browser

Unity’s Asset Server is a key piece of technology in our workflow.  It provides per-asset source control inside of Unity, which is absolutely required for larger projects with multiple team members.  Unity 2.5 introduced in-Unity browsing of the asset server–update history, asset history, etc–but we still use our own web-based browser today.  It’s a convenient way to quickly check an asset or update without launching Unity and opening your project. Searching is also much easier, as our browser has a page with all updates and commit descriptions.

Here’s our browser in action (or watch in HD):

We’re releasing it!  Download the source code or check out the README.txt file.  If you find this useful feel free to donate beer money:  paypal@blurst.com!


Getting Tricky with Triggers

Unity triggers are insanely useful.  If you aren’t using physics triggers–even if your game is seemingly non-physical–you’re missing out on some of the most powerful functionality in Unity.  Let’s take a look:

What’s a Trigger?

A trigger is a Collider that fires events as other Colliders enter and leave its collision volume, but without physically interacting with your scene. They are physically “invisible”.  If a Rigidbody “hits” a trigger, it passes right through, but the trigger produces OnTriggerEnter calls when the object enters, OnTriggerStay calls for as long as the object stays inside, and OnTriggerExit calls when the object leaves.

Take a look at the Unity collider documentation for a more verbose explanation, as well as the collision chart for when collision events are called.

Where to Use Triggers?

We use triggers as:

Static Proximity Triggers

The most obvious trigger example is to use them as, well, actual proximity triggers. Some piece of game logic runs when the player, or another object, reaches a point in space.  You could place a trigger in front of a door which causes the door to open as the player approaches.  You can easily test the entered Collider for some layer, tag, or Component to see if the trigger should execute.  This is as simple as:

function OnTriggerEnter(other:Collider)
{
   // only process if we’re the player
   if(!other.CompareTag("Player"))
      return;
   
   // does the player have the blue key?  if not, boo
   if(!Player.HasKey("blue"))
      return;
      
   // actually open the door!
}

Radius Triggers

The first trick with triggers is to realize they can move. You can set a trigger as a child of an object–even another physical object like a Rigidbody–and then use the trigger to take action as other objects approach or exit a radius around your object of interest.

Take the use case of spawn points, for instance; your goal is to trigger an enemy spawn as a spawn point nears your player. You could place a script on your player that iterates all possible spawn points and spawns something if it’s near enough:

function FixedUpdate()
{
   for(var spawn:SpawnPoint in allSpawns)
      if(Vector3.Distance(transform.position, spawn.transform.position) < spawnRadius)
         spawn.DoSpawn();
}

The #1 reason for using triggers, over this “manual” method, is that triggers will be faster. Much faster! There are ways to make this example better–check the sqrMagnitude of the distance to avoid a square root, cache your Transform, etc–but the crux of the problem remains: You have to run this code every Update, FixedUpdate, or at least with enough frequency to respond interactively.
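
For completeness, the square-root-free version of that distance check looks something like this; it’s cheaper, but the polling cost remains:

if((transform.position - spawn.transform.position).sqrMagnitude < spawnRadius * spawnRadius)
   spawn.DoSpawn();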

However, the physics engine is already checking for trigger collisions. It’s doing the work for you! Let it do its super-fast optimized thing; believe me, you can’t compete with PhysX on an optimization level. Instead of polling for spawn points in our radius ourselves, let the physics engine send events only when something enters our player radius trigger. This becomes:

function OnTriggerEnter(other:Collider)
{
   var spawn:SpawnPoint = other.GetComponent(SpawnPoint);
   
   if(spawn)
      spawn.DoSpawn();
}

Create a gigantic trigger as a child of your player object, place this script on it, and voila! To prevent collision with your physical gameplay objects, your spawn colliders should also be triggers (yes, triggers send OnTrigger* messages when other triggers enter and exit).

Warning: Trigger colliders do respond to raycasts. You should make sure your triggers are set to the Ignore Raycast layer if your game logic uses raycasts (or are otherwise ignored by your raycast LayerMasks).

What’s In This Space?

Another great use of triggers is to answer a question for your game logic: What’s in this space? Perhaps you want your spawn points to fire only if they’re empty, or you want to see if two objects are near enough to count for something.

Here’s a concrete example: One of our games, Crane Wars, is a stacking game. In order to determine if different building pieces are stacked on top of one another, and should count as one building, we use triggers:

The slim box outlines on the top of the building pieces are the triggers. White indicates that nothing is in the trigger, and green–the bottom two floors of the stack–indicates that another building piece is in the trigger and is considered the “upstairs” piece in the sequence. The light blue box is a spawn point; it will spawn another piece as soon as it is empty.

In order to facilitate this game logic, we have generic scripts that track objects as they enter and leave a trigger. Unfortunately, there is no function available like collider.GetCollidersInsideTrigger(). Fear not, though, it’s fairly easy logic. The following script will track all objects that enter, and store them in a list based on their layer:

#pragma strict
@script AddComponentMenu("Library/Physics/Trigger Track By Layer")

/**
* Keep a list of all objects inside this trigger, available by layer
*
* Layer -1 is a list of all objects
*/

// should we track triggers too?
var trackOtherTriggers:boolean = false;

// objects we aren’t tracking
var ignoreColliders:Collider[];

// hashtable of arraylists for our objects
private var objects:Hashtable = new Hashtable();

/**
* Initialize internals
*/

function Awake()
{
   // -1 is all layers
   objects[-1] = new ArrayList();
}

/**
* Return an arraylist for a layer–if no layer, just make an empty one
*/

function GetObjects(layer:int):ArrayList
{
   var list:ArrayList = objects[layer];
   
   // scrub any nulls when scripts request a list
   if(list)
   {
      Helpers.ClearArrayListNulls(list);
      return list;
   }
   else
      return new ArrayList();
}

/**
* Record when objects enter
*/

function OnTriggerEnter(other:Collider)
{
   if(!trackOtherTriggers && other.isTrigger)
      return
   
   // is this collider in our ignore list?
   for(var testIgnore:Collider in ignoreColliders)
      if(testIgnore == other)
         return;
   
   var go:GameObject = other.gameObject;
   var layer:int = go.layer;
   
   // create our list if none exists
   if(!objects[layer])
      objects[layer] = new ArrayList();
      
   var list:ArrayList = objects[layer];
   
   // add if not present
   if(!list.Contains(go))
      list.Add(go);
      
   // add to all
   list = objects[-1];
   if(!list.Contains(go))
      list.Add(go);
}

/**
* Update when objects leave
*/

function OnTriggerExit(other:Collider)
{
   if(!trackOtherTriggers && other.isTrigger)
      return
   
   // is this collider in our ignore list?
   for(var testIgnore:Collider in ignoreColliders)
      if(testIgnore == other)
         return
   
   var go:GameObject = other.gameObject;
   var layer:int = go.layer;
   
   
   // remove from all
   var list:ArrayList = objects[-1];
   list.Remove(go);
   
   // Remove from layer’s list if it’s present
   if(objects[layer])
   {
      list = objects[layer];
      list.Remove(go);
   }
}

/**
* Spew our list as an overlay in debug mode
*/

function OnGUI()
{
   if(!Global.use.debug)
      return;
   
   var debug:String = "";
   
   for(var de:DictionaryEntry in objects)
   {
      var list:ArrayList = de.Value;
      var layer:int = de.Key;
      
      if(layer == -1)
         continue;
      
      debug += String.Format("{0} : {1}\n", LayerMask.LayerToName(layer), list.Count);
   }
   
   var screen:Vector3 = Camera.main.WorldToScreenPoint(transform.position);
   GUI.contentColor = Color.red;
   GUI.Label(new Rect(screen.x, Screen.height - screen.y, 256, 128), debug);
}

An object destroyed inside of a trigger will not send OnTriggerExit events, so we must scrub our list of any null values before returning it.
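
Helpers.ClearArrayListNulls isn’t shown above; a minimal sketch of what such a helper might look like (our actual version may differ):

static function ClearArrayListNulls(list:ArrayList)
{
   // iterate backwards so removals don't shift unvisited entries
   for(var i:int = list.Count - 1; i >= 0; i--)
   {
      // destroyed GameObjects compare equal to null via Unity's overloaded operator
      var go:GameObject = list[i] as GameObject;
      if(go == null)
         list.RemoveAt(i);
   }
}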

Rather than poll this list continuously, our game logic script only checks for an update when things may have changed–when something enters or exits our trigger (the same trigger, as both the generic tracker script and our game logic script are on the same object). We separate scripts like this in order to keep the tracking script generic and reusable between projects. It is easy for our gameplay script to ask the tracker script for all of the objects in our BuildingPieces layer, for instance:

// should we do a calculation next update?
var isDirty:boolean = false;

// cache our tracker reference
private var tracker:TriggerTrackByLayer;

function Awake()
{
   tracker = GetComponent(TriggerTrackByLayer);
}

/**
* Recalculate status whenever an object enters or exits
*/

function OnTriggerEnter()
{ 
   isDirty = true;
}
function OnTriggerExit()
{
   isDirty = true;
}

/**
* Check for dirty every FixedUpdate (after OnTrigger* calls)
*/

function FixedUpdate()
{
   if(isDirty)
   {
      // do our game logic, ie:
      var objectsInside = tracker.GetObjects(someLayerNumber);
   }
   
   // unset to prevent logic every frame
   isDirty = false;
}

We use another variant that only tracks objects that match a certain LayerMask, which lets you avoid the overhead of tracking all objects in and out. The example project includes both scripts.
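
For reference, the LayerMask-filtered variant only needs an extra early-out at the top of OnTriggerEnter and OnTriggerExit; a hedged sketch, where layerMask is an exposed LayerMask variable (an assumption, not part of the script above):

// skip colliders whose layer isn't in our mask
if((layerMask.value & (1 << other.gameObject.layer)) == 0)
   return;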

By the way, visualizing the status of your triggers via Gizmos is a very useful debugging tool. In the Crane Wars example, you can draw box colliders like:

/**
* Debug display of our trigger state
*/

function OnDrawGizmos()
{
   // set to whatever color you want to represent
   Gizmos.color = Color.green;
   
   // we’re going to draw the gizmo in local space
   Gizmos.matrix = transform.localToWorldMatrix;   
   
   // draw a box collider based on its size
   var box:BoxCollider = GetComponent(BoxCollider);
   Gizmos.DrawWireCube(box.center, box.size);
}

Trigger Tips and Caveats

This article mentions a few already, but to recap:

  • Other triggers will “collide” with triggers! Use this for invisible triggers that don’t collide with your actual physics (spawn point triggers, etc).
  • Triggers do respond to raycasts! Make sure your triggers are set to ignore raycasts, unless you really want to raycast against them.
  • An object destroyed inside of a trigger will not fire OnTriggerExit. If you track objects be wary of this. The only sure way to count objects currently inside of a trigger is to use OnTriggerStay
  • Triggers are fast! Use them! Even in non-physical games you will see speed increases. For example, you could have a tower defense game where your enemies are kinematic rigidbodies and turrets track targets in range using triggers.
  • There is a penalty for moving static colliders (a Collider with no Rigidbody component). If you want to move your trigger around, add a Rigidbody and set it to be kinematic.
  • Triggers are great on the iPhone, since everything happens in highly optimized PhysX. You’d be surprised.

Example Project

The above-mentioned scripts are included in a quick example project (so don’t worry about copying and pasting off the post). Download the example project and check them out!

Other Uses?

How have you guys been using triggers? Share your own tips and tricks in the comments!


All Hail Camera.RenderWithShader

“one day there was no internet in the office, so I did not know what bugs to fix… so I played around with this instead :)” – Aras Pranckevičius

That day of internet outage was probably one of my happiest days using the Unity3D engine. On that fateful day Camera.RenderWithShader was born, and it went on to become the backbone of almost all of our post-processing effects. The best way to explain the difference it makes in our projects is to give examples of how we handled post-processing effects before and after replacement shaders.

Jetpack Brontosaurus

What We Did

We haven’t posted a graphics postmortem for Jetpack Brontosaurus yet. This post will simplify the techniques we used in order to demonstrate how we accomplished the “multiple dimension” effect. Each object in Jetpack Brontosaurus has at least three renderers associated with it. One for the nightmare/death dimension, one for the dream/living dimension, and one for the mask that determines which dimension gets rendered to the screen. The renderers are divided into each dimension by using different layers. We have a different camera for each dimension. One camera is set to render all the renderers in the nightmare dimension, one camera is set for the dream dimension, and one for the mask. These cameras output their renders to separate render textures. Then we combine the two dimensions based on the color in the mask.

The implementation required each of the three renderers to have its own game object, material, and layer. As a result, the complexity of the scenes increased, producing redundant information and human errors.

What We Could Have Done

If we had the ability to use replacement shaders during Jetpack Brontosaurus production, everyone’s life would have been easier. Most of the renderers in Jetpack Brontosaurus used a vertex color shader, and because they all shared the same shader, switching to a replacement shader would have been an easy change.

An important thing to note here is that all of the shaders below use the same tag Key/Value pair.

The dream replacement shader:

Shader "Bronto/Dream Replace" {
   SubShader {
      Tags {"RenderEffect"="Multidimensional"}
      Pass {
         ColorMaterial AmbientAndDiffuse
         Lighting Off
         SetTexture [_DreamTex] {
            Combine texture * primary, primary
         }
      }
   }
}

The death replacement shader:

Shader "Bronto/Death Replace" {
   SubShader {
      Tags {"RenderEffect"="Multidimensional"}
      Pass {
         ColorMaterial AmbientAndDiffuse
         Lighting Off
         SetTexture [_DeathTex] {
            Combine texture * primary, primary
         }
      }
   }
}

The mask replacement shader:

Shader "Bronto/Mask Replace" {
   SubShader {
      Tags {"RenderEffect"="Multidimensional"}
      Pass {
         Lighting Off
         Color [_MaskColor]
      }
   }
}

Now we have one shader that holds texture and color information for all of the dimensions in a single material.  No need for more than one renderer per object anymore!

The shader used by the material for the renderers (this shader is also used for rendering the object to the scene view):

Shader "Bronto/Multidimensional Object" {
   Properties {
      _DreamTex ("Dream Dimension Texture", 2D) = "white" {}
      _DeathTex ("Death Dimension Texture", 2D) = "white" {}
      _MaskColor ("Mask Color", Color) = (1,0,0,1) // Alpha used for interpolating between the two textures in the scene view
   }
   SubShader {
      Tags {"RenderEffect"="Multidimensional"}
      Pass {
         ColorMaterial AmbientAndDiffuse
         Lighting Off
         SetTexture [_DreamTex] {
            Combine texture
         }
         SetTexture [_DeathTex] {
            constantColor [_MaskColor]
            combine previous lerp(constant) texture
         }
         SetTexture [_DeathTex] {
            combine previous * primary
         }
      }
   }
}

For in-game rendering, the material shader is never used; the only shaders used are the replacement shaders.

Now we have a script on the scene’s camera that renders the scene with each replacement shader and then composites them. The scene’s camera should be set to render nothing in its culling mask.

#pragma strict
@script ExecuteInEditMode
@script RequireComponent (Camera)

// The culling mask that should be used for rendering
var cullingMask : LayerMask;

// The replacement shaders
var dreamReplacementShader : Shader;
var deathReplacementShader : Shader;
var maskReplacementShader : Shader;

// The magic composite material
var dimensionCompositeMaterial : Material;

// The render textures for each dimension
private var dreamRT : RenderTexture;
private var deathRT : RenderTexture;
private var maskRT : RenderTexture;

// The game object holding the camera that renders the replacement shaders (don't access this directly, use GetPPCamera())
private var ppCamera:GameObject;

/**
* Handle any needed pre processing
*/

function OnPreCull() {
   // Start from nothing
   CleanRenderTextures();
   
   // Reference to ppCamera’s camera
   var cam:Camera = GetPPCamera();
   
   // Set up camera
   cam.CopyFrom(this.camera);
   cam.cullingMask = this.cullingMask;
   cam.clearFlags = CameraClearFlags.Skybox;
   cam.backgroundColor = Color(0.0,0.0,0.0,0.0);
   
   // Render Dream Dimension
   dreamRT = RenderTexture.GetTemporary(Screen.width, Screen.height, 16);
   cam.targetTexture = dreamRT;
   cam.RenderWithShader(this.dreamReplacementShader, "RenderEffect");
   
   // Render Death Dimension
   deathRT = RenderTexture.GetTemporary(Screen.width, Screen.height, 16);
   cam.targetTexture = deathRT;
   cam.RenderWithShader(this.deathReplacementShader, "RenderEffect");
   
   // Render Mask Dimension
   maskRT = RenderTexture.GetTemporary(Screen.width, Screen.height, 16);
   cam.targetTexture = maskRT;
   cam.RenderWithShader(this.maskReplacementShader, "RenderEffect");
}

/**
* Post Processing magic
* @param source
* @param destination
*/

function OnRenderImage(source:RenderTexture, destination:RenderTexture)
{
   // We do nothing with the source render texture, the camera didn’t do anything to it anyway!
      
   // Magically composite the render textures together into the final render
   // The shader used in the dimensionCompositeMaterial for compositing these textures is outside the scope of this post
   // Will have a post about CG full screen post processing effects sometime in the future
   RenderTexture.active = destination;
   dimensionCompositeMaterial.SetTexture("_DreamRender", dreamRT);
   dimensionCompositeMaterial.SetTexture("_DeathRender", deathRT);
   dimensionCompositeMaterial.SetTexture("_MaskRender", maskRT);
   GL.PushMatrix ();
      GL.LoadOrtho ();
      for (var i:int = 0; i < dimensionCompositeMaterial.passCount; i++) {
         dimensionCompositeMaterial.SetPass (i);
         DrawQuad();
      }
   GL.PopMatrix ();
   
   // Clean up
   CleanRenderTextures();
}

/**
* Cleanup if we get disabled
*/

function OnDisable()
{
   CleanResources();
}

/**
* Camera that renders the replacement shaders
* ppCamera getter
* @return
*/

private function GetPPCamera():Camera
{
   // Create the shader camera if it doesn’t exist yet
   if(!ppCamera) {
      ppCamera = new GameObject("PPCamera", Camera);
      ppCamera.camera.enabled = false;
      ppCamera.hideFlags = HideFlags.HideAndDontSave;
   }
   
   return ppCamera.camera;
}

/**
* Cleanup all resources used for Post Processing
*/

private function CleanResources()
{
   if(ppCamera)
   {
      DestroyImmediate(ppCamera);
   }
   CleanRenderTextures();
}

/**
* Cleanup Temporary RenderTexture resources
*/

private function CleanRenderTextures()
{
   if(deathRT != null) {
      RenderTexture.ReleaseTemporary(deathRT);
      deathRT = null;
   }
   if(dreamRT != null) {
      RenderTexture.ReleaseTemporary(dreamRT);
      dreamRT = null;
   }
   if(maskRT != null) {
      RenderTexture.ReleaseTemporary(maskRT);
      maskRT = null;
   }
}
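
One piece the listing above leaves out is DrawQuad().  A minimal sketch of a full-screen quad drawn with the GL class, assuming GL.LoadOrtho() has already been called (as it is in OnRenderImage above):

private function DrawQuad()
{
   GL.Begin(GL.QUADS);
      // ortho space runs 0..1 in x and y; z just needs to sit inside the clip range
      GL.TexCoord2(0.0, 0.0); GL.Vertex3(0.0, 0.0, 0.1);
      GL.TexCoord2(1.0, 0.0); GL.Vertex3(1.0, 0.0, 0.1);
      GL.TexCoord2(1.0, 1.0); GL.Vertex3(1.0, 1.0, 0.1);
      GL.TexCoord2(0.0, 1.0); GL.Vertex3(0.0, 1.0, 0.1);
   GL.End();
}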

Blush

Glow

The Pro Standard Assets package that comes with Unity has a simple glow effect. Their implementation uses the alpha channel of the destination render texture to decide where to render glow. This limits us in a couple of ways: we cannot have multicolored glow effects, and we cannot use the alpha channel for anything else. We decided to just bite the bullet and have a separate render texture for rendering glow.

Now we’ve freed up the destination render texture’s alpha channel for something else to use, and we can have glow in any color we like. Beyond the additional memory needed for a 32-bit render texture, there is one large disadvantage to doing it our way: glow is no longer occluded by other geometry. A workaround is to have objects that you don’t want to glow render black into the glow render texture; since glow is an additive pass, anything that is black does nothing to the original image.

The Glow Replace Shader:

Shader "Blush/Glow Replace" {
   SubShader {
      Tags { "RenderEffect"="Glow" }
      Pass {
         Fog { Mode Off }
         Color [_Glow_Color]
      }
   }
}

For every object we wanted to glow, we added two things to the object’s original shader:
- Tag: "RenderEffect" = "Glow"
- Property: _Glow_Color ("Glow Color", COLOR) = (1,1,1,1)
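
The camera-side code for Blush isn’t shown here, but it follows the same pattern as the Jetpack Brontosaurus script above.  A hedged sketch of rendering the glow buffer (glowRT and cam are placeholder names):

// clear to black so non-glowing objects contribute nothing to the additive pass
glowRT = RenderTexture.GetTemporary(Screen.width, Screen.height, 16);
cam.targetTexture = glowRT;
cam.clearFlags = CameraClearFlags.SolidColor;
cam.backgroundColor = Color(0.0, 0.0, 0.0, 0.0);
cam.RenderWithShader(Shader.Find("Blush/Glow Replace"), "RenderEffect");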

Distortion

Distortion is handled in a few steps:
1. Render the scene to a render texture.
2. Render a two-dimensional “normal” map to a render texture.
3. Draw the scene’s render texture to the screen, offsetting each texel’s texture coordinate by the amount specified by the “normal” map texture.

Blush used a constantly oscillating full-screen normal map to distort the scene; this distortion helped establish an underwater feel. We used replacement shaders to render directly into this normal map to further modify the distortion. The artists used these shaders on particles to provide distortion effects for fast-moving tentacles, enemies, and other elements.

The Particle Distortion Replace Shader:

Shader "Squiddy/Post Processing/Distortion Replace" {
   Properties {
      _BumpMap ("Bump (RGB)", 2D) = "bump" {}
   }
   SubShader {
      Tags { "RenderEffect"="Distort" }
      Pass {
         Lighting Off
         ZWrite Off
         Blend SrcAlpha OneMinusSrcAlpha
         BindChannels {
            Bind "Color", color
            Bind "Vertex", vertex
            Bind "Texcoord", texcoord
         }
         SetTexture[_BumpMap] {combine texture, texture * primary}
      }
   }
}

UnityDevelop, An Editor for Unity JavaScript

Choosing a language for Unity development is a tricky thing.  There are a lot of great reasons to use C# (like its tool ecosystem), but other, just-as-great reasons to use JavaScript (like its less verbose and more accessible nature).  The Unity forums have endless debates about language choice.

The availability of Intellisense with Visual Studio is a huge reason to use C#.  However, we liked the simplicity of Unity’s JavaScript, its similarity to Flash’s ActionScript, and some of its automagical features like compiler support for yield.  Ultimately we decided to use UnityScript here at Flashbang, and modified FlashDevelop to provide an editor environment with code completion.  We use FlashDevelop for our Flash work already, so this made sense.


Watch this in HD!

We’ve been using this tool internally for over a year now.  It’s still a hack–so your mileage may vary–but we’ve been quite happy with it!  It’s taken us a while to scrounge together a release, but here it is:

Installing UnityDevelop

1) Get Windows.  UnityDevelop is a modified FlashDevelop.  Unfortunately, FlashDevelop is Windows-only, which means you’ll need to find some way to run Windows alongside your Mac (unless you’re using Unity 2.5 by now, in which case you’re probably golden)!  We recommend using Synergy to share a keyboard/mouse between both monitors, or virtualizing Windows entirely.

If you virtualize Windows, seriously look into a stripped-down version of XP.  You can use nLite to do this yourself, or you can download a pre-made ISO if you don’t mind dipping into the shadier areas of the Internet.  This will reduce the memory footprint of XP tremendously (the popularity of Netbooks means there are endless tutorials and pre-made variants available).  If you run multiple monitors, you probably don’t want to run the Coherence/Unity feature, which can be slow with a ton of desktop real estate.  We run Parallels in windowed mode.

2) Download UnityDevelop (2.9 MB).

3) You probably want the classes for Unity 2.6 (replace the files in your UnityDevelop\Classes directory).

4) Unzip, and copy the UnityDevelop directory to “C:\Program Files”.  There are some hard-coded paths in here; apologies if you organize your apps differently!

Creating a Project

UnityDevelop works best if you have your scripts in a single, scripts-only directory to begin with.  Open up UnityDevelop, and go to Project->Create Project.  We directly access our Unity project files via a network share; this is fine.

Select “Empty Project” as your template, give it a name, and browse to your scripts directory.  This will look like:

A “project” is just a pointer to a directory.  If you add new files in Unity, they will show up here.  If not, click the refresh icon in the “Project Explorer” pane, or, in a worst-case scenario, just restart UnityDevelop.  In general it’s easiest to make new files in UnityDevelop directly.  The same restrictions apply here, though–make sure you do all of your script renames in Unity itself (if you rename outside of Unity it’ll look like one file was deleted and the other freshly created).

Code Editing

Now, edit away!  You’ll get autocompletion for built-in Unity scripts, as well as your own files.

Hooray!

Tips, Tricks, and Questions

Changing the Font

Our default settings file uses Consolas, which is a great programming font.  If you want to change the font, close UnityDevelop, edit Settings\ScintillaNET.xml, and restart.

Project-Wide Search

CTRL-I is the shortcut to find in project.  It populates the search terms with your selection–you can easily select something, hit CTRL-I, enter, and see results immediately.  Nice!

Goto Declaration

F4 will go to the declaration of a function.  You’ll end up in the intrinsic class files for anything Unity-specific.

Editing Into Two Directories

If you get crafty with symlinks you can edit into your “Scripts” directory and the “Editor” directory.  On your Mac, create an empty folder, and then create symlinks into your project folder for both.  Create the UnityDevelop project file in this new directory.  If none of this makes sense, don’t worry (Pro Tip:  Use a network share to access your Mac; this won’t work with VMWare/Parallels built-in sharing).

I Can’t Get It to Work.  Can You Help Me?

Honestly, no, we can’t.  Sorry!  It took us forever just to release this, and all we had to do was zip up some files and write this post.  We don’t have time to support it, so you’re kind of on your own.

I Added Some New Stuff.  Do You Guys Want It?

Yes please!  It would be awesome to see this grow.  Just drop us an email! FlashDevelop 3 is almost done, too, if someone wants to take a stab at making the same hacks.

Here is the source code, by the way.


A Cocoa-Based Frontend For Unity iPhone Applications

I spent about 3 months at the end of 2008 knee-deep in Unity iPhone — first testing the beta and then working with the release version. I spent tons of time just playing with it, learning its capabilities and how to optimize for it. That’s a whole other post though, which I’ll get to sometime in the future. For now I want to talk about the Cocoa frontend that I developed for all our Unity iPhone games.

Why use a Cocoa frontend?

We wanted a way to allow players to login using their Blurst user id in our iPhone games, but Unity iPhone doesn’t yet support the iPhone keyboard. We could have simply used the device id to let users pair their account via the webpage, but I wanted a more elegant solution. Furthermore, after working on iSplume (which we coded entirely using Objective-C), I found that I could make menus much faster in Apple’s Interface Builder than in UnityGUI. Adam and I planned a fair number of menus in Rebolt, so I wanted a way to make them in Cocoa/Interface Builder.

So I set a goal: Make an easily extensible Cocoa frontend for Unity iPhone that supports Blurst logins and supports any menus we might want. It should work for any project we add it to, so we don’t have to do tons of custom code for every game. Further, it should require changing as little of ReJ’s existing Objective-C AppController code as possible, in the event that it changed in a later build. Finally, I wanted an easy way to add my additional files to the XCode project once I created a build. This is particularly important because, to maintain rapid iteration times, there must be a minimal amount we have to do in XCode between creating a build and installing that build on the phone.

Basic Architecture

The basic idea is that we write our own UIApplication delegate to replace AppController, and then forward events like applicationWillTerminate: to the existing AppController once we’ve started the Unity content. We’ll also keep a loop running in the background once the Unity content is created that checks the PlayerPrefs file. We’ll use this to get back to the menus from within the Unity content. Finally, we’ll organize the Cocoa content in such a way that we can easily add it to the XCode project that Unity iPhone spits out. We’ll use a PostprocessBuildPlayer Perl script to accomplish this.

Writing our own UIApplication delegate

Our UIApplication delegate needs to do a few different things. First, we want to handle the usual application event callbacks — applicationDidFinishLaunching:, applicationWillResignActive:, etc. Unless we’re particularly interested in some event, or we’re doing something more complex than just menus, we will just forward most of these messages to ReJ’s AppController. The notable exception is applicationDidFinishLaunching:, which we will use to launch our frontend and add a scheduled timer to the app’s run loop that will listen for menu return requests.

We’ll also want a few functions for switching between Unity and Cocoa content. We’ll create launchFrontend, cleanupFrontend, and launchUnity methods to handle switching content. We’ll also create a checkForReturnToMenu: method that our scheduled timer will call regularly. This will read the PlayerPrefs file and, if it finds a specific key, hide the Unity content and re-launch our frontend.

Here’s a zipped copy of the default Flashbang UIApplication delegate files — download it and follow along as I describe the various sections.
flashbangfrontend.zip

First, we’ll take a look at the header file. Note that we explicitly adopt the UIApplicationDelegate protocol. We also keep references to the application window and the Unity AppController.

#import "FBGameSettings.h"
#import "FBScene.h"
#import "FBSceneManager.h"
#import "FBSceneSetup.h"
#import "AppController.h"

// Integer tag that we use to distinguish the Unity view
#define UNITY_VIEW_TAG 1

@interface FBFrontendController : NSObject <UIApplicationDelegate>
{
   // A local reference to the app’s window
   UIWindow *window;
   // ReJ’s original AppController that runs Unity
   AppController *unityController;
}

- (void)checkForReturnToMenu:(NSTimer *)timer;
- (void)launchFrontend;
- (void)launchUnity;
- (void)cleanupFrontend;

@end

The specific details of the headers I’m importing are mostly unimportant. FBGameSettings.h is just some game-specific #defines (version number, etc). FBScene is a subclass of UIViewController and represents a generic menu scene. Specific scenes needed by an individual game are subclasses of this. FBSceneManager keeps a hash table of all scenes and handles transition animations between them. We’ll take a closer look at FBSceneSetup.h later.

Starting the Application

Now let’s take a look at the implementation of FBFrontendController. We’ll look at the applicationDidFinishLaunching: method first.

- (void)applicationDidFinishLaunching:(UIApplication *)application
{
   [application setStatusBarHidden:YES animated:NO];

   // Clear keys that signal transitions back and forth from
   // Unity, just in case
   [FBPlayerPrefs deleteKey:@"_start_cocoa"];
   [FBPlayerPrefs deleteKey:@"_start_unity"];

   // reset blurst logged in status
   [FBPlayerPrefs deleteKey:@"blurst_online"];

   // Start listening for signal to return to menus
   [NSTimer scheduledTimerWithTimeInterval:1.0 target:self
      selector:@selector(checkForReturnToMenu:)
      userInfo:nil repeats:YES];
   [self launchFrontend];
}

Here, we first delete the two PlayerPrefs keys we’ll use to communicate that we want to swap between Cocoa and Unity — _start_cocoa and _start_unity. This ensures that we know their initial states. Note: Unity iPhone stores PlayerPrefs using NSUserDefaults. FBPlayerPrefs is just a wrapper for NSUserDefaults that behaves like the PlayerPrefs class in Unity. We then begin a timer that runs our checkForReturnToMenu: method once per second. A shorter delay here means faster responsiveness for opening the frontend from within Unity, while a longer delay will give better performance. Finally, we run our launchFrontend method. Here’s the checkForReturnToMenu: method.

- (void)checkForReturnToMenu:(NSTimer *)timer
{
   if([FBPlayerPrefs getInt:@"_start_cocoa" orDefault:0] == 1)
   {
      [FBPlayerPrefs deleteKey:@"_start_cocoa"];
      [self launchFrontend];
   }
}

This simply checks for the proper key in PlayerPrefs and then launches the frontend if it finds it.

Launching Our Frontend

Here’s the launchFrontend method, which we call whenever we want to display our Interface Builder-constructed menu system:

- (void)launchFrontend
{
   // re-sync the PlayerPrefs file, in case we’ve been in Unity
   [FBPlayerPrefs readPrefsFile];

   // Create the window if we don’t already have one
   if([UIApplication sharedApplication].keyWindow == nil)
   {
      window = [[UIWindow alloc] initWithFrame:[[UIScreen
         mainScreen] bounds]];
      [window makeKeyAndVisible];
   }

   // Check to see if any views are exclusive/multitouch (ie find
   // the Unity EAGLView). Temporarily disable it, and tag it
   // so we can find it later
   for(UIView *v in window.subviews)
   {
      if(v.exclusiveTouch && v.multipleTouchEnabled)
      {
         v.exclusiveTouch = NO;
         v.multipleTouchEnabled = NO;
         v.tag = UNITY_VIEW_TAG;
         v.hidden = YES;
      }
   }

   // load menu scenes
   LoadScenesInWindow(window);

   // start the first scene
   [FBSceneManager startScene:FIRST_SCENE
      withTransition:FBSceneTransitionNone];
}

Here we create the application window if it doesn’t exist, we disable interaction with the Unity view if it’s present, then we load our custom views in the window. Finally, we start our first scene using the scene manager. LoadScenesInWindow is a function defined in FBSceneSetup.h — we’ll take a quick look at that.

#define FIRST_SCENE @"Title"  // The string key for our first scene

// load in game-specific scenes and define function to load them
#import "SceneCredits.h"
#import "SceneOptions.h"
#import "SceneTitle.h"
#import "SceneGame.h"

void LoadScenesInWindow(UIWindow* window)
{
   [window addSubview:[[[SceneCredits alloc] init] view]];
   [window addSubview:[[[SceneOptions alloc] init] view]];
   [window addSubview:[[[SceneTitle alloc] init] view]];
   [window addSubview:[[[SceneGame alloc] init] view]];
}

Recall that each SceneXXXX is a subclass of FBScene, which is a subclass of UIViewController. The init method of each SceneXXXX first calls initWithNibName: with the name of each scene’s Interface Builder xib, then adds the scene to the scene manager with an appropriate string-based key. The LoadScenesInWindow function thus initializes each view controller and adds its view to the window. The idea behind this architecture is that for any given project, we only have to edit FBSceneSetup.h, and then create the appropriate SceneXXXX subclasses to manage each of our scene xibs. That is, FBSceneController will be the same for every project.

Launching Unity

Now we’ll take a look at how we launch the Unity content.

- (void)launchUnity
{
   [self cleanupFrontend];
   // Tell Unity loading scene to stop holding
   [FBPlayerPrefs setInt:1 withKey:@"_start_unity"];
   [FBPlayerPrefs saveAndUnload];

   // If we’ve not yet started the Unity content, run its startup
   if(unityController == nil)
   {
      unityController = [[AppController alloc] init];
      [unityController applicationDidFinishLaunching:
         [UIApplication sharedApplication]];

      // Set our window to the one created by ReJ’s AppController,
      // for any future use
      window = unityController->_window;
   }

   // If we’ve already got Unity content running, show it and
   // return its Exclusive/multitouch status
   else
   {
      for(UIView *v in window.subviews)
      {
         if(v.tag == UNITY_VIEW_TAG)
         {
            v.exclusiveTouch = YES;
            v.multipleTouchEnabled = YES;
            v.hidden = NO;
            [window bringSubviewToFront:v];
         }
      }
   }
}

We start out by cleaning up the frontend (which just tells the scene manager to remove its views from the window and releases them). We then set the _start_unity PlayerPrefs key, so that our loaded Unity content will know we want it to start executing. After that, there are two possible codepaths. The first time we call the method, it will call AppController’s applicationDidFinishLaunching: method, and then point our local window reference to the one created by AppController. Once the Unity content is initialized (if we’ve come back to the menu and then want to return to Unity), we find our Unity view by the tag we set in launchFrontend, return it to the front, and re-enable interaction with it.

We’ll typically want to run this method in response to a button press in some view. Here’s an example use:

- (IBAction)playButtonPressed:(id)sender
{
   [(FBFrontendController *)[UIApplication
      sharedApplication].delegate launchUnity];
}

Using the Frontend From Unity

To get back to the frontend from Unity, we just need to set the _start_cocoa PlayerPrefs key. Since the Unity content will continue running in the background, you’ll also want to pause your game and have a loop continue to check for the _start_unity PlayerPrefs key.
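
On the Unity side, that loop could look something like the coroutine below.  This is a hedged sketch rather than code from our games; how you actually pause gameplay is up to you:

function ReturnToMenu()
{
   // signal the Cocoa side; its one-second NSTimer will pick this up
   PlayerPrefs.SetInt("_start_cocoa", 1);

   // pause your gameplay here, then poll until the frontend hands control back
   while(PlayerPrefs.GetInt("_start_unity", 0) != 1)
      yield;

   PlayerPrefs.DeleteKey("_start_unity");
   // resume gameplay here
}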

Putting It All Together — Project Organization and PostprocessBuildPlayer

So this frontend stuff is all well and good, but it would be a royal pain in the ass if we had to copy the files into the project manually and edit ReJ’s files every time we made a build. But, as usual, it’s Perl to the rescue!

We’ll first create a directory inside our project that will contain all of our frontend files. We’ll set it up so that we can also replace the application icon and splash screen in the same pass. First, create an “XCode” directory within the Unity project. Any files that you want overwritten in the default XCode project should be in the same locations with the same names. So for instance, we’ll add Icon.png and Default.png to the root of the XCode directory. Put anything that you’re adding to the project (all the frontend files) into a separate “Frontend” sub-directory. So your tree should end up something like this:

XCode Directory Tree

Now we’ll take a look at the PostprocessBuildPlayer script. If you aren’t familiar with it, check the Build Player Pipeline section of the Unity Manual. Here’s my script, written in Perl:

#!/usr/bin/perl

#################################################################
# Build Player postprocessor for Unity iPhone projects. Injects #
# FBS Frontend into generated XCode project                     #
#################################################################

# Path for assets that will get added to the XCode project.
# Relative to the project root directory.
$iPhoneAssetPath = "./Assets/XCode/";
$toPath = $ARGV[0];

##########################################################
# Copy iPhone assets from Unity project to XCode project #
##########################################################

opendir(XCODEDIR, $iPhoneAssetPath) || die(“Cannot open directory $iPhoneAssetPath”);
@files = readdir(XCODEDIR);
closedir(XCODEDIR);

# copy files from Unity iPhoneAssetPath to the generated XCode project
foreach $file (@files)
{
   # kind of a lazy hack
   unless(($file eq ".") || ($file eq ".."))
   {
      #`echo $file > log.txt`;
      $fromPath = $iPhoneAssetPath.$file;
      `cp -R '$fromPath' '$toPath'`;
   }
}

################################################################
# Change default UIApplicationDelegate to FBFrontendController #
################################################################

$omPath = $toPath."/Classes/main.mm";
$nmPath = $toPath."/Classes/main.mm.tmp";

open OLDMAIN, "<", $omPath or die("Cannot open main.mm");
open NEWMAIN, ">", $nmPath or die("Cannot create new main.mm");

while(<OLDMAIN>)
{
   $_ =~ s/"AppController"/"FBFrontendController"/;
   print NEWMAIN $_;
}

close OLDMAIN;
close NEWMAIN;

`mv "$nmPath" "$omPath"`;

#################################################
# Make _window variable in AppController public #
#################################################

$oacPath = $toPath."/Classes/AppController.h";
$nacPath = $toPath."/Classes/AppController.h.tmp";

open OLDAC, "<", $oacPath or die("Cannot open AppController.h");
open NEWAC, ">", $nacPath or die("Cannot create new AppController.h");

while(<OLDAC>)
{
   if($_ =~ m/UIWindow.*window/)
   {
      print NEWAC "\t\@public\n";
   }
   print NEWAC $_;
}

close OLDAC;
close NEWAC;

`mv "$nacPath" "$oacPath"`;

As you can see, this script does three things — copy all the files from our XCode directory to the built XCode project, change the UIApplicationDelegate in main.mm to FBFrontendController, and make the _window variable of AppController public. So we’ve managed to implement our own frontend by changing only two lines of existing code!

Building the Project

I know of no way to add files to an XCode project via the command line, so you will still have to add the files to the project manually after building. However, because of the way we’ve organized the project, this is relatively simple. It does mean that “Build and Run” will no longer work as a one-click solution.

Build the project in Unity and open the generated project in XCode. Select Project -> Add To Project… (Cmd + Option + A), and select the “Frontend” directory. In the next dialog, choose “Recursively create groups for added folders”:

Add files to project dialog

That’s it! You’ll now have the references needed to run your custom frontend before the Unity content — simply build and run in XCode!

Unity Basics: An Introduction

Unity 2.5’s release is finally on the horizon, which means a Windows editor!  In addition to some great new workflow and editor features, 2.5 will also usher in a wave of Windows users.  There is a lot to learn when you first encounter Unity, so we’re going to do a series of Unity Basics posts introducing some of the core concepts.  This first post answers the question:

What is Unity?

Unity’s scope makes concise definition difficult.  Unity is a lot of things, and it’s used differently by different disciplines, but here’s one breakdown. Unity is:

An Integrated Editor

Unity provides an editing environment where you organize your project assets, create game objects, add scripts to these objects, and organize objects and assets into levels.  Most importantly, Unity provides a “game” view for your content.  You can hit play and interact with your content while you watch values, change settings, and even recompile scripts.

The IDE is largely stateless, in that there is little distinction between creating your levels and playing them.  For example, the editor remains functionally identical whether your content is stopped or currently playing.  This is hugely useful, because while your content is playing you can hit pause and then move things, create new objects, add scripts, and do whatever else you need to test gameplay or chase down bugs.

Different team members and disciplines use the editor differently.  Here at Flashbang, artists use the editor to smoke test new asset imports, arrange assets into levels, and tweak textures and other visuals.  A programmer may focus more on watching values and tweaking numbers.  A unified interface helps us tremendously; we don’t have people using different tools with different interfaces and workflow conventions.

A Component Architecture Paradigm

Unity utilizes a component-based architecture.  You could ignore this in creating your game logic, but you will suffer without a clear understanding of Unity’s design.  In Unity, every object in your scene is a GameObject.  An arbitrary number of Components are attached to GameObjects to define their behavior.

For example, a physical crate might be:

  • GameObject
    (name, layer, tags)

    • Transform
      (position, rotation, scale, parent)
    • Mesh Renderer
      (actually draws the object)
    • Box Collider
      (define collision volume)
    • Rigidbody
      (movable physics object)

Here’s the key:  When you create a script, you create a component. For example, you could create a Jump.js script to make your cube jump when you press a key, like:

var strength:float = 30.0;

function Update()
{
   if(Input.GetKeyDown("space"))
      rigidbody.AddForce(Vector3.up * strength);
}

JavaScript hides some of the details, but what’s happening here is you’ve created a new Jump Component.  Your script implicitly inherits from MonoBehaviour, which inherits from Behaviour, which inherits from Component.  You now have a new component, which you can easily add to your crate!
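
You can attach it by dragging the script onto the crate in the editor, or add it from another script at runtime.  Here’s a quick illustrative sketch (the crate variable is hypothetical; you’d assign it in the Inspector):

// Hypothetical example: attach the Jump component to a crate at runtime.
var crate : GameObject;   // assign the crate GameObject in the Inspector

function Start()
{
   crate.AddComponent(Jump);
}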

Note:  We use JavaScript at Flashbang, for a variety of reasons, but the steps are quite similar in C#.  The only real difference, aside from syntax, is that you need to explicitly define the inheritance:

using UnityEngine;
using System.Collections;

public class Jump : MonoBehaviour
{
   public float strength = 30.0f;

   void Update ()
   {
      if(Input.GetKeyDown("space"))
         rigidbody.AddForce(Vector3.up * strength);
   }
}

A Game Engine

Unity is a fully-featured game engine.  It includes and exposes many systems needed for game creation, such as:

  • Graphics Engine
    Unity’s graphics engine includes a shader language, ShaderLab, which wraps Cg and GLSL shaders with additional engine semantics.
  • Physics Engine
    Unity uses NVIDIA PhysX as its physics engine, with editor and API integration (you set up collision volumes, joints, and the like by adding components to your GameObjects in the editor, and script physics with calls like Rigidbody.AddForce() and the MonoBehaviour.OnCollisionEnter() callback).  There’s a short sketch of this after the list.
  • Audio Engine
    Unity has a positional audio system.  You can play sounds in 3D space, or “2D” stereo sounds.
  • Animation System
    Unity includes an animation system, including support for animation layers, blending, additive animations and mixing, and real-time vertex/bone reassignment.

There are quite a few other out-of-the-box systems there to help you, too, like particle systems, networking, UnityGUI, and so on.
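
To give a feel for how a couple of these systems surface in script, here’s a small illustrative sketch in JavaScript (it assumes the object has a Collider, an AudioSource with a clip assigned, and a particle emitter attached, and that a Rigidbody is involved in the collision):

// Illustrative sketch: respond to a physics collision with positional audio and particles.
// Assumes an AudioSource (with a clip) and a ParticleEmitter live on this same GameObject.
function OnCollisionEnter(collision : Collision)
{
   audio.Play();              // play the attached clip at this object's position
   particleEmitter.Emit(20);  // burst of 20 particles from the attached emitter
}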

A Scripting Platform

Unity embeds Mono to power its scripting environment.  You can script in C#, JavaScript, or Boo (a Python-inspired language).  Mono itself is an open-source implementation of the .NET framework.  Note that this doesn’t mean Unity requires Microsoft’s .NET to be installed:  Mono is entirely separate from Microsoft’s implementation, and Unity embeds it completely.

It’s also worth noting that Unity’s use of Mono goes above and beyond the compiler and common language runtime.  You also get the full set of .NET class libraries, which means a huge number of classes are available to you out of the box:  XML parsing, cryptography, sockets, and more.
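
As a small, hedged example, here’s one way you might lean on the Mono class libraries from a Unity script (the XML string and node path are invented for illustration):

// Illustrative example: using System.Xml from the Mono class libraries in a Unity script.
import System.Xml;

function Start()
{
   var doc = new XmlDocument();
   doc.LoadXml("<settings><difficulty>hard</difficulty></settings>");
   Debug.Log(doc.SelectSingleNode("/settings/difficulty").InnerText);
}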

As I mentioned earlier, Flashbang uses JavaScript.  Unity’s JavaScript isn’t very similar to the JavaScript found in web browsers.  It’s about as close to web JavaScript as Flash’s ActionScript, which is to say not very close at all.  There are a number of advantages to using UnityScript, although there are far fewer tools available (we use a custom-created hack of FlashDevelop).  With the advent of the Windows release, I imagine many newcomers will choose C# and Visual Studio.  Lucas Meijer makes a very compelling case for C# scripting over at his Unity blog.

A Scripting API

Your Mono-powered scripts have full access to Unity’s engine through Unity’s scripting API.  The entirety of the engine is exposed, which means you can do pretty much anything.  High-level stuff is quite easy, but you can dig all the way down to mesh generation or direct OpenGL calls if you’d like (which even work on DirectX, thanks to Aras’ genius).

MonoBehaviour, the parent class for your scripts, provides a number of convenience members.  Things are usually quite straightforward.  For example:

  • transform.position = Vector3.zero;
  • rigidbody.AddForce(Vector3.up * 10);
  • renderer.enabled = false;
  • particleEmitter.Emit(10);

There are a number of callbacks provided for game logic purposes:

  • Start()
  • Update()
  • OnCollisionEnter()
  • OnMouseDown()

The scripting environment supports coroutines, which are magically useful for all kinds of things.  Want to destroy an object 3 seconds after it was hit by something?

function OnCollisionEnter()
{
   // do something
   yield WaitForSeconds(3.0);
   Destroy(gameObject);
}

In addition to scripting your game at runtime, Unity provides a powerful editor API for creating custom tools, windows, and shortcuts to expedite your workflow in the editor itself.  With Unity 2.5, the editor itself has been rewritten using this API (which means you should be able to do practically anything Unity’s own editor does).
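
As a taste, here’s a minimal, hypothetical editor extension in JavaScript (the menu path and message are made up).  It needs to live in a folder named “Editor” inside your project:

// Hypothetical editor script: adds a custom menu item to the Unity editor.
// Must be placed in an "Editor" folder so it compiles against UnityEditor.
@MenuItem("Tools/Say Hello")
static function SayHello()
{
   Debug.Log("Hello from a custom editor menu item!");
}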

More to Learn!

As you can see, Unity is quite broad.  This article should provide a good starting point for its top-level features, but even this overview hasn’t covered the whole of the software.

We’ll dive into more Unity features in depth in future Unity Basics articles.  We didn’t talk much about the Inspector (and how variables are serialized in Unity), or asset importing, or any number of other features.  What would you guys like to focus on?  Use the comments below to let us know!
