Unity 2.5’s release is finally on the horizon, which means a Windows editor! In addition to some great new workflow and editor features, 2.5 will also usher in a wave of Windows users. There is a lot to learn when you first encounter Unity, so we’re going to do a series of Unity Basics posts introducing some of the core concepts. This first post answers the question:
What is Unity?
Unity’s scope makes concise definition difficult. Unity is a lot of things, and it’s used differently by different disciplines, but here’s one breakdown. Unity is:
An Integrated Editor
Unity provides an editing environment where you organize your project assets, create game objects, add scripts to these objects, and organize objects and assets into levels. Most importantly, Unity provides a “game” view for your content. You can hit play and interact with your content while you watch values, change settings, and even recompile scripts.
The IDE is largely stateless, in that there is little distinction between creating your levels and playing them. For example, the editor remains functionally identical whether your content is stopped or currently playing. This is hugely useful: while your content is playing, you can hit pause and then move things, create new objects, add scripts, and do whatever else you need to test gameplay or chase down bugs.
Different team members and disciplines use the editor differently. Here at Flashbang, artists use the editor to smoke test new asset imports, arrange assets into levels, and tweak textures and other visuals. A programmer may focus more on watching values and tweaking numbers. A unified interface helps us tremendously; we don’t have people using different tools with different interfaces and workflow conventions.
A Component Architecture Paradigm
Unity utilizes a component-based architecture. You could ignore this in creating your game logic, but you will suffer without a clear understanding of Unity’s design. In Unity, every object in your scene is a GameObject. An arbitrary number of Components are attached to GameObjects to define their behavior.
For example, a physical crate might be:
- GameObject (name, layer, tags)
- Transform (position, rotation, scale, parent)
- Mesh Renderer (actually draws the object)
- Box Collider (defines the collision volume)
- Rigidbody (movable physics object)
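The same setup can also be assembled in script. Here’s a minimal sketch (the class name is illustrative; note that a Mesh Renderer needs a companion Mesh Filter to have mesh data to draw):

```csharp
using UnityEngine;

public class CrateSpawner : MonoBehaviour
{
    void Start ()
    {
        // Every new GameObject automatically gets a Transform.
        GameObject crate = new GameObject("Crate");

        crate.AddComponent(typeof(MeshFilter));   // holds the mesh data
        crate.AddComponent(typeof(MeshRenderer)); // actually draws the object
        crate.AddComponent(typeof(BoxCollider));  // collision volume
        crate.AddComponent(typeof(Rigidbody));    // movable physics object
    }
}
```

In practice you’ll usually build objects like this in the editor rather than in code, but the underlying model is the same either way: a GameObject plus a stack of components.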
Here’s the key: When you create a script, you create a component. For example, you could create a Jump.cs script to make your cube jump when you press a key, like:

```csharp
using UnityEngine;

public class Jump : MonoBehaviour
{
    public float strength = 30.0f;

    void Update ()
    {
        // Push the attached Rigidbody upward when the space bar is pressed.
        if (Input.GetKeyDown(KeyCode.Space))
            rigidbody.AddForce(Vector3.up * strength);
    }
}
```
A Game Engine
Unity is a fully-featured game engine. It includes and exposes many systems needed for game creation, such as:
- Graphics Engine
Unity’s graphics engine includes its own shader language, ShaderLab, which wraps Cg and GLSL shaders with additional engine semantics.
- Physics Engine
Unity uses NVIDIA PhysX as its physics engine, with both editor and API integration: you set up collision volumes, joints, and the like by adding components to your GameObjects in the editor, and script physics with calls like Rigidbody.AddForce() and callbacks like MonoBehaviour.OnCollisionEnter().
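As a small sketch of the scripting side (the class name is illustrative, and the script is assumed to sit on a GameObject with a Rigidbody and Collider attached):

```csharp
using UnityEngine;

public class PhysicsExample : MonoBehaviour
{
    void Start ()
    {
        rigidbody.mass = 2.0f;                       // tweak the attached Rigidbody
        rigidbody.AddForce(Vector3.forward * 10.0f); // give it a push on startup
    }

    // Engine callback fired when another collider hits this object.
    void OnCollisionEnter (Collision collision)
    {
        Debug.Log("Collided with " + collision.gameObject.name);
    }
}
```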
- Audio Engine
Unity has a positional audio system. You can play sounds in 3D space, or “2D” stereo sounds.
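For instance, a script on a GameObject with an AudioSource component can trigger a sound at that object’s position (the class name and key binding here are illustrative):

```csharp
using UnityEngine;

public class ImpactSound : MonoBehaviour
{
    public AudioClip clip; // assign in the Inspector

    void Update ()
    {
        // Plays through the attached AudioSource, positioned in 3D space.
        if (Input.GetKeyDown(KeyCode.P))
            audio.PlayOneShot(clip);
    }
}
```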
- Animation System
Unity includes an animation system, with support for animation layers, blending, additive animations and mixing, and real-time vertex/bone reassignment.
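A quick sketch of blending from script, assuming the GameObject has an Animation component with clips named "idle" and "run" (those names are illustrative):

```csharp
using UnityEngine;

public class RunToggle : MonoBehaviour
{
    void Update ()
    {
        // Cross-fade between clips over 0.3 seconds instead of snapping.
        if (Input.GetKey(KeyCode.W))
            animation.CrossFade("run", 0.3f);
        else
            animation.CrossFade("idle", 0.3f);
    }
}
```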
There are quite a few other out-of-the-box systems there to help you, too, like particle systems, networking, UnityGUI, and so on.
A Scripting Platform
It’s also worth noting that Unity’s use of Mono goes above and beyond the compiler and common language runtime. You also get the .NET class libraries, which means a huge number of classes are available to you out of the box: XML parsing, cryptography, sockets, and more.
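For instance, a script can reach straight into the .NET libraries with no Unity-specific API involved. A minimal sketch hashing a string with System.Security.Cryptography:

```csharp
using System.Text;
using System.Security.Cryptography;
using UnityEngine;

public class HashExample : MonoBehaviour
{
    void Start ()
    {
        // Plain .NET: compute an MD5 digest of a string.
        MD5 md5 = MD5.Create();
        byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes("hello"));

        // Format the bytes as a hex string.
        StringBuilder sb = new StringBuilder();
        foreach (byte b in hash)
            sb.Append(b.ToString("x2"));

        Debug.Log(sb.ToString());
    }
}
```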
A Scripting API
Your Mono-powered scripts have full access to Unity’s engine through Unity’s scripting API. They’ve exposed the entirety of the engine, which means that you can do pretty much anything. High-level stuff is quite easy and simple, but you can dig all the way down to mesh generation or direct OpenGL calls if you’d like (which also work under DirectX, thanks to Aras’ genius).
MonoBehaviour, the parent class for your scripts, provides a number of convenience members. Things are usually quite straightforward. For example:
- transform.position = Vector3.zero;
- rigidbody.AddForce(Vector3.up * 10);
- renderer.enabled = false;
There are a number of callbacks provided for game logic purposes, such as Update() (called every frame), FixedUpdate() (called at the physics timestep), and event callbacks like OnCollisionEnter() and OnTriggerEnter().
The scripting environment supports coroutines, which are magically useful for all kinds of things. Want to destroy an object 3 seconds after it was hit with something?
```csharp
using System.Collections;
using UnityEngine;

// Illustrative reconstruction: destroy this object 3 seconds after a hit.
public class DestroyOnHit : MonoBehaviour
{
    void OnCollisionEnter (Collision collision)
    {
        StartCoroutine(DestroyAfterDelay(3.0f));
    }

    IEnumerator DestroyAfterDelay (float seconds)
    {
        yield return new WaitForSeconds(seconds); // pause the coroutine here
        Destroy(gameObject);
    }
}
```
In addition to scripting your game at runtime, Unity provides a powerful editor API for creating custom tools, windows, and shortcuts to expedite your workflow in the editor itself. With Unity 2.5, the entire editor has been rewritten using this same API (which means you should be able to do practically anything they can).
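A minimal editor-extension sketch (the menu path and class name are illustrative; editor scripts live in a folder named "Editor"):

```csharp
using UnityEngine;
using UnityEditor;

public class SelectionLogger
{
    // Adds a "Tools/Log Selection" entry to the editor's menu bar.
    [MenuItem("Tools/Log Selection")]
    static void LogSelection ()
    {
        foreach (Object obj in Selection.objects)
            Debug.Log("Selected: " + obj.name);
    }
}
```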
More to Learn!
As you can see, Unity is quite broad. This article should provide a good starting point for its top-level features, but even this overview hasn’t covered the whole of the software.
We’ll dive into more Unity features in depth in future Unity Basics articles. We didn’t talk much about the Inspector (and how variables are serialized in Unity), or asset importing, or any number of other features. What would you guys like to focus on? Use the comments below to let us know!