
Opinion: The Six Misconceptions I Had About Unity

The Binary Refinery director Richard Fine describes the top six misconceptions he had about Unity 3D, including misunderstandings on the Scene View and game objects, in this <a href="http://altdevblogaday.org/">#altdevblogaday</a>-reprinted opinion piece.

Richard Fine, Blogger

May 25, 2011

15 Min Read

The Unity 3D toolkit is a great platform for development – but it can be quite confusing to a developer coming from a more traditional, 'code it all up' kind of background. I remember my first experiences with early (1.x) versions of Unity; launching it, I was confronted with something that looked like the level editors I was used to. How was I supposed to build an entire game in just a level editor? Was it visual scripting or something, like the Half-Life modding I was used to? How could it possibly be fast enough? Maybe it was like the 'Game Maker' packages I'd used – great as a toy, but a painful experience if you wanted to build an actual polished, shippable game?

The truth is that Unity is an incredibly powerful, versatile tool; however, for a person used to a more traditional development process, it can be a bit difficult to grasp at first. I've been working extensively with Unity for about eight months now, and I'm at a point where I'm really comfortable with it – to the extent that I now prototype all new ideas in Unity unless there's a really good reason not to – and I've started extolling its virtues to others. Frequently, I then see those people run into exactly the same culture shock I first experienced. So I thought I'd write down a few notes about the things I've learned – the things I wish someone had told me when I first started getting into it.

Misconception 1: It's All About the Scene View

By far the most visually impressive and dominating part of Unity – especially if you load it up with the example project that ships with it – is the Scene View, presenting one or more 3D views of the game in 'editing mode.' This is all very bright and exciting, and tends to be the first thing people want to interact with.
Anyone with a basic knowledge of 3D editing programs figures out how to fly the camera around pretty quickly, and starts clicking on things. It becomes very easy to assume that the Scene View is at the center of your development process – that everything will be drag and drop, and that code is a rare and distant sideline.

In practice, while the Scene View is important, it's by no means the center of the development process. It's highly significant when you're doing 'level editing' tasks – laying out the geometry that makes up your game world – and it also works as a nice intermediary environment for building up prefabs (more on those in a minute), but that is all. You'll still be writing code in a code editor, which you can then make use of in the Scene View, just as a traditional project would write code that the designers could then use in their level editor. Though I'm working with Unity as a development platform, I still spend most of my time in Visual Studio.

Misconception 2: Every Kind of Game Object Will Have a Distinct, Concrete Class

Unity has been my first substantial experience with a component-based architecture. Traditional games tend to have a concrete class for each kind of entity. Need a gun? Make a class for it, derived from 'PhysicalObject.' Need a special kind of gun that also fires grenades? Subclass your 'gun' class. One of the commercial games I worked on had an entity inheritance hierarchy that must have been at least 10 levels deep; each level added certain bits of functionality, like 'is renderable' and 'plays pre-recorded animations' and 'generates physics impulses' and 'is considered friendly to the player.'

Inheritance-heavy architectures tend to be very popular with object-orientation newbies; they grok the meaning of an 'is-a' relationship, and then build massive towers out of it. More savvy – or more jaded – software architects know the value of composition over inheritance. It's a lot less fragile.
Instead of saying that an enemy 'is-a' skinned, animated mesh, they say that an enemy 'has-a' skinned, animated mesh. Instead of the HugeSpiderEnemy class inheriting from SkinnedMesh, it has a SkinnedMesh member variable. Following this approach, those towering inheritance hierarchies turn into fairly simple classes that just contain a bunch of relevant components – a SkinnedMesh, an AnimationPlayer, a PhysicsInteractor, and so on – with a bit of glue in the class itself to keep all the different bits in sync. There's often still a bit of inheritance, as a neat way to encapsulate the glue code without sacrificing type interchangeability, but it's shallow. This makes it much easier to swap systems in and out, and removes the risk of changes in one system percolating down through everything via inheritance.

Unity takes this composition-based approach to its logical conclusion by having you build all your game objects entirely using composition – even the 'glue code.' Every game object consists purely of composed parts. The end result is that if you did still have concrete classes for each kind of game object, the only difference between them would be the components that make them up. As such, you only need one concrete class for a game object – Unity appropriately calls it GameObject – and it acts as little more than a container for components.

What about the 'glue' code, the stuff that would normally tie all the components together? You can build it as a component too. With the GameObject acting only as a component container, it's up to the individual components to sort things out between themselves. The glue component isn't one that you'd expect to reuse on other objects – but the same was true of the dedicated class, too. Note also that component inheritance is still perfectly normal, so your 'glue' components can participate in an inheritance hierarchy if appropriate.
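To make that concrete, here's a minimal sketch of a 'glue' component written as a Unity MonoBehaviour. The class name and the components it coordinates (HugeSpiderGlue, the Animation and Rigidbody pairing) are illustrative, not something prescribed by Unity:

```csharp
using UnityEngine;

// Hypothetical 'glue' component: it doesn't render or animate anything
// itself; it just keeps the object's other components in sync.
public class HugeSpiderGlue : MonoBehaviour
{
    private Animation anim;   // plays the spider's animations
    private Rigidbody body;   // drives its physical movement

    void Awake()
    {
        // Fetch sibling components attached to the same GameObject.
        anim = GetComponent<Animation>();
        body = GetComponent<Rigidbody>();
    }

    void Update()
    {
        // 'Keeping the bits in sync': choose an animation
        // based on what the physics body is doing.
        if (body.velocity.sqrMagnitude > 0.01f)
            anim.Play("Walk");
        else
            anim.Play("Idle");
    }
}
```

Attach this to the same GameObject as the Animation and Rigidbody components and it coordinates them; no subclassing of either system is needed.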
Components make it very easy to build objects that interact with a wide number of systems, without introducing strong dependencies between those systems. Some care must be taken to ensure that component interactions are handled nicely, but no more than with previous approaches.

Misconception 3: Prefabs Are Only For Making Repetitive Levels

One of the ideas I encountered quite early on was the idea of the prefab. Prefabs are “reusable GameObjects,” according to the manual. When I first saw them, and saw that they were called prefabs, I thought: great! I can use those to quickly build up large scenes by effectively copying and pasting objects around. If I want to dot trees all over my terrain, I can use a bunch of tree prefabs. If I want buildings, or bits of buildings, I can create prefabs for those, too. Great! But that's really just a time-saving tool for the artists and level designers, surely…

I'd missed the real point of prefabs. It's true that you can use them to copy objects around easily, but the really vital thing is that you can insert copies of them at runtime, too. They're not only about composing the scene in the editor; they're also about composing the scene, from code, at runtime. (For the more pattern-oriented amongst you, what I'm driving at here is the Prototype pattern. Feel free to refresh your memory so you can read the next bit with a knowing smile.)

Naturally, there are many objects in your game that aren't in the scene when you start: bullets in motion, gibs, enemies that haven't spawned yet. Sometimes you could put these things into the scene and just mark them as inactive, but not always; you need the option to create things dynamically. Unity has that, of course: GameObject has a public constructor, so you can create a GameObject, set its name and layer, use AddComponent<>() to add all the components you need, then set up all the parameters, and so on.
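That manual construction might look something like the following sketch; the component values and the layer number are illustrative, not from any real project:

```csharp
using UnityEngine;

// Building a bullet entirely from code: it works, but it's tedious,
// and every tweak to these hard-coded values needs a programmer.
public static class BulletFactory
{
    public static GameObject CreateBullet(Vector3 position)
    {
        var bullet = new GameObject("Bullet");
        bullet.transform.position = position;
        bullet.layer = 8; // hypothetical 'Projectiles' layer

        var body = bullet.AddComponent<Rigidbody>();
        body.mass = 0.1f;
        body.useGravity = false;

        var source = bullet.AddComponent<AudioSource>();
        // source.clip = ...;  // and where does the firing sound come from?

        return bullet;
    }
}
```

With a prefab, all of this collapses to a single call to Object.Instantiate on a reference copy that an artist set up in the editor.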
There's a twofold problem with this approach, though: firstly, it's incredibly tedious, and secondly, it makes it really difficult for non-coders to change the way things are set up. If an artist or designer wants bullets to play a different sound, or use a slightly different color of tracer, they need a coder to go in and alter the spawn code.

This is where prefabs come in. Instead of building a new object from scratch, you can create a clone of the prefab. Cloning an object duplicates all of its components, along with all of its children and their components, so you don't have to attach or set up anything else; the prefab acts as a prototype for the kind of object you want – a 'reference copy.' Furthermore, as the prefab is just a GameObject (or tree of GameObjects), it can be edited by artists and designers in exactly the same way as anything else in the scene. If the artist wants your plasma bullets to glow more blue, he can click the prefab, find the relevant component, adjust the value, save the prefab, and run the game – no code changes required. As noted before, this is a well-established software pattern: the Prototype pattern.

Misconception 4: You Need To Write Debugging Visualizers

It's quite common to want to annotate the screen with extra information while your game is in development. Take AI as an example: it's often very useful to be able to see the path that your agent is trying to follow, or the state he's presently in, while the game is running. So you write a little bit of code that dumps the information into the top left corner of the screen using the GUI renderer, and watch it change as you go.
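A throwaway overlay of that sort might look like this sketch; the AI fields being dumped are hypothetical:

```csharp
using UnityEngine;

// Quick-and-dirty debug overlay: dump AI state into the corner of
// the screen every frame. It works, but Unity offers better options.
public class AIDebugOverlay : MonoBehaviour
{
    public string currentState = "Idle";  // hypothetical AI state name
    public Vector3 currentTarget;         // hypothetical path target

    void OnGUI()
    {
        GUI.Label(new Rect(10, 10, 300, 20), "State: " + currentState);
        GUI.Label(new Rect(10, 30, 300, 20), "Target: " + currentTarget);
    }
}
```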
That's what I was doing to start with, even in Unity – the ease of access to the GUI system from any component made it very easy – but I don't any more, because I've realized that Unity already has a far superior method for inspecting such data: you can just use the editing tools as if the game weren't running.

Say you're in the middle of the game, and one of the AIs is doing something funny. You can pause the game (if you like), pull up the hierarchy window, and click on him to view his inspector. Maybe you've got hundreds of AIs and it's not clear which one he is; in that case, you can switch to the Scene View, navigate to where you paused the game, and click on him to select him. The inspector panel will then let you look at the present state of all his components, in exactly the same way as it lets you look at their initial state when the game isn't running.

Maybe the default information displayed isn't enough. In that case you've got two choices. Firstly, you can try toggling the inspector into 'debug' mode, which causes various private members to show up (usually desirable when you're trying to inspect internal state). Secondly, you can write a custom inspector class for the component you're interested in – one that displays exactly the information you want, in exactly the way you want it. There's a simple function call – DrawDefaultInspector() – that you can use to make sure all of Unity's existing information continues to be displayed, and then you can augment it with any calculated or private values you might want. (Notably, Unity's default inspector shows only fields, not properties, so I often find myself just adding a couple of entries to show the present values of properties while the game is running.)

You can even pull the Game tab out of the Unity interface and drop it onto a second monitor. Now you can play the game on one monitor, and watch the Scene View, hierarchy, and inspectors on the other.
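A custom inspector of the kind described above might be sketched like this, assuming a hypothetical SpiderAI component that exposes a CurrentState property:

```csharp
using UnityEngine;
using UnityEditor;

// Custom inspector for a hypothetical SpiderAI component. Place this
// file in an 'Editor' folder so Unity compiles it as editor code.
[CustomEditor(typeof(SpiderAI))]
public class SpiderAIInspector : Editor
{
    public override void OnInspectorGUI()
    {
        // Keep everything the default inspector already shows...
        DrawDefaultInspector();

        // ...then augment it with a property the default won't display.
        var ai = (SpiderAI)target;
        EditorGUILayout.LabelField("Current State", ai.CurrentState.ToString());
    }
}
```

Because the inspector repaints while the game runs, the property's live value updates as you play.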
Pausing the game isn't even necessary; all the views can quite happily update in real time while the game is running.

Misconception 5: There's No Spatial Indexing; You Have To Search The Whole Scene For Everything

Unity supports finding objects in the scene based on their name, their tag, and the components attached to them – but it's slow. The physics engine – Nvidia's PhysX – doubtless does some scene management to accelerate collision tests, and the fruits of this are exposed to you as raycasts/linecasts/spherecasts in the Physics class, but these only apply to objects that the physics engine knows about; you can't use it to retrieve non-physics objects like particle effects, for example.

There's nothing stopping you from maintaining your own lists and caches, though. If you're going to need to run through a list of all objects that have a component of type X, then type X can quite happily keep a private static list of all its instances, and provide a couple of functions for searching that list much more quickly than the general scenegraph functions would. You need to be careful about exactly when you add and remove objects from this list – using OnEnable()/OnDisable() is likely to be a better plan than Awake()/OnDestroy(), for example, because that way you won't have problems with things like prefabs or inactive objects accidentally ending up on the list – but it's still fairly straightforward to do.

Another approach I've found quite fruitful is to use tracking volumes. A tracking volume is a collision volume, marked as a trigger (i.e. look for collisions, but don't block any movement), with a simple script attached. When an object enters the volume, the script adds it to a list; when it leaves, it removes it; and when any other code wants to find out which objects are presently in the volume, the script can simply provide the list.
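A minimal tracking volume script along those lines might look like this sketch; attach it to a GameObject whose collider is marked as a trigger:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Attach to a GameObject with a Collider marked 'Is Trigger'.
// Maintains a list of whatever is currently inside the volume.
public class TrackingVolume : MonoBehaviour
{
    private readonly List<GameObject> contents = new List<GameObject>();

    void OnTriggerEnter(Collider other)
    {
        contents.Add(other.gameObject);
    }

    void OnTriggerExit(Collider other)
    {
        contents.Remove(other.gameObject);
    }

    // Other code asks the volume, instead of searching the whole scene.
    public IList<GameObject> Contents
    {
        get { return contents; }
    }
}
```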
This exploits the fact that the physics engine is doing highly optimized spatial checks anyway, so we might as well cache the results instead of putting the physics engine on the spot right when we want the data. The tracking volume script can be completely generic, usable with any object and volume setup, and you can augment it with useful features like filtering based on object type ('only track objects that have a Flammable component') or layers ('only track objects in the Pickup layer').

Misconception 6: Coroutines Are Useless

There's a lot of work that a game needs to perform every frame – obviously there are all the core things like rendering, physics, animation, and so on. There's also quite a lot of gameplay logic: responding to player input to initiate actions, for example. We want to minimize latency, so we do it as quickly as possible, and that means keeping an eye out for it every frame.

There are some bits of logic that don't really need to be checked every frame, though. Take the win conditions: you want to check that the player's in a particular area, that he's killed particular enemies, collected certain items, and so on. Would it really kill the experience if there was a half-second delay between killing the final enemy and the win sequence kicking off? Frequently, no. But we tend to check every frame anyway, because adding the code to the per-frame update is easier than trying to schedule checks less frequently.

Unity's coroutine support makes for a really nice alternative. Coroutines are used to implement cooperative multitasking: you can set up a long-running process – an infinite loop, for example – but stop it from blocking other code by having it 'yield' at suitable points. The scheduler is then very simple: it just cycles through all the coroutines, running each one in turn until it yields, then moving on to the next. Unity's scheduler is a bit smarter than this, though, because when a coroutine yields, it can emit a value.
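A win-condition check rewritten along these lines might be sketched as follows; CheckWinConditions is a hypothetical stand-in for real game logic:

```csharp
using System.Collections;
using UnityEngine;

// Check the win conditions every three seconds instead of every frame.
public class WinConditionChecker : MonoBehaviour
{
    void OnEnable()
    {
        StartCoroutine(CheckLoop());
    }

    void OnDisable()
    {
        StopAllCoroutines();
    }

    IEnumerator CheckLoop()
    {
        while (true)
        {
            if (CheckWinConditions())  // hypothetical check
            {
                // trigger the win sequence here, then stop checking
                yield break;
            }
            // Tell the scheduler not to run us again for three seconds.
            yield return new WaitForSeconds(3f);
        }
    }

    bool CheckWinConditions()
    {
        return false; // placeholder for real game logic
    }
}
```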
In Unity, this value is treated as a note to the scheduler about when the coroutine should next be run. You can yield null to indicate that you want to continue on the next frame; you can yield a WaitForSeconds(0.5f) object to indicate that you don't need to run again until half a second has passed; you can yield an animation object to indicate that you don't need to run again until the animation has finished playing; and so on.

This makes it very easy to turn an every-frame check into an every-three-seconds check: write the check in an infinite loop, with a yield-wait-for-three-seconds at the end. Start the coroutine when the component is enabled, and stop it when it's disabled (or, if you're mad like me, hook it up to a property setter, so that setting a CheckX flag property automatically starts and stops DoCheckXCoroutine). The value you yield is evaluated at runtime, so you can do whatever you like to decide how long to wait – maybe check more frequently when the player is getting close, or make the check interval a designer-configurable variable.

Conclusion

There's probably more I could say, but I'll leave it there for now, because I should really be working. I encourage you to give Unity a try, if you haven't already.

[This piece was reprinted from #AltDevBlogADay, a shared blog initiative started by @mike_acton devoted to giving game developers of all disciplines a place to motivate each other to write regularly about their personal game development passions.]
