
Jesse's back with the second in a series of articles featuring an in-depth discussion of programming principles, with a focus on how they apply to video game programming. The topic this time is a very important principle: "Ya Ain't Gonna Need It".

Jesse Attard, Blogger

October 8, 2015


This is the second in a series of articles featuring an in-depth discussion of programming principles as they pertain to video game programming. The topic this time is a very important principle: "Ya Ain't Gonna Need It", or YAGNI.

If you find you learn better via video, I have one below (where my facial expression, captured in the auto-generated thumbnail, turned out pretty hilarious).

What is YAGNI?

This principle comes into play when you're programming something that you're guessing you might need at some point in the future. The basic premise is that generally speaking, you aren't going to need it, and time spent writing this code was a waste of effort.

On the whole, this is a time management principle, and one that I find comes up commonly in games, a field of programming that frequently suffers from the project management holy trinity of feature creep, competition anxiety, and short deadlines.

This actually happens?

Like many of these principles, it sounds obvious. Why would you ever write something that you might never use? But in practice, this is a very common problem that can cause enormous inefficiencies and can be more difficult to solve than you might initially think.

Further, it is sometimes difficult to identify when you're violating this principle. If you're busy slaving away at your desk while some other programmer is spending half the day playing ping pong and eating office snacks, you clearly feel like the more efficient employee. However, if the code you are writing will never be used, then sadly, that slacking programmer is potentially vastly more productive than you.

There are two ways this problem usually manifests. One is simple to resolve, the other is more difficult.

The easy case

The more obvious case is when you are developing a new feature that you don't really have a current use for.  You're doing this on the presumption that you might use it at some point in the future.

For example, say you're busy making a first person game. You read some article with a great technique on how to implement a third person camera, and you decide to implement it even though your game doesn't really need this functionality.

You'll start to justify your decision.

  • It will probably be useful for debugging

  • It will only take me an hour

  • The article is fresh in my mind, and I was just working in the camera code recently, so it's faster if I do it now

  • If we ever need a third person camera, then it will be already ready and waiting

  • It's a good learning experience

  • I'm excited to work on it, and it will be fun

They all sound like good reasons! Quite convincing really! And lo, you will have just talked yourself into what's most likely going to be a waste of time and effort.

Why is it a waste of time?


Only the last two reasons from the above list are valid. Writing code for fun and education is an excellent pastime that I thoroughly endorse. You might even argue that writing some code for fun is worthwhile as an occasional sojourn that reinvigorates your spirits and boosts productivity in the long run. Of course, do this too much and a looming deadline will begin to disagree with you.


The other reasons, the ones you likely told your project manager, are patently ridiculous. I've seen projects become extremely derailed in situations where unchecked programmers have been spinning their wheels with seemingly zero friction.


What's wrong with the other reasons?


  • You're taking a gamble that it might be useful

  • Re-familiarizing yourself with the article won't take long. It will be just as fast to implement this later when (and if) it's needed

  • Be realistic: it's definitely going to take longer than an hour

In addition to the time sink,

  • You will have increased the complexity of your code

  • You have potentially introduced bugs

  • There is now more code to maintain in the future


Solution to the easy case?


I said it was easy - and it is. Just defer working on unnecessary features until you actually need them. Let's move on to the more difficult case.


The hard case


The harder case is when you're working on a feature that you know you need, but it's not clear how much of the logic should be hardcoded versus handled by some more robust, architected system that affords huge customizability to the game designer.


It is typically more time consuming to write a highly architected system than to hardcode some basic logic, but there are cases where architecture provides a much more graceful solution and gives a net benefit to efficiency. So how do you decide what to do?


Obviously each game is different and there is no single solution for all situations. Nonetheless, this article is about overarching principles, and YAGNI gives us a solid ground rule to follow: Start with hardcoding, and scale up from there.


Let's take a look at some examples to see why this is effective.




The most common area of game code where I encounter this problem is Artificial Intelligence. Having seen many AI systems for a variety of games big and small, I've encountered varying levels of robustness producing similarly varying degrees of success. 


When I say Artificial Intelligence I'm referring to behaviour of enemies, NPCs and the like. Consider this example archetype from a cover shooter:


The enemy runs to a cover point in range of the player, takes cover, shoots periodically, and retreats to a cover point farther away if the player gets too close.


You can break this into two chunks:


  • Actions - Lower level systems that are often shared across enemies and involve no decision making: movement, taking cover, shooting

  • Behaviours - Higher level systems that are often unique across enemies and are decision focused: When do I shoot? When do I run? When do I take cover?

The actions are simple. These are usually hardcoded, with certain variables, such as move speed or attack range, customizable by a game designer.
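To make that concrete, here's a minimal sketch of what such an action layer might look like, with the designer-tunable values pulled out into a plain data struct that could be loaded from config. All names here (ActionConfig, moveToward, inAttackRange) are my own illustrations, not from any particular engine:

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical sketch: shared, decision-free actions, with the values a
// designer tunes (move speed, attack range) kept in plain data.
struct ActionConfig {
    float moveSpeed   = 3.0f;   // units per second
    float attackRange = 10.0f;  // max distance at which shooting is allowed
};

struct Vec2 { float x, y; };

// Movement action: no decision making, just carry out the order.
Vec2 moveToward(Vec2 pos, Vec2 target, const ActionConfig& cfg, float dt) {
    float dx = target.x - pos.x, dy = target.y - pos.y;
    float dist = std::sqrt(dx * dx + dy * dy);
    if (dist < 1e-4f) return pos;                      // already there
    float step = std::min(cfg.moveSpeed * dt, dist);   // don't overshoot
    return { pos.x + dx / dist * step, pos.y + dy / dist * step };
}

// Query used by the shooting action: is the player close enough to fire at?
bool inAttackRange(Vec2 pos, Vec2 player, const ActionConfig& cfg) {
    float dx = player.x - pos.x, dy = player.y - pos.y;
    return dx * dx + dy * dy <= cfg.attackRange * cfg.attackRange;
}
```

Every enemy archetype can share these; the designer only ever touches the numbers in ActionConfig.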



Behaviours are where things start to get fuzzy. You can imagine some example hardcoded implementation (that you really shouldn't take too seriously because it's just an example and no real AI code would ever actually look like this):


if (distanceToPlayer < FLEE_RANGE) {

    // retreat to a cover point farther away

} else if (inCover() && distanceToPlayer < FIRING_RANGE) {

    // shoot periodically from cover

} else {

    // run to a cover point in range of the player

}

As the YAGNI principle suggests, this is probably a good starting point, and can even scale to many different enemy archetypes through appropriate use of inheritance and composition. 
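Here's one hedged sketch of what "scaling through composition" could look like: each archetype keeps its own hardcoded decision function, but all of them share the same small vocabulary of actions. The names (Behaviour, makeGrunt, makeBerserker) are invented for illustration:

```cpp
#include <functional>

// Shared action vocabulary; the actual action implementations live elsewhere.
enum class Action { Flee, Shoot, TakeCover };

// What the enemy can perceive this frame.
struct Senses {
    float distanceToPlayer;
    bool  inCover;
};

// A behaviour is just a decision function: senses in, action out.
using Behaviour = std::function<Action(const Senses&)>;

// A grunt's hardcoded behaviour, mirroring the if/else chain above.
Behaviour makeGrunt(float fleeRange, float firingRange) {
    return [=](const Senses& s) {
        if (s.distanceToPlayer < fleeRange) return Action::Flee;
        if (s.inCover && s.distanceToPlayer < firingRange) return Action::Shoot;
        return Action::TakeCover;
    };
}

// A braver archetype reuses the same actions but never flees.
Behaviour makeBerserker(float firingRange) {
    return [=](const Senses& s) {
        return s.distanceToPlayer < firingRange ? Action::Shoot
                                                : Action::TakeCover;
    };
}
```

New archetypes are a few lines each, and the hardcoded logic stays trivially readable, which is exactly why this is a reasonable place to start.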


But when you have a huge number of enemies and want to afford greater flexibility to your game designer, more of this logic will need to be driven by data. You can start to imagine how such a system would look. Games like Dragon Age have even exposed such a system to the player.


class Rule {

   Attribute attribute

   float value

   Operator operator

   Action action


   boolean evaluate() {

       boolean success = false

       switch (operator) {

           case LESS: success = mAttributes[attribute] < value

           case GREATER: success = mAttributes[attribute] > value

           case EQUAL: success = mAttributes[attribute] == value

           //.. etc.

       }

       if (success) {

           // perform the associated action

       }

       return success

   }

}

You would then define some rules to govern behaviour on a given enemy through data in a priority list.


NormalEnemy {

   Rule(ATR_DISTANCE_TO_PLAYER, LESS, FLEE_RANGE, ACTION_FLEE)

   Rule(ATR_DISTANCE_TO_PLAYER, LESS, FIRING_RANGE, ACTION_SHOOT)

   Rule(ATR_IN_COVER, EQUAL, FALSE, ACTION_TAKE_COVER)

}

Looks pretty graceful, doesn't it? Until of course you add your boss battle that needs ATR_IS_HOLDING_SPEAR_OF_TRIUMPH, and ACTION_PIERCE_ROCK_OF_DESTINY_WITH_TRIUMPH_SPEAR_FOR_GREAT_JUSTICE_AND_VICTORY.


All joking aside, a system like this can be very useful and has its place. Deciding to make a system like this could potentially save you large amounts of time. What I recommend as a general rule however - don't start here.  Scale up to it.
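The "priority list" part deserves one more step of explanation: each AI tick, you walk the rules in order and perform the first one whose condition holds, which is what makes earlier rules higher priority. A minimal sketch of that driver loop, with all names (Op, Rule, tick) assumed for illustration:

```cpp
#include <string>
#include <unordered_map>
#include <vector>

enum class Op { Less, Greater, Equal };

// One data-driven rule: "if attribute <op> value, do action".
struct Rule {
    std::string attribute;
    Op          op;
    float       value;
    std::string action;  // action to perform when the rule succeeds
};

// The enemy's current attribute values, keyed by name.
using Attributes = std::unordered_map<std::string, float>;

// Walk the priority list each tick; the first rule that evaluates true wins.
// Returns the chosen action's name, or "" if no rule fired.
std::string tick(const std::vector<Rule>& rules, const Attributes& attrs) {
    for (const Rule& r : rules) {
        float a = attrs.at(r.attribute);
        bool success = false;
        switch (r.op) {
            case Op::Less:    success = a < r.value;  break;
            case Op::Greater: success = a > r.value;  break;
            case Op::Equal:   success = a == r.value; break;
        }
        if (success) return r.action;  // earlier rules have priority
    }
    return "";
}
```

Notice how much machinery this already is compared to the three-line if/else, before a single enemy behaves any differently, which is the whole argument for not starting here.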


Why scale up?


In some games, like Dragon Age, the AI tactics are part of the design, and you will likely jump right into making a robust system like the one above. But when you're unsure, it's a safer bet to start by hardcoding and scale up from there if necessary. This has several advantages:


  • You will have a working prototype faster

  • Making the hardcoded system will help inform the design of your robust one

  • You may not need the bigger system

  • It is very difficult to scale down if it turns out you don't need it

  • An overarchitected system can be cumbersome for coders and designers

An Important Caveat

It may be tempting to use the YAGNI principle as a justification to avoid writing clean code. The argument goes: it will take so much longer to write my code cleanly, and we aren't going to need it, so I shouldn't bother.

Unfortunately this logic is predicated on the false assumption that writing unclean code is somehow faster than just writing it cleanly to begin with.  Don't use YAGNI as an excuse to copy/paste your code or violate other various clean programming practices. Use it to avoid unnecessary features or overarchitected systems.

To Summarize


For new features, you should do a cost-benefit analysis wherein you ask yourself:

  • Do I need this right now?

  • And if not, does it cost anything to defer the work until later?

Usually, the answer is no - you should defer the work.


And for architected systems, a good rule of thumb is to err on the side of simpler code with the intent to scale up later if necessary.



I wanted to touch on one more subject as it is closely related to the YAGNI principle, and that's optimization. You've surely heard the phrase "premature optimization is the root of all evil" and it's absolutely true in many cases. 


(The full quote is: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%." - Donald Knuth, "Structured Programming with go to Statements", 1974)


The basic premise is that it is a waste of time optimizing something that may ultimately be deleted, or turns out to be not a huge consumer of performance in the end. Premature optimization also suffers from the other problems I already mentioned of potentially reducing clarity, introducing bugs, or increasing complexity, for little if any benefit.
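Following that premise, the practical habit is to measure before optimizing. Here's a tiny, hedged sketch of a timer helper (measureMillis is my own name, built on std::chrono) you could wrap around suspect code to see whether it's actually worth chasing:

```cpp
#include <chrono>

// Time a callable and return elapsed wall-clock milliseconds.
// A deliberately crude tool: enough to tell "microseconds" from
// "milliseconds" before you commit to an optimization.
template <typename Fn>
double measureMillis(Fn&& fn) {
    auto start = std::chrono::steady_clock::now();
    fn();  // run the code under suspicion
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}
```

Usage would look like `double ms = measureMillis([]{ updateAllEnemies(); });` (with updateAllEnemies standing in for whatever hypothetical game code you suspect). If the number is already negligible, YAGNI applies to the optimization too.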


However, there are real cases where it makes sense to optimize as you're writing, and deciding whether you should optimize is a big subject that deserves its own article. That being said, it's still related to the YAGNI principle, so I wanted to at least mention it here briefly.



That's all folks - this is a pretty broad subject that rears its head frequently, has potentially huge efficiency consequences, and of course offers many angles for further discussion.  Hopefully this introduction will at least familiarize you with the concept and give you some ground rules that can put you in a good position to make effective decisions regarding YAGNI, worst acronym ever.
