
5 questions every QA team Lead or Manager should ask daily

Is there a secret formula to effectively measure the performance of a QA team?

Mathieu Lachance, Blogger

July 13, 2015

5 Min Read

Image credit: dailywallppr.com

At one point or another, every Lead, Manager or Producer overseeing a QA team asks themselves the same question: how do I measure the performance of my team? I asked myself precisely that 8 months ago.

I took the logical route and looked at the most basic level of my teams’ work. QA teams mostly:

  1. Create test cases

  2. Work through test cases

  3. Find & report bugs

Accordingly, the first idea I had was to get my staff evaluated on output:

  1. Number of test cases written

  2. Number of test cases run

  3. Number of bugs found, written and reported

As many of you have probably already experienced, the issue with these approaches is that they can lead to unwanted or extreme results that exist only because teams adapt to their evaluation criteria, such as:

  1. Many separate test case lines that could easily have been merged together

  2. Test cases that were rushed through improperly

  3. A very high number of irrelevant, repetitive or badly written bugs

This is very similar to the age-old problem of evaluating a developer based on the number of lines of code they write. It doesn't work: an elegant, optimized piece of code can be written in 5 lines instead of 50.

What are the alternatives then? Well, there’s a hard way and there’s an easy way. Hold on to your hats; it’s about to get technical!

The hard way is through a QA equivalent of Overall Equipment Effectiveness (OEE). In most metric-based production companies, base-level operational efficiency is defined by the OEE of the allocated resources, or a concept derived from it. OEE is calculated by multiplying the Availability, Performance, and Quality of the allocated resources or, in simpler terms:

% Team OEE = % Uptime × % Speed of the work × % Quality of the work
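For readers who prefer to see the arithmetic spelled out, here is a minimal sketch of that multiplication in Python. The function name and the sample percentages are illustrative assumptions, not figures from a real team.

```python
def team_oee(uptime: float, speed: float, quality: float) -> float:
    """Overall Equipment Effectiveness: the product of the three factors.

    Each argument is a ratio between 0.0 and 1.0 (e.g. 0.85 for 85%).
    """
    for name, value in (("uptime", uptime), ("speed", speed), ("quality", quality)):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be between 0.0 and 1.0, got {value}")
    return uptime * speed * quality


# Illustrative numbers only: a team available 90% of the time,
# working at 80% of the expected pace, with 95% of its output usable.
print(f"Team OEE: {team_oee(0.90, 0.80, 0.95):.0%}")  # -> Team OEE: 68%
```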

If we take the example of a test team following very basic test cases and only evaluated on the bugs they enter:

% QA OEE = % of time the team can work × % of bugs the testers found vs. what is expected in that amount of time × % of bug validity, quality, severity and relevance
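Worked through from raw counts, the same product looks like this. This is a minimal sketch with made-up numbers (the scheduled hours, bug counts and expected find rate are all assumptions for illustration); judging bug validity and relevance remains up to the evaluator.

```python
# Minimal sketch of the QA OEE example above, with made-up numbers.
# Uptime: how much of the scheduled time the team could actually test.
hours_scheduled = 40.0
hours_testable = 34.0          # builds down, blocked environments, etc.

# Speed: bugs found versus what was expected in the testable time.
bugs_found = 45
bugs_expected = 50             # the evaluator's expectation for this phase

# Quality: how many of those bugs were valid, relevant and well written.
bugs_valid = 41

uptime = hours_testable / hours_scheduled
speed = min(bugs_found / bugs_expected, 1.0)   # cap at 100% of expectation
quality = bugs_valid / bugs_found

qa_oee = uptime * speed * quality
print(f"Uptime {uptime:.0%}, Speed {speed:.0%}, Quality {quality:.0%} "
      f"-> QA OEE {qa_oee:.0%}")
```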

The positive side of this approach is that it provides you with 4 different Key Performance Indicators (KPIs), which help you find the source of some issues and which can be improved upon separately, repeatedly and methodically. The downside, however, is that even though the value looks precise, it can be highly subjective to the evaluator or greatly influenced by the game's phase of development, depending on what is expected in terms of work volume, quality, relevance and severity. Furthermore, as you quickly saw above, it can be quite complex and, speaking from experience, can often scare a lot of QA Leads and Managers: at our Babel Media studio in Montreal, I hit a wall with my teams when I started hinting at developing this approach, and it hindered KPI and metrics implementation quite a bit.

The easy way is to think of this matter at the most basic level: is my client happy? So… what makes a client happy? Meeting the client's expectations will do the trick, as long as those expectations are valid and well managed, obviously.

Note here that clients can be both internal and external to your organization. In the video-game industry, the external clients are going to be the people who will play your game. The internal ones are likely going to be your bosses and/or whoever you deliver the bug reports to. Based on the above, your main goal then becomes making sure that a game is both clean of issues and enjoyable.

To achieve that on QA's side, it mainly means having end users experience as few bugs as possible. On the plus side, it's a simple metric: to get the data, one only has to go through video-game review sites and forums to see whether there's anything the team didn't find, and simply ask internal stakeholders if they're happy with the work your team does given the project's context. The flip side of this approach, though, is that it doesn't always account for the context of the project or the performance of the development team and its potential impact on the QA team. As an example, a QA team could perform brilliantly, but if the team is too small for the project, or if the development team creates as many issues as it fixes on a consistent and recurring basis, QA may not fulfill the client's expectations and could be seen as underperforming.

So, what's the magic formula? There's none, really, but here's what we thought was the best approach and what we did with our teams: we applied a simplified mix of the last two points along with some good old management instinct. If you want to apply a similar method on your side, here's how. You and your colleagues should ask yourselves 5 simple questions daily (a small checklist sketch follows the list below) and then either provide feedback, training or help to your employees, or fix any applicable hindrance accordingly:

  1. Is my team facing any downtime? i.e. Can my team work all the time? (Uptime)

  2. Is my team going through enough bugs or test cases? (Speed & Performance)

  3. Are the bugs and test cases entered valid, well-written, free of any errors and relevant? (Quality)

  4. Did the team go through the test cases appropriately or did they rush through them? (Quality)

  5. Are my clients’ expectations managed and fulfilled, and is the game clean? (Client Satisfaction)
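If it helps to make the routine concrete, here is a minimal sketch of that daily check as a small Python data structure. The class name, field names and follow-up actions are all illustrative assumptions; the point is simply to record an answer to each of the five questions and a note on what you plan to do about any "no".

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class DailyQACheck:
    """One day's answers to the five questions, plus planned follow-ups."""
    day: date
    uptime_ok: bool             # 1. Is the team free of downtime?
    throughput_ok: bool         # 2. Enough bugs / test cases covered?
    output_quality_ok: bool     # 3. Bugs and test cases valid and well written?
    execution_quality_ok: bool  # 4. Test cases run properly, not rushed?
    client_satisfied: bool      # 5. Expectations managed, game clean?
    follow_ups: list[str] = field(default_factory=list)

    def needs_action(self) -> bool:
        return not all((self.uptime_ok, self.throughput_ok, self.output_quality_ok,
                        self.execution_quality_ok, self.client_satisfied))


# Illustrative use: flag the day and note the fix to pursue.
today = DailyQACheck(date.today(), True, True, False, True, True,
                     follow_ups=["Pair junior testers with a lead on bug write-ups"])
if today.needs_action():
    print("Action needed:", "; ".join(today.follow_ups))
```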

You can delve into the more precise numbers and percentages of the OEE if you want to, but the time it takes to actually gather the metrics will sometimes not be worth it compared to the time you could spend working directly on the specific issues, hindrances or blockers raised.

What about you? Do you agree with the above? How is your QA team’s performance measured?
