Evolving Test in the Video Game Industry - Part 2

The second installment of Evolving Test in the Video Game Industry. This article focuses primarily on the move from a Waterfall to an Agile production methodology and how to adjust your test teams to meet this new paradigm.

Mike Burghart, Blogger

August 12, 2014

6 Min Read

In my last offering, I touched on the move from a waterfall to an agile development philosophy. Over the last ten years we've seen game studios attempt a number of approaches, including Scrum coupled with Kanban. For most, the current iteration is a hybrid of the two, heavily influenced by company culture. The goal is straightforward: create iterative development processes that do not overly impact the pipeline. With ever-evolving platforms, waterfall simply wasn't getting the job done.

You can read a more comprehensive discussion started by Seth Sivak about creating an iterative development culture here.

Agile, with its smaller units of work, enables faster content development and distribution. But moving test to this new paradigm has proven problematic: traditional game QA's static workflows and often cumbersome overhead cannot adapt fast enough to meet the needs of iterative development.

QA's response to agile pressures has been to assign select game testers to sit with development teams. Conceptually this is on the right track, and it has realized some returns over the last few years, but the approach is limited and fraught with risks:

  • Game testers focused on a career in development tend to identify with the feature team rather than the QA organization.

  • Development teams see the assigned tester as a resource to whom they can hand non-test-related work.

  • Game testers may not have the technical expertise to conduct a thorough issue root cause analysis.

  • No clear communication channel exists between the development team and QA leadership.

I know the last item seems counterintuitive, but without inclusive retrospectives, the assigned QA tester is accountable only to the development team, with no QA communication required on the critical path. This makes it tough for QA leadership to make adjustments to testing methodologies and requirements, let alone to share best practices across testing groups.

Collocated game testers alone are not enough to create sustainable QA processes, regardless of how talented your QA leadership team may be.  

Center of Excellence (CoE) adoption in the game industry

Over the last few years, as testers have been distributed to feature teams, we have seen QA leadership oversight degrade and best practices go unshared across teams. One solution is the creation of a Center of Excellence, a concept other industries use to ensure practices stay current and relevant.

For the purpose of this discussion, we want to focus on the standardization and management of best practices. Rarely will you find a game development studio with staff dedicated to the maintenance of practices and process. A 2008 post on Agile Elements defined the CoE as "a team of people that promote collaboration and using best practices around a specific focus area to drive business results," serving five needs: support, guidance, shared learning, measurements, and governance. This is one of the better definitions, as it provides a framework for organization.

A proven approach

With a solid process foundation (CoE) in place, we can take a look at the makeup of the testing group deployed to the feature development team. As noted above, recent history has shown us that collocated testers alone are not the ideal solution. If an Engineer in Test and a Software Engineer are added to the mix, test strategy, planning, and automated test creation are all addressed.

Here is an example of what an effective organization might look like:

Engineer in Test

Other interchangeable names include Software Design Engineer in Test (SDET) and Software Quality Engineer (SQE).

  • Stakeholder collaboration

  • Risk Assessment

  • Test Planning

  • Test case creation

  • Automation opportunities

  • Automated test creation

  • Root cause analysis

  • At senior levels ...

    • Engineer in Test and Tester management

    • Code editing/creation based on identified risks or root cause analysis

 

Software Engineer

  • Creates test harnesses (a minimal sketch follows this breakdown)

 

Tester

  • Black box test case creation and execution

 

Note: this can look different from studio to studio due to culture.
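
To make the Software Engineer's harness role concrete, here is a minimal sketch of what such a harness might expose. Every name here is an illustrative assumption, not something from a real studio's toolchain: an actual harness would launch a real game build and forward commands to it.

```python
# A minimal test-harness sketch (hypothetical API; names are illustrative).
# The Software Engineer builds the harness; the Engineer in Test scripts
# automated tests that hang from it instead of driving the game directly.


class GameHarness:
    """Stands in for a headless game build with a scriptable test interface."""

    def __init__(self, build_path: str):
        self.build_path = build_path
        self.running = False

    def __enter__(self):
        # A real harness would launch the build here (e.g., in headless mode).
        self.running = True
        return self

    def __exit__(self, *exc):
        # ...and tear it down here, collecting logs for issue evaluation.
        self.running = False

    def run_command(self, command: str) -> str:
        # Stubbed: a real harness would forward the command to the build
        # and return its response for the test to assert against.
        assert self.running, "harness not started"
        return f"ok: {command}"


# Usage: an automated check scripted against the harness.
if __name__ == "__main__":
    with GameHarness("./game_build") as harness:
        assert harness.run_command("load_level tutorial").startswith("ok")
        print("harness smoke test passed")
```

The context-manager shape is the point: it gives every automated test a clean build lifecycle, which is what later lets the suite run unattended against main.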

The team makeup can adjust to meet the needs of the feature, but the core group of Engineer in Test, Software Engineer, and Tester should remain in place. This type of integrated testing team is in a unique position to visualize the entire development pipeline and trace issues from the test cases that uncovered them all the way back through to the feature design phase.

Here is the workflow:

[Workflow diagram showing the steps detailed below: Initial Risk Assessment, Test Plan Creation, Test Case Creation, Test Harness Requirements, Automated Test Creation, Manual Test Execution, and Issue Evaluations]

  • Initial Risk Assessment is generated by Engineers in Test based on the requirements from a feature's design phase, informed by the experiences of previous releases. The initial assessment is then shared with business and technical stakeholders, who review the list and can add, modify, and remove risk areas. Risk priority is based on two factors: the technical likelihood of the risk occurring and the business impact if it does. Each risk is assigned a value in both categories, and the aggregate dictates its position in the assessment (a minimal scoring sketch follows this list). Ideally, we would work to mitigate the risks with the highest likelihood and impact first. To read further on this methodology, take a look at ISTQB's Test Manager certification curriculum.

  • Test Plans are generated by Engineers in Test working from the prioritized list of risks. Each risk drives a test plan designed to mitigate that risk.

  • Test Cases are crafted from the test plans by the Testers with Engineer in Test oversight. Automation opportunities are identified during this stage.

    • Test Harness Requirements are communicated to the Software Engineers based on the identified automation opportunities.

    • Automated Tests are scripted by the Engineer in Test to hang from the test harness.

  • Issue Evaluations are conducted by the Engineer in Test and Testers. The testing team can now leverage the aforementioned pipeline visibility by conducting root cause analysis of related and unrelated issues. Those root causes are communicated to development for correction, or a senior Engineer in Test can edit the issue area and offer the new code to development for review before it is introduced into the main branch.
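
As promised above, here is a minimal sketch of the risk-prioritization step. The 1-to-5 scales, multiplication as the aggregate, and the sample risks are all assumptions for illustration; no specific formula is prescribed here, and ISTQB materials describe several variants.

```python
# A minimal risk-prioritization sketch, assuming a 1-5 scale per factor
# and multiplication as the aggregate (a common convention, assumed here).
from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    likelihood: int  # technical likelihood the risk occurs, 1 (low) to 5 (high)
    impact: int      # business impact if it does, 1 (low) to 5 (high)

    @property
    def priority(self) -> int:
        # The aggregate dictates the risk's position in the assessment.
        return self.likelihood * self.impact


# Hypothetical risks gathered from the feature's design phase and stakeholders.
risks = [
    Risk("save-game corruption after patching", likelihood=2, impact=5),
    Risk("matchmaking latency regression", likelihood=4, impact=4),
    Risk("cosmetic UI clipping in lobby", likelihood=5, impact=1),
]

# Mitigate the highest likelihood-and-impact risks first.
for risk in sorted(risks, key=lambda r: r.priority, reverse=True):
    print(f"{risk.priority:>2}  {risk.name}")
```

Sorting by the aggregate yields the prioritized list that drives test plan creation, one plan per risk.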

When we are done, we should be in a position to continuously run automated tests against main and to script new tests based on issues identified during the initial evaluation. This becomes the foundation of a robust regression suite.
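
A minimal sketch of that foundation follows, using a simple in-process registry. The test names and checks are hypothetical; in practice this is usually a pytest suite wired into the build pipeline so it runs on every build of main.

```python
# A sketch of the regression-suite foundation: each issue found during the
# initial evaluation gets scripted as an automated test, and the whole suite
# re-runs against every build of main. Names and checks are illustrative.

REGRESSION_SUITE = []


def regression_test(func):
    """Register an automated test so it runs on every build of main."""
    REGRESSION_SUITE.append(func)
    return func


@regression_test
def test_tutorial_level_loads():
    # Stands in for a harness-backed check (load a level, assert success).
    loaded_levels = ["tutorial", "level_1"]
    assert "tutorial" in loaded_levels


@regression_test
def test_new_player_starting_health():
    # Scripted after a past defect so the same regression cannot slip back in.
    starting_health = 100  # would come from the game build via the harness
    assert starting_health == 100


def _passes(test) -> bool:
    try:
        test()
        return True
    except AssertionError:
        return False


def run_suite(build_id: str) -> bool:
    """Run every registered test against a build of main and report results."""
    failed = [t.__name__ for t in REGRESSION_SUITE if not _passes(t)]
    passed = len(REGRESSION_SUITE) - len(failed)
    print(f"build {build_id}: {passed} passed, {len(failed)} failed", failed or "")
    return not failed


if __name__ == "__main__":
    run_suite("main@nightly")
```

Each new issue evaluation adds another registered test, which is how the suite grows into the robust regression coverage described above.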

While large QA departments at the end of the pipeline had some success in the past, they are ill-suited to today's more aggressive delivery expectations. Small, modular teams of Engineers in Test, Software Engineers, and Testers are better suited to meet the needs of an agile development pipeline. By integrating test (as detailed above) into the pipeline as early as the feature design phase, we lay the groundwork for identifying issues an hour after creation rather than a year later. This will reduce agreed-upon risk, increase decision-maker confidence, and build the foundation of a robust automated regression suite.

Next up … leadership and effecting change without disrupting the landscape.
