[In this detailed opinion piece, veteran programmer Jake Simpson explains the 'most loathed' game programmer tests often used as part of game industry interviews, outlining possible methods and his recommendations for good results on both sides.]
Programmer tests are generally one of the most loathed parts of the interview process, on both sides. But every game programmer interview should include some kind of test to make sure the applicant can walk the walk as well as talk the talk.
There are a few types of tests a programming applicant can expect to see. The first is a pre-interview test, which may be given by email and may either come before or in conjunction with a phone interview or screening.
The second is an in-house test, which is given as part of the face-to-face interview and is completed on the spot.
The last type is a take-home test given after the interview, which asks the candidate to complete longer assignments closely connected to the day-to-day work the applicant can expect when employed. These are more often given to content creators (creating a level and so on) than to programmers.
How these tests are handled matters immensely, both for the candidate being grilled and the panel of smug arms-folded engineers doing the grilling. Are the interviewers asking the right questions to get the answers they need to make a good judgment call? Or are they just trotting out their favorite trick questions so they can feel vaguely clever that they know the answers and the applicant doesn't?
From the standpoint of technical testing, this engineer is firmly of the opinion that pre-testing is the way to gauge basic programming competence.
With most job applicants, the interviewers have only one day on which to base a judgment call that can impact the applicant's life -- changing jobs can often mean moving, packing up a family, and so on. The stakes are high for the interviewers too, since they'll be working with the new hire day in and day out for possibly years.
Sticking an applicant in a room to complete a technical written test, letting her chew her pencil and desperately try to remember what C++ operators can't be overloaded, is probably not the best use of that small amount of time.
Technical tests need to be like filters. They need to help the hiring company figure out which applicants they should spend their time and money on bringing in house.
Be aware, though, that this process is negative filtering. Someone who does well on a technical test isn't necessarily a good programmer and won't necessarily fit with the group, but someone who does poorly definitely won't be a good programmer.
The idea is to find out whether that someone at least sounds technically competent before they set foot in the studio. At that point, the hiring company is making a few assumptions about the person and can safely move along to other things when the interviewee shows up.
What is a Written Technical Test?
The first thing is to understand the purpose of a written technical test. Its purpose is to:
1. test domain knowledge,
2. test general programming acumen, and
3. give the interviewers an idea of the candidate's experience.
It's not designed to reveal how applicants think, uncover their deep knowledge of STL edge case implementations, or see how they react to logic problems.
What's in this Mythical Written Test?
Written tests usually contain a range of questions. They typically include:
* algorithmic questions ("Write a function to do such-and-such.")
* language questions ("What's the construction order of an inherited class?" -- for whatever language the position requires)
* domain-specific questions ("What does a vertex/pixel shader do?" "What's the equation for specular lighting?")
* and basic trig math ("What's a dot product, and what is it used for?").
When it comes to the basic trig math questions, it's considered good practice to let the candidate write solutions in any of a range of languages, rather than just C++ or C#.
Domain-specific questions are usually broken up into several sections, and the applicant might be asked to answer the questions in just one of them, since it's not reasonable to expect graphics programmers to understand A*, for example. The domain-specific material is the hardest to write, since only the hiring team knows what their studio requires. Sometimes questions can even be written for the specific person being interviewed.
How deep the questions go is a shop-by-shop decision. But be aware that just because the quiz-makers can answer their own questions doesn't mean they should expect the average programmer out there to be able to as well. This can be a problem in some programming tests, and applicants shouldn't be discouraged if they encounter this problem. It's not about (or at least it shouldn't be about) how clever the test maker is. It's about how clever the company expects its prospective employees to be.
Another problem that sometimes arises in programming tests is a logic bomb trick question. A classic example is a question that asks about the weight of an anchor in and out of a boat. With these kinds of questions, either the respondent has heard the riddle before or hasn't, and either knows the answer or doesn't. Because most respondents can't work out the solutions on the spot, the questioners don't get anything out of asking -- except maybe a little power trip. Plus, nothing is gained if the respondent does answer correctly.
When I create a programmer test, I tend to intersperse some essay questions with code ones, requiring the candidate to use his or her own voice in some of the responses.
Some applicants will cut and paste directly from the internet, which is what the essay questions are designed to catch -- direct cut-and-paste jobs are usually obvious, since the style and verbiage change drastically from answer to answer. Even if the applicant converts other people's answers into his own words, that's fine, because you have to understand an answer in order to rephrase it, which is what the test is about in the first place.
Speaking of the internet ... Some companies administer take-home tests, which some people fear allow job candidates to look up answers. Well, sure, but that's fine. That's how they're going to be working day to day -- by looking stuff up -- so why not let them do it on a test, too?
A good programmer is not defined by whether she remembers the code for a dot product; she's defined by how she uses it. There's definitely value in programmers who understand the root of what the equation means, but that's very hard to test in a written exam. That kind of thing needs to be done on site.
What to Expect
Timed tests should be the norm, whether given in-house, via email, or as a take-home exam. When not done on site, these timed tests may give the interviewee a few hours or even a full day to turn the test around. Companies generally don't give applicants more than a day or two to complete them, as the goal is for the applicant to be able to solve the problems or look up the answers independently, not troll the internet for days on end or have someone else answer the questions for them.
Something else applicants might see is a section on each question asking how long the candidate predicted it would take to answer, how long it actually took, and how relevant they judged the question to be. Applicants may be asked to send back these predictions immediately, so they can't revise them afterward to match the time actually taken.
A feedback section for open comments might also appear. These sections give the employer further clues as to how the candidates think and communicate, as well as giving direction to how they can more finely tune the test in the future.
Finally, the most important thing from the employer's perspective is being able to interpret the results. What do the answers mean? Only the exam-makers can judge what level of answer is acceptable. Companies that are very experienced at giving programmer tests often have several different people grade each test rather than just one, which helps ensure the grading is fair.
[Jake Simpson has been in the industry for longer than he cares to remember, although if he starts waxing on about working on Elisa, just buy him another beer and tell him to stop talking rubbish. He's been involved in making games for everything from the Commodore 64 all the way to the modern PC, with stops along the way to work on arcade machines for Midway in its golden era. Currently he works for Linden Lab making a second life for everyone, although as he confesses he often has his hands full with just the first one. Email him at jakesimpson100(at)yahoo.com or through his blog http://blog.jakeworld.org/.]