[In this reprinted #altdevblogaday opinion piece, Blitz Game Studios' technical manager Lee Winder reviews the pros and cons of several code review methods his team has experimented with over the years.]
In my last couple of posts I've covered The Benefits of Code Reviews and Why Code Reviews Can Fail. But code reviews can be carried out in a range of ways, and how they are done can produce very different results.
So, I wanted to follow those up by covering the different review methods we've used in the past and the ways in which each one worked or, in a number of cases, didn't.
Just to note that people commented on the previous posts with ways they've carried out reviews. I'm not covering those here, as I want to stick to methods I have first-hand experience of, so please do comment on any methods you've used at the end of this post.
Paper-Based Code Reviews
One of the simplest methods we used was to take the code to be reviewed, print it out in whatever format you want (we found an A5 booklet worked best), and pass the copies around. People would usually be given a couple of days to review the code before everyone involved sat around a table and discussed what they had found.
The advantage of this is that it gets people talking and focuses the discussion within a specific time frame. You're sitting around a table together, and it's a great way to get people participating and interacting with each other.
It also makes it easier to review larger code blocks, especially interacting elements. Often people would pull up the code in an IDE based on what they'd seen in the printout, which allowed them to make personal notes before the review.
One of the negatives we found was that it seemed to force people into thinking they must contribute, rather than simply being happy with what was in front of them. Often this resulted in comments that I suspect would not have been raised had the review been carried out another way.
People also had a tendency to review the code 30 minutes before the review meeting, which rather rendered the longer lead-in time pointless.
It's also a drain on resources and time. It's surprising how long it takes to get code from an IDE into a suitable (and readable) format, especially if all you have is a word processor. And wasting all that paper is something I'd like to avoid.
Smaller, per-change commits were never reviewed in this way. Getting people into a room to review small changes is simply not time-efficient.
Forum-Based Code Reviews
One of our most short-lived methods was posting code onto an internal forum when the change didn't justify a full sit-down session. This worked very well at making the code easily visible – even to people outside the review group, though that might not be suitable in every case. It also reduced set-up time and cut down on waste.
But discussing specific parts of the code simply didn't work. Referencing line 316 of PlayerCamera.cpp in a post makes it very hard for people to link comments to the code, and while forums are fine for limited threaded discussions, quoting and re-quoting code quickly becomes hard to track, read, and distil into what actually needs to change.
As I mentioned at the start, this was a short-lived idea, for pretty good reasons.
Automated Code Reviews
By automated I mean using a tool that has been created specifically for code reviews, one which can display code correctly, generate or show diffs, and allow people to reference files and lines of code easily.
Of course, when using automated tools you have more scope as to when to do your reviews (do you review individual diffs, whole files, or something in between?), and I'll spend the rest of this article going over the different methods we've used so far.
Blocking Code Reviews
Blocking code reviews are used when you want every piece of code reviewed before it is even committed to the repository. When developers have made any changes to the code base, they fire off a review and cannot check in their work until everyone in the review group has flagged it as shippable. Any requested changes need to be made before the code can be checked into the repository.
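As an illustration of how the blocking step might be wired up, here's a minimal sketch using git purely as an example (the post doesn't name the VCS or review tool Blitz used, and the file names and 'Reviewed-by:' convention are my own assumptions): a commit-msg hook that rejects any commit whose message lacks a review sign-off. A real blocking setup would normally enforce this server-side rather than in a local hook.

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name "Dev"

# Install a commit-msg hook that blocks commits without a review sign-off.
cat > .git/hooks/commit-msg <<'EOF'
#!/bin/sh
# $1 is the path to the proposed commit message.
grep -q '^Reviewed-by:' "$1" || {
    echo "commit rejected: no Reviewed-by: sign-off" >&2
    exit 1
}
EOF
chmod +x .git/hooks/commit-msg

echo "int x;" > PlayerCamera.cpp
git add PlayerCamera.cpp

# An unreviewed commit is blocked...
git commit -m "Tweak camera" && echo "unexpected" || echo "blocked as expected"

# ...and only goes through once the review group has signed off.
git commit -m "Tweak camera

Reviewed-by: A. Reviewer" && echo "committed"
```

The developer is stopped dead at every check-in until a reviewer signs off, which is exactly the stop/start problem described below.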
This didn't work. Developers need to get into a flow, and having to constantly stop and wait for permission to continue, or to review someone else's code, stops them in their tracks and is a serious drain on productivity.
But it could be improved. The blocking edict only applied to work committed to the trunk. Working on an individual branch, where check-ins are free and quick, and then putting that work up for review when merging to the trunk, would let developers continue 'in flow' while the merge process includes a review step.
Also, using a source control system that allows shelving, or pushing changes to local storage before committing to the trunk, would avoid the stop/start nature of this method.
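The branch-then-review flow suggested above could look something like this, again using git only as an example VCS (branch and file names are invented): check-ins on a private branch stay cheap, and only the merge back to the trunk produces a single diff for the review group.

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name "Dev"

echo "// trunk" > PlayerCamera.cpp
git add PlayerCamera.cpp
git commit -qm "Initial trunk state"
trunk=$(git rev-parse --abbrev-ref HEAD)   # 'master' or 'main', depending on git version

# Check-ins on the private branch are free and quick -- no review required yet.
git checkout -qb feature/camera-shake
echo "void ApplyShake();" >> PlayerCamera.cpp
git commit -qam "WIP: shake entry point"
echo "void DampShake();"  >> PlayerCamera.cpp
git commit -qam "WIP: damping"

# Only the merge back to the trunk triggers a review: one diff covers the
# whole feature, so the developer stayed 'in flow' the entire time.
git diff "$trunk"...feature/camera-shake > review.diff
grep -c '^+' review.diff
```

The three-dot range diffs against the merge base, so the review sees exactly what the branch would bring to the trunk.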
Non-Blocking Code Reviews
The next obvious step: every check-in still needs to be reviewed, so the review is generated and sent off to a selected group, but the developer is then free to commit the code, with any changes or improvements made in a later commit.
This is beneficial on two counts. The original developer is able to continue working, either on the same area of code or another, without conflicting with any waiting commits, and reviewers are able to continue their own work, leaving the reviews they need to complete until a suitable time, perhaps performing a batch in one go.
While this worked well at making sure everything is reviewed, we found you need to monitor the size of the review groups and check that people are not being overloaded. Coming to your filtered reviews at the end of the day to find 10+ waiting for you can be quite challenging.
Feature-Based Code Reviews
This is generally where we sit now when it comes to diff-based code reviews, and it puts the responsibility back in the hands of the developer.
If developers are working on a new feature, they are free to check in their changes with a view to putting the whole feature up for review when it is finished. This greatly reduces the number of reviews and, for new code, allows the developer to present the whole feature in context. It's very similar to paper-based reviews, but in a more suitable environment.
But there is one caveat.
It only works if there has been a decent design stage for the feature being written. Having people bring up basic design flaws when the thing is practically finished can be a killer blow, but that exposes a problem with the workflow more than with the review method itself.
This can obviously be extended to large-scale changes that someone might be working on. They can check in their small changes, with a review done on the final product, perhaps generating the diff from a collection of commits rather than just one.
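Generating one review diff from a collection of commits is straightforward in most modern version control systems. A hypothetical git sketch (file and message names invented): three small check-ins are collapsed into a single diff for review.

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name "Dev"
git commit --allow-empty -qm "Trunk base"

# Three small check-ins made while the change was in progress.
for n in 1 2 3; do
    echo "change $n" >> Feature.cpp
    git add Feature.cpp
    git commit -qm "Small change $n"
done

# One diff spanning all three commits, rather than three separate reviews.
git diff HEAD~3 HEAD > review.diff
wc -l < review.diff
```

Reviewers then see the final product as a single change, with the intermediate steps still preserved in history.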
Bug fixes are another matter. Once the main implementation is done and people are into a bug-fixing stage, fixes should be reviewed as and when they are committed (usually in the same manner as non-blocking reviews). This is especially important coming up to a milestone, and it is how continually developed projects will eventually spend most of their time.
Obviously there are other methods we could try: review buddies, review masters (one person, usually the lead or programming manager, assigned responsibility for checking all commits), e-mail based discussions, 'important changes only' reviews, and plenty of other variations on the same theme.
We haven't tried these because the automated reviews have been working very well for us so far.
Review buddies would be the one I'd like to try (alongside, rather than replacing, the automated review process), and I would usually pair two developers working on similar parts of the code – though I would hope those developers were talking to each other already!
One thing I've not covered is who does these reviews, which can be just as important as what is being reviewed.
Initially we used review groups, each containing two or three developers mixed quite randomly. Every review would be assigned to a random group and fired off. While this worked well at spreading the work and knowledge of the code base, it wasn't producing the best results.
Having one person involved in two out of three reviews for a particular feature, only for someone else to replace them on the last one, would produce duplicated, or sometimes conflicting, feedback. And the new person isn't as up to date with the feature as the original reviewers, which can limit the scope of the review and make it take longer.
While this does spread the knowledge throughout the team, it isn't necessary for everyone to know everything about the code base.
So now reviews are generally more focused, being sent directly to the specific people who will have the best input on a particular feature. This does need monitoring to make sure people are not being overloaded (or overlooked), but that's a pretty simple management issue.
People can also be asked whether they want to be involved in a certain review. Do they have a particular interest in that area, or does it use a new piece of technology that they would like experience with?
So that's a brief overview of the code review systems we've used over the past few years. We've settled on our automated code reviews as they produce the best results in the shortest time and give you much more scope in regards to how and what is reviewed. Choosing the correct piece of software to run this system is an important decision and I'll cover why we use particular tools to do our reviews in the future.
[This piece was reprinted from #AltDevBlogADay, a shared blog initiative started by @mike_acton devoted to giving game developers of all disciplines a place to motivate each other to write regularly about their personal game development passions. Title image by fatboyke. Used under license]