You can read more of my writing over at the Meeple Like Us blog, or the Textual Intercourse blog over at Epitaph Online. You can find some information about my research interests over at my personal homepage.
One of the topics that often bubbles up in the rotating ‘issue of the day’ conveyor belt of online discussion is that of reviewers and play counts – specifically, the question of whether reviewers should include these as part of their review content so as to let people decide how seriously they should take their views. I refer you to this recent Reddit thread as an example of the discussions around the issue. We don’t include play counts on Meeple Like Us, and I’m going to use today’s editorial to talk about why.
First of all, let’s outline the case for play counts so that we can get a reasonable idea of why people might like to see them. For this I’ll directly quote rather than paraphrase to let the advocates for the position make their points without my potentially misinterpreting their opinions. Let’s go with the original poster of the thread linked above as a start:
I know there are some reviewers who do this, but I wish everyone would. I really think it provides important context. And on that subject, I do honestly believe you should play more than once to review a game, especially since so many game have variable player powers, starting postions, set ups, etc.
There’s a world of difference between three identical steam reviews, one from someone with 1 hour played, one with 20 hours played, and one with 200 hours played. That’s important context. If 1 hour played guy is complaining about balance issues I’m likely to mostly disregard those balance complaints, and rightfully so.
So many people in this thread saying it’s a bad thing because people would complain about it (“you haven’t played enough” or “you played too much”), but people are already complaining anyways when a review goes against their opinions. I prefer more data, even if I don’t take all of it into account when reading a review.
I actually do find this valuable for a variety of reasons. First is how deep did they go on the game – if it’s something with a ton of variety, it could help to know how truly varied that content is (both in terms of balance and gameplay mechanism). If it’s something without a lot, it’s just as important to see if what’s available will actually hold up to repeated play sessions and not just become stale.
Someone who has played 5,000 games but only played them twice is going to have a completely different review than someone who has played 50 games but has played the current game in question twenty times. Its up to the customer to research both types of opinions, imo, and I don’t really see an issue with that.
These aren’t all the comments in favour in that thread, but they’re the ones that are a) about this specific topic and not tangents, and b) are reasonable and not overly drenched in Reddit snark. Together they form a perfectly fair proposal with a lot of merit. There absolutely is a difference between a first impression review and a review after several plays. Structural problems in a game with regards to replayability and balance are sometimes only possible to determine after multiple sessions. Adding a note with regards to play count wouldn’t take long to do and would add a genuinely quantifiable value that would leave readers with more data than the alternative. All of these are statements with which I can agree.
I just don’t think they argue for the conclusion that ‘therefore everyone should add their play count to their reviews’.
A few months ago I posted a defense of review scores – I believe they are important as a filter for readers, and as a way for reviewers to be accountable for their own opinions. No equivocating is possible when you have to nail your colours to the mast. Sure, lots of people stop engaging with a review once they’ve seen the score but they were never your readers in the first place. Your readers aren’t drive-by traffic – they want to know what you have to say.
There are similar arguments to be made for including a play-count, and it seems on the surface like it’s the same issue and should resolve down to the same conclusion. It’s not though, and it’s because of where one finds the locus of its meaning. A review score is my summation – it reflects the final numerical scoring that I present to you as a formal part of a review. It represents a value that is the encapsulation of thought, consideration and occasionally a fair bit of adjustment over the course of writing. It represents me providing you with a judgement. I don’t just slap a number on a review and call it a day. I think about that number more than you might believe.
Play counts on the other hand represent me giving you what is essentially an audit. It requires no thought and no consideration. Many have argued that it provides context, and that’s true. However, even an honestly reported figure provides no context about the context. Play means different things to different people, and critical evaluation is a skill that gets better with use. A play count is a useless value for what people are seeking because it represents play sessions, whereas what a review needs for credibility is invested analysis. I’d be more inclined to record ‘thinking time’ than ‘play counts’ if there were any way to accurately record it. Analysis means far more than the mechanical act of playing unless a review is focusing purely on surface-level details. Some of that thinking time is likely to happen concurrently with playing time, but make no mistake – thinking time extracts value from playing time far more effectively than playing time compensates for a lack of thinking.
But even in that – some people just think better than others, y’know? Maybe I spend three hours thinking about a game and come up with conclusions at which someone else arrives after one. Balance issues are a particular example of that – there are people that see the exploitable strategies in front of them as soon as they learn the rules. Others might play for years before stumbling into them. I will identify many accessibility issues in a game the minute I set up a board. I can write a teardown on many games after the first time I play them. That’s just not going to be possible for many others. What does play count even mean in that context? It’s a number, but not a meaningful one.
The simple fact is that someone that doesn’t critically engage with a game won’t have a meaningfully different view after one hundred plays than they will have after one. Someone that plays once and then thinks hard about that game for six hours afterwards will have an awful lot more of value they can say than someone simply reporting on the play sessions.
That’s not to say that multiple plays of a game aren’t vital, because they are. Multiple plays are necessary to ensure that the raw material that goes into consideration of a game is drawn from suitably varied sample points. Repeated play helps ensure early freak occurrences and novelty don’t result in critical over-steering. It helps guard against a victory or a loss unduly tainting appreciation. Some people simply enjoy games more when they’re winning, and it’s important to know if a game is still fun when you’re not. Reporting on a play count though doesn’t capture any of that. Despite carrying no meaning, people will draw their own inevitably erroneous conclusions by virtue of its presence.
This is the crux of the issue from my perspective: a review score conveys a meaning chosen by me. A play count conveys a meaning chosen by you, and you weren’t there when I played the game. Probably. Hi Pauline!
Others in the linked thread have pointed out it would be more valuable to see how often the game was played at different player counts, or for a reviewer to document their methodology for play. Again, these are undoubtedly quantifiable metrics, but I think this mistakes a provision of data for a provision of information. I don’t believe anyone is informed by any of these things. Instead, I think people are likely to be misinformed, because fundamentally it all misses the truth of the matter – all genuinely critical reviews are inherently subjective, context-dependent, and driven by biases and personal preferences. You can’t turn a review into something more objective by filtering it through a flawed personal algorithm fuelled by unsophisticated data-points. It doesn’t matter how many data points you get, or how you weight them. Garbage in, and garbage out… and I think play count is, in this context, a garbage measure.
A better measure is play count multiplied by play variety multiplied by thinking time multiplied by thinking quality. Even then, the numbers you could assign to any part of that would be garbage measures, and what would come out would be a garbage calculation that told you nothing. I’m willing to bet I can tell a lot more from a single play of a game than someone that hasn’t been reviewing games on the internet for the past few years. Game literacy is like literacy in anything – you learn the patterns, tropes and clichés. Some people might be able to tell you about the rules. Some, through virtue of experience, can tell you why those rules exist. Others, and this is more important for a review, can tell you how those rules interact to create or facilitate a feeling or an experience.
So, how do you arrive at a level of reliance on a review if reporting on these metrics has no value? How do you know a reviewer has played a game enough that you can trust their opinion? How do you know they have enough experience of play to give their review the weight you want?
You don’t, but at least you’re not left in the position of thinking you do.
The Internet has created a culture where quantification is considered a goal in and of itself, and an almost fetishistic belief that numbers are the same thing as meaning. We value what we can measure, but we can’t always measure that which should be valued. This leads to a shallowfication of knowledge. The cursory nature of online engagement means that these numbers are often taken in isolation, ripped out of their context, and lauded or loathed as a product in and of themselves. When they are brought together in a new context it’s often without respect for the original nuance.
That’s a big problem because nowadays people don’t build the same kind of personal relationship to independent critics as they once did. Larger sites and review aggregator outlets are the clearing houses of criticism and this means that things often feel disjointed and impersonal. Relatively few of the larger review houses have individual critics sufficiently differentiable or prolific that they can have a predictable voice. Their coverage can seem almost willfully contradictory as a result.
This disconnect between reviewer and reader is critical in understanding the call for this kind of data. People are asking for the important meaning that alienation has washed away. What they want though can only really emerge with engagement. Without the host environment of a larger body of work, secondary values such as trust or even respect are effectively amputated. The raw numbers – designed to live in a particular soil – are given life and importance of their own when they don’t warrant it. I may have disagreed with Roger Ebert on a large number of films, but I had a personal relationship with his work that ensured it retained meaning even when his opinion diverged from my own. That doesn’t exist so much now, and even the genuinely high profile reviewers are still rarely household names because their outlet is so much bigger than them. You don’t get Rodney Reviewer or Carla Critic’s view of a game. You get a number that IGN spat out that was then filtered through Metacritic into something so distant from its meaning that it may as well be randomly generated. In my defense of review scores I talked about that and noted the dangers of quantification and attempting to compare the incomparable. I live with that in review scores because at least it’s being done with a number to which the author assigned meaning. It’s preposterous to hope anything of worth can emerge from a number which derives from the soulless and perfunctory audit implied by play counts.
A secondary consequence of the reviewer-audience disconnect is that much of the culture on the Internet loathes criticism. People have lost, or abandoned, the sense of their own important role in proceedings, as well as the connection with the individual voice of an individual critic. You often see people on Reddit or BGG pompously dismissing the work of reviewers by saying ‘Just read the rule book’. It’s definitely true that for some reviews that’s probably just as good in real terms. Bad reviews are bad reviews, and they probably merit this response. However, there is rarely any sophistication in this pronouncement – rarely any target-finding. It’s often just a blanket dismissal of the entirety of all critical output. What you’ve found in those circumstances is someone that is admitting to their own failure or inability to engage with the criticism they have read or watched. They see no credibility in reviews because they have mistaken what the best reviews are supposed to achieve. More critically, they’ve also fallen foul of believing that critical work is for idle digestion.
A review should be more than something you passively consume. It should be something with which you critically engage. If you want to get the real meaning of a review, you need to engage in a creative act of interpretation of your own. If you pay attention to a review, at least the ones that aren’t terrible, you’ll find that there’s more going on than the word-count might imply. People ask for play counts for many reasons, but for some it’s because they’re not willing to actually evaluate the meaning of what a review is saying.
Reviews report on their play count all the time if you just pay attention. Paying attention means giving a review more than a cursory once-over, and also considering how it fits in with the larger body of a reviewer’s work. I know that sounds a lot more intensive than me telling you how many times I’ve played a game, but it’s the only way to get the actual information that is being sought. Believe it or not, people do trust me to guide them to the best game purchases for them – and that’s because we have a connection. It may be parasocial and largely one-way (save for comments and our Patreon Discord), but make no mistake – that connection is vitally important for a reviewer, and few that take the job of criticism seriously would throw it away.
One of the remarks I made in the linked Reddit thread is that if the text of a review doesn’t convince someone the reviewer played it enough, a play count won’t add anything new. It supplies no new critical content, and importantly – as discussed above – adds no actual context. It might create in someone a sense of confidence, but that’s entirely an illusion constructed in the reader’s head.
It’s going to be obvious in a review that is actually worth that description whether someone is making an off-the-cuff judgement or a critique that is informed by deeper experience. If a reviewer discusses replayability in something other than the most surface way, there’s your evidence of multiple plays. If they discuss how the game works at different player counts, there’s your evidence of the same. If that isn’t sufficiently meaningful or informative, a play count doesn’t alchemically transmute it into something better. If a single number in a review is expected to do critical heavy lifting, it had better be the result of some serious thought and consideration of its own. A play count just isn’t up to that task.
So, how many times should a reviewer play a game before they review it? It turns out the answer was there all along – they should play it enough times. For some reviewers and some games, that will be once. For others it might be dozens of times. Reviewers should play games until they feel their experience has equipped them with the perspective to do the game justice in an analysis. The only person placed to make that decision is the reviewer, and if that reviewer is any good they’ll take the job seriously enough to reach that point of enough.
In the end, reviews have to be a collaboration of trust. I trust that you’ll read what I have written and engage with it. You trust that I will write something worth your time and make the effort to ensure I’m able to do that. Know my methodology and my diligence through my writing, because you can’t get an appreciable sense of it any other way. Don’t ask me to insult you with something as meaningless as my play counts. You’re too important to me to try to mislead you in that way.