Turning Peer Code Review into a Game

A little while back, I wrote about an idea that a few of us had been bouncing around:  peer code review achievements.

It started out as a bit of Twitter fun – but it has since evolved into a contender for my Master’s research.

So I’ve been reading up on reputation and achievement systems, and it’s been keeping me up at night.  I’ve been tossing and turning, trying to figure out a way of applying these concepts to something like ReviewBoard.  Is there a model that will encourage users to post review requests early and often?  Is there a model that will encourage more thorough reviews from other developers?

An idea eventually sprang to mind…

Idea 1:  Two-Week Games

In a speech he gave a few years ago, Danc of Lost Garden placed the following bet:

  • If an activity can be learned…
  • If the player’s performance can be measured…
  • If the player can be rewarded or punished in a timely fashion…

…then any activity that meets these criteria can be turned into a game.

Let’s work off of this premise.

Modeled on the idea of a sprint or iteration, let’s say that ReviewBoard has “games” that last two weeks.

In a game, users score points in the following ways (sketched in code below):

  • Posting a review request that eventually gets committed gives the author 1 point.
  • A review request that is given a ship-it without a single defect found gives the author the 1.5 Multiplier on their total points.  The 1.5 Multiplier can be stolen by another player whose review request also gets a ship-it with no defects found.
  • Any user can find and file defects on a review request.
  • A defect must be either “confirmed” by the author or “withdrawn” by the defect-finder.
  • After a diff has been updated, “confirmed” defects can be marked “fixed”.  Each fixed defect gives the defect-finder and the author 1 point each.

After two weeks, a final tally is made, achievements / badges are doled out, and the scores are reset.  A new game begins.  Users can view their point history and track their performance over time.
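To make these rules concrete, here’s a rough sketch of the scoring logic in Python.  None of these names come from ReviewBoard’s actual data model – they’re invented purely for illustration:

    from collections import defaultdict
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Game:
        """One two-week game: points per user, plus who holds the 1.5 Multiplier."""
        points: dict = field(default_factory=lambda: defaultdict(int))
        multiplier_holder: Optional[str] = None

        def request_committed(self, author):
            # A review request that eventually gets committed: 1 point to the author.
            self.points[author] += 1

        def clean_ship_it(self, author):
            # A ship-it with no defects found: the author takes (or steals)
            # the 1.5 Multiplier from whoever currently holds it.
            self.multiplier_holder = author

        def defect_fixed(self, author, finder):
            # A confirmed defect fixed after a diff update: 1 point each.
            self.points[author] += 1
            self.points[finder] += 1

        def final_tally(self):
            # End of the game: apply the multiplier, report the scores, reset.
            totals = dict(self.points)
            if self.multiplier_holder in totals:
                totals[self.multiplier_holder] *= 1.5
            self.points.clear()
            self.multiplier_holder = None
            return totals

    game = Game()
    game.request_committed("alice")    # committed request: 1 point
    game.defect_fixed("alice", "bob")  # fixed defect: 1 point each
    game.clean_ship_it("alice")        # alice now holds the 1.5 Multiplier
    print(game.final_tally())          # {'alice': 3.0, 'bob': 1}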

Granted, this game is open to cheating.  But so is Monopoly.  I can reach into the Monopoly bank and grab $500 without anybody noticing.  It’s up to me not to do that, because it invalidates the game.  In this case, cheating would only result in bad morale and a poorer piece of software.  And since scores are reset every two weeks, what’s the real incentive to cheat?

Idea 2:  Track My Performance

I’ve never built a reputation system before – but Randy Farmer and Bryce Glass have.  They’ve even written a book about it.

Just browsing through their site, I’m finding quotes that suggest there are some potential problems with my two-week game idea.  In particular, I have not considered the potentially harmful effects of displaying “points” publicly on a leaderboard.

According to Farmer / Glass:

It’s still too early to speak in absolutes about the design of social-media sites, but one fact is becoming abundantly clear: ranking the members of your community – and pitting them one-against-the-other in a competitive fashion – is typically a bad idea. Like the fabled djinni of yore, leaderboards on your site promise riches (comparisons! incentives! user engagement!!) but often lead to undesired consequences.

They go into more detail here.

OK, so let’s say that they’re right.  Then how about, instead of pitting the reviewers against one another, I have the reviewers compete against themselves?

Ever played Wii Sports?  It tracks player performance on various games and displays it on a chart.  It’s really easy to see / track progress over time.  It’s also an incentive to keep performance up – because nobody wants to go below the “Pro” line.

So how about we just show users a report of their performance over fixed time intervals…with fancy jQuery charts, etc.?
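As a sketch of what that report might compute – assuming some kind of per-user event log exists; the event names below are invented – the chart data could be as simple as counts per fixed interval:

    from collections import Counter
    from datetime import date

    def performance_series(events, start, interval_days=14):
        """events: (date, kind) pairs, where kind is a made-up label such as
        'defect_found' or 'review_done'.  Returns per-interval counts."""
        buckets = Counter()
        for when, kind in events:
            buckets[((when - start).days // interval_days, kind)] += 1
        last = max((i for i, _ in buckets), default=0)
        return [{"interval": i,
                 "defects_found": buckets[(i, "defect_found")],
                 "reviews_done": buckets[(i, "review_done")]}
                for i in range(last + 1)]

    history = [(date(2010, 5, 3), "review_done"),
               (date(2010, 5, 4), "defect_found"),
               (date(2010, 5, 20), "review_done")]
    print(performance_series(history, start=date(2010, 5, 1)))
    # [{'interval': 0, 'defects_found': 1, 'reviews_done': 1},
    #  {'interval': 1, 'defects_found': 0, 'reviews_done': 1}]

Feeding that to a chart is the easy part; the harder design question is which counts deserve a “Pro” line.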

So what?

Is either of these ideas useful?  Would they increase the number of defects found per review request?  Would they increase the frequency and speed of reviews?  Would they improve user perception of peer code review?  Or would they simply be ignored?  Could they even harm a team of developers?  What are the benefits and drawbacks?

If anything, it’d give ReviewBoard some ability to record metrics, which is handy when you want to show the big boss how much money you’re saving with code review.
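For example – with entirely made-up numbers, since the point is only that recorded metrics make the arithmetic possible – the pitch might look like this:

    # Back-of-the-envelope savings estimate.  Every number below is
    # hypothetical; only the counts ReviewBoard records would be real.
    defects_caught_in_review = 120    # per quarter, from recorded metrics
    cost_to_fix_in_review = 50        # dollars of reviewer/author time
    cost_to_fix_after_release = 1000  # dollars: triage, patch, deploy, support

    savings = defects_caught_in_review * (cost_to_fix_after_release - cost_to_fix_in_review)
    print(f"Estimated quarterly savings: ${savings:,}")  # $114,000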

Might be worth looking into.  Thoughts?

3 thoughts on “Turning Peer Code Review into a Game”

  1. Gregg Sporar

    Your first approach seems more viable because the primary challenge is obtaining (as with so many social media sites) quality content. In this case, that means comments and defects from *knowledgeable* reviewers. The drawbacks of leaderboards aside, without the ability to reward the time commitment via a reputation system how will you convince folks to spend time reviewing the code of others?

    A useful model to consider would be Stackoverflow – it’s the same demographic and their community-based judgments about “who is worth listening to” provide a valuable incentive.

  2. Jorge Aranda

    I love games and therefore I love the first idea, but I’m also concerned about its unintended consequences.

    The problem is (might be) that software development itself can be seen as a cooperative game (see Alistair Cockburn’s Agile Software Development book, for instance) and any other internal game that gives incentives to behaviour that doesn’t benefit the larger game is undesirable. So the question is whether the reputation/achievements system would bring the behaviour that you want (more committed code reviewing, which should lead to higher-quality code) without bringing too much undesired behaviour (such as people attempting ridiculous hail-mary commits to get their Code Ninja badge).

    Someone told me that a group at Microsoft had a visual display where everyone could see how many bugs were assigned to each team member. The list was sorted by bug quantity; nobody wanted to be on top. But at least one developer was focusing on fixing bugs obsessively, instead of focusing on the goals that his manager needed him to focus on.

    Tracking individual performance is probably a safer and more useful idea. There are still risks: even when I track my performance individually and in private I can’t help but try to optimize for the measurement, no matter its original intention. This works well if what I’m measuring is directly related to the goal (such as running speed or juggling time) but not as well if it isn’t (such as when I measure hours worked per week).

    Sounds like a great MSc topic.

  3. Blake Winton

    For idea 1, you could make it like the Diablo II ladders, where the scores are updated, and the players are ranked, in real-time. Then periodically the ladders are reset, and a new round begins. (Re-reading, that might have been what you meant, but I got the impression from your description that you would only publish the scores once every two weeks.)

    But either of them sounds like a good idea to me. (Perhaps it would be easier to integrate into ReviewAnywhere, though.)
