
Review Board Statistics Extensions: Karma, Stopwatch, and FixIt

I just spent the long weekend in Ottawa and Québec City with my parents and my girlfriend Em.

During the long drive back to Toronto from Québec City, I had plenty of time to think about my GSoC project, and where I want to go with it once GSoC is done.

Here’s what I came up with.

Detach Reviewing Time from Statistics

I think it’s a safe assumption that my reviewing-time extension isn’t going to be the only one to generate useful statistical data.

So why not give extension developers an easy mechanism to display statistical data for their extension?

First, I’m going to extract the reviewing-time recording portion of the extension. Then RB-Stats (or whatever I end up calling it) will introduce its own set of hooks for other extensions to register with.  This way, if users want some stats, there will be one place to go to get them.  And if an extension developer wants to make some statistics available, a lot of the hard work will already be done for them.

And if an extension has the capability of combining its data with another extension’s data to create a new statistic, we’ll let RB-Stats manage all of that business.
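To make that a little more concrete, here’s roughly the shape I’m picturing, sketched in Python.  Everything in it is hypothetical: the class names, the registration scheme, and the (label, value) return format are just placeholders, and none of it exists in Review Board or RB-Stats yet.

    class StatisticHook(object):
        """What an extension would subclass to expose a statistic to RB-Stats."""

        def get_statistic(self, user):
            """Return a (label, value) pair describing this statistic for `user`."""
            raise NotImplementedError


    class ReviewingTimeHook(StatisticHook):
        """e.g. Stopwatch registering its total reviewing time with RB-Stats."""

        def get_statistic(self, user):
            total_seconds = 0  # ...would be summed from Stopwatch's stored timings
            return ('Total reviewing time (seconds)', total_seconds)


    def gather_statistics(user, registered_hooks):
        """RB-Stats would walk every registered hook and collect its data in one place."""
        return [hook.get_statistic(user) for hook in registered_hooks]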

Stopwatch

The reviewing-time feature of RB-Stats will become an extension on its own, and register its data with RB-Stats.  Once RB-Stats and Stopwatch are done, we should be feature equivalent with my demo.

Review Karma

I kind of breezed past this in my demo, but I’m interested in displaying “review karma”.  Review karma is the reviews/review-requests ratio.
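In code it’s about as simple as it sounds.  A quick sketch (the argument names are made up):

    def review_karma(reviews_written, review_requests_posted):
        """Review karma as described above: reviews given per review request posted."""
        if review_requests_posted == 0:
            return 0.0
        return reviews_written / float(review_requests_posted)

    # e.g. 12 reviews written against 8 review requests posted:
    # review_karma(12, 8) -> 1.5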

But I’m not sure karma is the right word.  It suggests that a low ratio (many review requests, few reviews) is a bad thing.  I’m not so sure that’s true.

Still, I wonder what the impact of displaying review karma would be, not just in the RB-Stats statistics view, but next to user names.  Will there be an impact on review activity when we display this “reputation” value?

FixIt

This is a big one.

Most code review tools allow reviewers to register “defects”, “todos” or “problems” with the code up for review.  This makes it easier for reviewees to keep track of things to fix, and things that have already been taken care of.  It’s also useful in that it helps generate interesting statistics like defect density and defect detection rate (assuming Stopwatch is installed and enabled).
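For the curious, here’s roughly how I’d compute those two numbers.  Treat it as a sketch; the exact definitions may change (defects per thousand lines is just one common convention for density):

    def defect_density(defects_found, lines_reviewed):
        """Defects found per thousand lines of reviewed code."""
        if lines_reviewed == 0:
            return 0.0
        return defects_found / (lines_reviewed / 1000.0)


    def defect_detection_rate(defects_found, reviewing_hours):
        """Defects found per hour spent reviewing -- this is where Stopwatch comes in."""
        if reviewing_hours == 0:
            return 0.0
        return defects_found / float(reviewing_hours)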

I’m going to tackle this extension as soon as RB-Stats, Stopwatch and Karma are done.  At this point, I’m quite confident that the current extension framework can more or less handle this.

Got any more ideas for me?  Or maybe an extension wish-list?  Let me know.

Review Board Statistics Extension – Demo Time

If I’ve learned anything from my supervisor, it’s to demo. Demo often. Step out of the lab and introduce what you’ve been working on to the world. Hit the pavement and show, rather than tell.

So here’s a video of me demoing my statistics extension for Review Board.  It’s still in the early phases, but a lot of the groundwork has been taken care of.

And sorry for the video quality.  Desktop capture on Ubuntu turned out to be surprisingly difficult for my laptop, and that’s the best I could do.

So, without further ado, here’s my demo (click here if you can’t see it):

Not bad!  And I haven’t even reached the midterm of GSoC yet.  Still plenty of time to enhance, document, test, and polish.

If you have any questions or comments, I’d love to hear them.

Turning Peer Code Review into a Game

A little while back, I wrote about an idea that a few of us had been bouncing around:  peer code review achievements.

It started out as a bit of Twitter fun – but now it has evolved, and actually become a contender for my Master’s research.

So I’ve been reading up on reputation and achievement systems, and it’s been keeping me up at night.  I’ve been tossing and turning, trying to figure out a way of applying these concepts to something like ReviewBoard.  Is there a model that will encourage users to post review requests early and often?  Is there a model that will encourage more thorough reviews from other developers?

An idea eventually sprang to mind…

Idea 1:  2-Week Games

In a speech he gave a few years ago, Danc of Lost Garden placed the following bet:

  • If an activity can be learned…
  • If the player’s performance can be measured…
  • If the player can be rewarded or punished in a timely fashion…
  • Then any activity that meets these criteria can be turned into a game.

Let’s work off of this premise.

Modeled on the idea of a sprint or iteration, let’s say that ReviewBoard has “games” that last 2 weeks.

In a game, users score points in the following way (a rough tallying sketch follows the list):

  • Posting a review request that eventually gets committed gives the author 1 point
  • A review request that is given a ship-it, without a single defect found, gives the author The 1.5 Multiplier on their total points.  The 1.5 Multiplier can be stolen by another player if they post a review request that also gets a ship-it without any defects being found.
  • Any user can find/file defects on a review request
  • A defect must be “confirmed” by the author, or “withdrawn” by the defect-finder.
  • After a diff has been updated, “confirmed” defects can be “fixed”.  Each fixed defect gives the defect-finder and author 1 point each.
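Here’s that rough tallying sketch.  The event format and the idea of replaying a simple event log are purely mine for illustration; none of this is real ReviewBoard code:

    from collections import defaultdict

    MULTIPLIER = 1.5

    def tally_game(events):
        """Tally one two-week game from a list of event dicts (an invented format)."""
        points = defaultdict(int)
        multiplier_holder = None  # whoever currently holds The 1.5 Multiplier

        for event in events:
            if event['type'] == 'request_committed':
                # A review request that eventually gets committed: 1 point to its author.
                points[event['author']] += 1
            elif event['type'] == 'clean_ship_it':
                # A ship-it with no defects found steals the multiplier for that author.
                multiplier_holder = event['author']
            elif event['type'] == 'defect_fixed':
                # A confirmed defect that gets fixed: 1 point each to author and finder.
                points[event['author']] += 1
                points[event['finder']] += 1

        scores = dict(points)
        if multiplier_holder in scores:
            scores[multiplier_holder] *= MULTIPLIER
        return scores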

After two weeks, a final tally is made, achievements / badges are doled out, and the scores are reset.  A new game begins.  Users can view their point history and track their performance over time.

Granted, this game is open to cheating.  But so is Monopoly.  I can reach into the Monopoly bank and grab $500 without anybody noticing.  It’s up to me not to do that, because it invalidates the game.  In this case, cheating would only result in bad morale and a poorer piece of software.  And since scores are reset every two weeks, what’s the real incentive to cheat?

Idea 2:  Track My Performance

I’ve never built a reputation system before – but Randy Farmer and Bryce Glass have.  They’ve even written a book about it.

Just browsing through their site, I’m finding quotes that suggest there are some potential problems with my two-week game idea.  In particular, I have not considered the potentially harmful effects of displaying “points” publicly on a leader-board.

According to Farmer / Glass:

It’s still too early to speak in absolutes about the design of social-media sites, but one fact is becoming abundantly clear: ranking the members of your community – and pitting them one-against-the-other in a competitive fashion – is typically a bad idea. Like the fabled djinni of yore, leaderboards on your site promise riches (comparisons! incentives! user engagement!!) but often lead to undesired consequences.

They go into more detail here.

Ok, so let’s say that they’re right.  Then how about instead of pitting the reviewers against one another, I have the reviewers compete against themselves?

Ever played Wii Sports?  It tracks player performance on various games and displays it on a chart.  It’s really easy to see / track progress over time.  It’s also an incentive to keep performance up – because nobody wants to go below the “Pro” line.

So how about we just show users a report of their performance over fixed time intervals…with fancy jQuery charts, etc?
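Something like this, computed per user, would probably be enough to drive those charts.  Again, just a sketch; it isn’t tied to ReviewBoard’s models at all:

    from collections import Counter

    def reviews_per_interval(review_dates, interval_days=14):
        """Bucket one user's review dates into fixed-length intervals, oldest first."""
        if not review_dates:
            return []
        start = min(review_dates)
        buckets = Counter((d - start).days // interval_days for d in review_dates)
        return [buckets.get(i, 0) for i in range(max(buckets) + 1)]

    # Feed the resulting list of counts to whatever charting library draws the graph.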

So what?

Are either of these ideas useful?  Would they increase the number of defects found per review request?  Would they increase the frequency and speed of reviews?  Would they improve user perception of peer code review?  Would they be ignored?  Or could they harm a team of developers?  What are the benefits and drawbacks?

If anything, it’d give ReviewBoard some ability to record metrics, which is handy when you want to show the big boss how much money you’re saving with code review.

Might be worth looking into.  Thoughts?