Category Archives: Extensions

Stallin’…

I know, I know.  I left you all on the edge of your seats with my last blog post, and I still haven’t posted my idea for recognizing good code review.

I’m bogged down with school work, and I’m aiming to have the first draft of my research paper done next week.  So that’s taking 100% of my resources.

Just be patient.  I’ll post my idea soon.

Recognizing Good Code Review

While the benefits of code review are proven, documented, numerous and awesome, it doesn’t change the fact that most people, in general, don’t like doing it.

I guess code review just isn’t really all that fun.

So a few months ago, I broadcast the idea of turning code review into a game. It was my way of trying to mix things up – “let’s add points, and have reviewers/developers competing to be the best participant in the code review process”.

Well, if there’s one thing that my supervisor Greg has taught me, it’s that I shouldn’t rush headlong into something before all of the facts are in.  So before I decide to do something like game-ifize code review, I should take a look at some prior work in the area…

Enter this guy:  Sebastian Deterding.

In particular, check out the following slide-show.  Flip through it if you have the time.  If you don’t have the time, scroll down to where I get to the salient point with respect to game-ificating code review.

Here’s the slide-show. Be sure to read the narrative at the bottom.

The Salient Point

Sebastian seems to be saying that adding points to apps and trying to incite competition does not make something a game.  If it did, then this should be countless hours of fun.

Without play, there is no game. Points do not equal a game.  It’s not nearly that simple.

Free Pizza and Pop

I’m going to divert for a second here.

Last week, a company set themselves up a couple of booths in the lobby of the Bahen Center where I work.  They were there to recruit university students to work for their company – either as interns, or full-timers.

They were also handing out free pizza and pop.

Needless to say, I wanted a few slices – but I figured it would be polite if I engaged them in conversation before waltzing off with some of the free food and drink they’d brought.

So I sparked up a conversation with one of the recruiters, and he told me about the company.  I’m going to call this recruiter Vlad.

I ended up gently steering the conversation towards code review, and I asked my inevitable question:

“So, do you guys do code review?”

I felt like a dentist asking a patient if he’s been flossing.  Vlad waffled a bit, but the general impression was:

“Not as much as we should.  We don’t have a prescribed workflow. It’d be hard to persuade all of the teams to do it.”

And then we started talking about code review in general.  It turns out that Vlad had worked in a few companies where they’d done code review, and he always felt a little short-changed.  He said something along the lines of:

“I never felt compelled to do reviews.  They just sort of happened…and I did it, and it felt like…unrecognized effort.  I mean, what’s the incentive?  Do you know what I mean?  There’s incentive for the software, but I’m talking incentive for me.  And some people did really lousy reviews…but my reviews were treated the same as theirs.  I didn’t get recognized, and didn’t get rewarded if I did a good review.  So it was hard for me to do them.  I want to be recognized for my good reviews, for my good contributions.”

I wish I’d had a tape-recorder running so I could have gotten Vlad’s exact words.  But that’s what I remember him saying.

Feedback and Recognition

Maybe instead of trying to game-ulize code review, I can instead hear what Vlad is saying and work off of that.

With the code review that Vlad participated in, all of the feedback went to the code author, and none went to the reviewers.  And the reviewers are the ones who are doing all of the heavy lifting!  As a reviewer, Vlad also wants feedback, and recognition for code review done well.

There’s a company in Toronto that specializes in feedback like this.  They’re one of the major players in the Toronto start-up scene, and have built a pretty sweet suite of tools to facilitate quick and easy feedback/recognition.

The company is called Rypple.  And maybe that’s the name of the application, too.  (checks website) Yeah, it’s both.

So Rypple has this feature called Kudos that lets people publicly acknowledge the good work of their team.

Normally, I don’t pimp companies.  And it upsets me when people comment on my blog, and their sub-text is to try to sell their product or service.  However, I think this video is relevant, so I’m posting their demo video so you can see how Kudos work:

Click here if you can’t see the video.

The Idea

So Rypple’s idea is to have a feed that the team subscribes to, and publicly display things like Kudos.  The badges for the Kudos are also limited in how many you can give per week, so they’re a valuable commodity that can’t just be handed out all over the place.  Cool idea.
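That weekly limit is what keeps Kudos scarce, and it’s simple to model.  Here’s a minimal sketch in plain Python (my own toy model, not Rypple’s actual implementation; the limit of 3 per week is an assumption I made up):

```python
from collections import defaultdict
from datetime import date

WEEKLY_KUDOS_LIMIT = 3  # hypothetical quota; Rypple's real limit may differ


class KudosLedger:
    """Tracks how many kudos each user has given in the current ISO week."""

    def __init__(self, limit=WEEKLY_KUDOS_LIMIT):
        self.limit = limit
        self._given = defaultdict(int)  # (user, year, week) -> count

    def give(self, from_user, to_user, today=None):
        """Record one kudo; return False if from_user's weekly budget is spent."""
        today = today or date.today()
        year, week, _ = today.isocalendar()
        key = (from_user, year, week)
        if self._given[key] >= self.limit:
            return False  # out of kudos this week -- they stay a valuable commodity
        self._given[key] += 1
        return True
```

The counter keys on the ISO year/week pair, so the budget resets automatically when a new week starts.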

So there’s one approach – use a service like Rypple to give your reviewers better feedback and recognition.

Or maybe we could build an extension for Review Board that does something similar, and more oriented around code review.

It’s not oriented like a game, like I had originally envisioned.  But somehow, I think this idea has more meaning and traction than just “adding points”.

More on this idea in a few days.  But please, comment if you have any thoughts or ideas to add.

Filing Defects in Review Board

In my last post, I talked about an extension for Review Board that would allow users to register “defects”, “TODOs” or “problems” with code that’s up for review.

After chatting with the lead RB devs for a bit, we’ve decided to scrap the extension.

[audible gasp, booing, hissing]

Instead, we’re just going to put it in the core of Review Board.

[thundering applause]

Defects

Why is this useful?  I’ve got a few reasons for you:

  1. It’ll be easier for reviewees to keep track of things left to fix, and similarly, it’ll be harder for reviewees to accidentally skip over fixing a defect that a reviewer has found
  2. My statistics extension will be able to calculate useful things like defect detection rate, and defect density
  3. Maybe it’s just me, but checking things off as “fixed” or “completed” is really satisfying
  4. Who knows, down the line, I might code up an extension that lets you turn finding/closing defects into a game

However, since we’re adding this to the core of Review Board, we have to keep it simple.  One of Review Board’s biggest strengths is in its total lack of clutter.  No bells.  No whistles.  Just the things you need to get the job done.  Let the extensions bring the bells and whistles.

So that means creating a bare-bones defect-tracking mechanism and UI, and leaving it open for extension.  Because who knows, maybe there are some people out there who want to customize what kind of defects they’re filing.

I’ve come up with a design that I think is pretty simple and clean.  And it doesn’t rock the boat – if you’re not interested in filing defects, your Review Board experience stays the same.

Filing a Defect

I propose adding a simple checkbox to the comment dialog to indicate that this comment files a defect, like so:

Comment Defect Checkbox Screenshot

No bells. No whistles. Just a simple little checkbox.

While I’m in there, I’ll try to toss in some hooks so that extension developers can add more fields – for example, the classification or the priority of the defect.  By default, however, it’s just a bare-bones little checkbox.

So far, so good.  You’ve filed a defect.  Maybe this is how it’ll look in the in-line comment viewer:

The inline comment viewer is showing that a defect report has been filed.

A defect has been reported!

Two Choices

A reviewer can file defect reports, and the reviewee is able to act on them.

Let’s say I’m the reviewee.  I’ve just gotten a review, and I’ve got my editor / IDE with my patch waiting in the background.  I see a few defect reports have been filed.  For the ones I completely agree with, I fix them in my editor, and then go back to Review Board and mark them as Fixed.

The defect report has been marked as being fixed.

All fixed!

It’s also possible that I might not agree with one or more of the defect reports.  In this case, I’ll reply to the comment to argue my case.  I might also mark the defect report as Pass, which means, “I’ve seen it, but I think I’ll pass on that”.

The defect report has been marked as "pass".

I think I'll pass on that, thanks.
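The whole lifecycle above boils down to a tiny state machine: a comment either files a defect or it doesn’t, and a filed defect goes from open to either Fixed or Pass.  Here’s a rough sketch in plain Python (my own illustration, not the actual Review Board model or its field names):

```python
from enum import Enum


class DefectStatus(Enum):
    OPEN = "open"
    FIXED = "fixed"    # reviewee fixed the problem in their patch
    PASSED = "pass"    # reviewee saw it, but is passing on it


class Comment:
    """A review comment that may optionally file a defect (hypothetical model)."""

    def __init__(self, text, files_defect=False):
        self.text = text
        # The proposed UI is just a checkbox: this comment files a defect, or not.
        self.defect_status = DefectStatus.OPEN if files_defect else None

    def mark_fixed(self):
        if self.defect_status is None:
            raise ValueError("This comment didn't file a defect.")
        self.defect_status = DefectStatus.FIXED

    def mark_pass(self):
        if self.defect_status is None:
            raise ValueError("This comment didn't file a defect.")
        self.defect_status = DefectStatus.PASSED
```

Extension developers could then hang extra fields (classification, priority) off the same object without changing the basic open/fixed/pass flow.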

These comments and defect reports are also visible in the review request details page:

A defect report has been filed, and we're in the review request detail page.

A defect has been filed.

The defect is marked as fixed, and we're in the review request detail page.

All fixed up.

We're passing on the defect report, and we're in the review request detail page.

It's all good - just pass this defect report.

Thoughts?

What do you think?  Am I on the right track?  Am I missing a case?  Does “pass” make sense?  Will this be useful?  I’d love to hear your thoughts.

Review Board Statistics Extensions: Karma, Stopwatch, and FixIt

I just spent the long weekend in Ottawa and Québec City with my parents and my girlfriend Em.

During the long drive back to Toronto from Québec City, I had plenty of time to think about my GSoC project, and where I want to go with it once GSoC is done.

Here’s what I came up with.

Detach Reviewing Time from Statistics

I think it’s a safe assumption that my reviewing-time extension isn’t going to be the only one to generate useful statistical data.

So why not give extension developers an easy mechanism to display statistical data for their extension?

First, I’m going to extract the reviewing-time recording portion of the extension. Then, RB-Stats (or whatever I end up calling it), will introduce its own set of hooks for other extensions to register with.  This way, if users want some stats, there will be one place to go to get them.  And if an extension developer wants to make some statistics available, a lot of the hard work will already be done for them.

And if an extension has the capability of combining its data with another extension’s data to create a new statistic, we’ll let RB-Stats manage all of that business.
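To make that concrete, here’s a hypothetical sketch of what such a registry could look like.  The names (`StatsRegistry`, `register_statistic`, `combine`) are mine, not the real Review Board extension API:

```python
class StatsRegistry:
    """Hypothetical central registry that RB-Stats might expose to other extensions."""

    def __init__(self):
        self._providers = {}

    def register_statistic(self, name, provider):
        """An extension registers a callable that returns its statistic's value."""
        self._providers[name] = provider

    def compute(self, name, **kwargs):
        return self._providers[name](**kwargs)

    def combine(self, new_name, names, combiner):
        """Build a derived statistic out of existing ones (e.g. defects / hour)."""
        def derived(**kwargs):
            return combiner(*(self.compute(n, **kwargs) for n in names))
        self._providers[new_name] = derived


# A Stopwatch-like extension and a FixIt-like extension might each register
# their data, and RB-Stats combines them into a new statistic:
registry = StatsRegistry()
registry.register_statistic("review_minutes", lambda **kw: 90)
registry.register_statistic("defects_found", lambda **kw: 6)
registry.combine("defects_per_hour", ["defects_found", "review_minutes"],
                 lambda defects, minutes: defects / (minutes / 60.0))
```

The point is the shape: one place for users to get stats, and one registration call for extension developers.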

Stopwatch

The reviewing-time feature of RB-Stats will become an extension on its own, and register its data with RB-Stats.  Once RB-Stats and Stopwatch are done, we should be feature equivalent with my demo.

Review Karma

I kind of breezed past this in my demo, but I’m interested in displaying “review karma”.  Review karma is the reviews/review-requests ratio.

But I’m not sure karma is the right word.  It suggests that a low ratio (many review requests, few reviews) is a bad thing.  I’m not so sure that’s true.

Still, I wonder what the impact will be to display review karma?  Not just in the RB-Stats statistics view, but next to user names?  Will there be an impact on review activity when we display this “reputation” value?
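As a formula, the karma described above is trivial; the only wrinkle is the user who has never filed a review request.  A sketch (my own take on handling that edge case, under the reviews/review-requests definition given above):

```python
def review_karma(reviews_done, review_requests_made):
    """Review karma: reviews done divided by review requests made.

    Returns None when the user hasn't filed any review requests, since the
    ratio is undefined there -- and arguably not "infinitely good" either.
    """
    if review_requests_made == 0:
        return None
    return reviews_done / review_requests_made
```

How (or whether) to display the undefined case is exactly the kind of question the "is karma the right word?" worry raises.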

FixIt

This is a big one.

Most code review tools allow reviewers to register “defects”, “todos” or “problems” with the code up for review.  This makes it easier for reviewees to keep track of things to fix, and things that have already been taken care of.  It’s also useful in that it helps generate interesting statistics like defect density and defect detection rate (assuming Stopwatch is installed and enabled).
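For the curious, here’s one common way those two metrics are defined; these particular formulas are my assumption, not necessarily what the extension will end up computing:

```python
def defect_density(defects_found, lines_reviewed):
    """Defects per thousand lines of code reviewed (one common definition)."""
    if lines_reviewed == 0:
        return 0.0
    return defects_found / (lines_reviewed / 1000.0)


def defect_detection_rate(defects_found, review_minutes):
    """Defects found per hour of reviewing -- this is the one that needs
    Stopwatch's timing data to be available."""
    if review_minutes == 0:
        return 0.0
    return defects_found / (review_minutes / 60.0)
```

So FixIt supplies the defect counts, Stopwatch supplies the time, and RB-Stats can derive both numbers from the two.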

I’m going to tackle this extension as soon as RB-Stats, Stopwatch and Karma are done.  At this point, I’m quite confident that the current extension framework can more or less handle this.

Got any more ideas for me?  Or maybe an extension wish-list?  Let me know.

Review Board Statistics Extension – Demo Time

If I’ve learned anything from my supervisor, it’s to demo. Demo often. Step out of the lab and introduce what you’ve been working on to the world. Hit the pavement and show, rather than tell.

So here’s a video of me demoing my statistics extension for Review Board.  It’s still in the early phases, but a lot of the groundwork has been taken care of.

And sorry for the video quality.  Desktop capture on Ubuntu turned out to be surprisingly difficult for my laptop, and that’s the best I could do.

So, without further ado, here’s my demo (click here if you can’t see it):

Not bad!  And I haven’t even reached the midterm of GSoC yet.  Still plenty of time to enhance, document, test, and polish.

If you have any questions or comments, I’d love to hear them.