Category Archives: GSoC

Look down. Now back up again. The Defects are now Issues.

From my software development experience, there are a few different words for the generic notion of a “problem”.

A bug or a defect, for example, is defined as the following from Wikipedia:

A software bug is the common term used to describe an error, flaw, mistake, failure, or fault in a computer program or system that produces an incorrect or unexpected result, or causes it to behave in unintended ways. Most bugs arise from mistakes and errors made by people in either a program's source code or its design, and a few are caused by compilers producing incorrect code.

An issue goes a level higher – a bug is an issue, but an issue might not be a bug.  Wikipedia says:

In computing, the term issue is a unit of work to accomplish an improvement in a system. An issue could be a bug, a requested feature, task, missing documentation, and so forth. The word “issue” is popularly misused in lieu of “problem.” This usage is probably related.

Where am I going with all of this?

Well, remember when I said I was going to add defect reporting/tracking capabilities to Review Board?  I asked for some feedback on my UI mockups on the developer mailing list, and an interesting conversation on terminology erupted.

Anyhow, the long and the short of it is – we’re going to be calling “problems still existing within a review request revision” issues.  And this is distinct from the sort of thing that might show up in an issue tracker.

Maybe down the line, we’ll have a way for administrators to set their own word for it.  From the thread, it sounds like everybody and their brother has their own favourite terminology.  “Issue” will have to do for now.

Thanks again to everyone on the list who contributed to the conversation.

Review Board Statistics Extensions: Karma, Stopwatch, and FixIt

I just spent the long weekend in Ottawa and Québec City with my parents and my girlfriend Em.

During the long drive back to Toronto from Québec City, I had plenty of time to think about my GSoC project, and where I want to go with it once GSoC is done.

Here’s what I came up with.

Detach Reviewing Time from Statistics

I think it’s a safe assumption that my reviewing-time extension isn’t going to be the only one to generate useful statistical data.

So why not give extension developers an easy mechanism to display statistical data for their extension?

First, I'm going to extract the reviewing-time recording portion of the extension. Then RB-Stats (or whatever I end up calling it) will introduce its own set of hooks for other extensions to register with.  This way, if users want some stats, there will be one place to go to get them.  And if an extension developer wants to make some statistics available, a lot of the hard work will already be done for them.
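To make that a little more concrete, here's the kind of registration API I have in mind. None of this exists yet; the class and function names below are placeholders for an API I still have to design:

# Hypothetical sketch of an RB-Stats hook. None of these names exist
# in Review Board or in my extension yet.

class StatisticHook(object):
    """Lets another extension contribute a named statistic to RB-Stats."""

    def __init__(self, extension, stat_id, label, compute_fn):
        self.extension = extension
        self.stat_id = stat_id        # e.g. "reviewing-time"
        self.label = label            # e.g. "Time spent reviewing"
        self.compute_fn = compute_fn  # callable(review_request) -> value

    def compute(self, review_request):
        return self.compute_fn(review_request)


# A Stopwatch-style extension might register its statistic like so:
def total_reviewing_seconds(review_request):
    # Sum up the recorded session durations for this review request.
    return 0  # placeholder

# stopwatch_hook = StatisticHook(stopwatch_extension, "reviewing-time",
#                                "Time spent reviewing",
#                                total_reviewing_seconds)

The point is that Stopwatch (and anything else) only has to hand RB-Stats a label and a way to compute a value; RB-Stats takes care of where and how it gets displayed.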

And if an extension has the capability of combining its data with another extension's data to create a new statistic, we'll let RB-Stats manage all of that business.

Stopwatch

The reviewing-time feature of RB-Stats will become an extension on its own, and register its data with RB-Stats.  Once RB-Stats and Stopwatch are done, we should be feature equivalent with my demo.

Review Karma

I kind of breezed past this in my demo, but I’m interested in displaying “review karma”.  Review karma is the reviews/review-requests ratio.

But I’m not sure karma is the right word.  It suggests that a low ratio (many review requests, few reviews) is a bad thing.  I’m not so sure that’s true.

Still, I wonder what the impact of displaying review karma will be.  Not just in the RB-Stats statistics view, but right next to user names.  Will review activity change when we display this “reputation” value?
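For what it's worth, the calculation itself is trivial. A rough sketch (the counts would come out of Review Board's database; this function is purely illustrative):

def review_karma(reviews_written, review_requests_filed):
    """Reviews-to-review-requests ratio for a single user."""
    if review_requests_filed == 0:
        return None  # no requests filed yet; the ratio is undefined
    return reviews_written / float(review_requests_filed)

# e.g. 12 reviews written against 30 review requests filed gives 0.4

The hard part isn't the math, it's deciding whether showing the number helps or hurts.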

FixIt

This is a big one.

Most code review tools allow reviewers to register “defects”, “todos” or “problems” with the code up for review.  This makes it easier for reviewees to keep track of things to fix, and things that have already been taken care of.  It’s also useful in that it helps generate interesting statistics like defect density and defect detection rate (assuming Stopwatch is installed and enabled).
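For the curious, here's roughly what I mean by those two statistics. These are the usual definitions from the inspection literature, sketched as plain functions; the real extension would pull the numbers out of the database:

def defect_density(defects_found, lines_reviewed):
    """Defects found per thousand lines of code (kLOC) reviewed."""
    if lines_reviewed == 0:
        return None
    return defects_found / (lines_reviewed / 1000.0)


def defect_detection_rate(defects_found, reviewing_seconds):
    """Defects found per hour of reviewing time recorded by Stopwatch."""
    if reviewing_seconds == 0:
        return None
    return defects_found / (reviewing_seconds / 3600.0)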

I’m going to tackle this extension as soon as RB-Stats, Stopwatch and Karma are done.  At this point, I’m quite confident that the current extension framework can more or less handle this.

Got any more ideas for me?  Or maybe an extension wish-list?  Let me know.

Review Board Statistics Extension – Demo Time

If I’ve learned anything from my supervisor, it’s to demo. Demo often. Step out of the lab and introduce what you’ve been working on to the world. Hit the pavement and show, rather than tell.

So here’s a video of me demoing my statistics extension for Review Board.  It’s still in the early phases, but a lot of the groundwork has been taken care of.

And sorry for the video quality.  Desktop capture on Ubuntu turned out to be surprisingly difficult for my laptop, and that’s the best I could do.

So, without further ado, here’s my demo (click here if you can’t see it):

Not bad!  And I haven’t even reached the midterm of GSoC yet.  Still plenty of time to enhance, document, test, and polish.

If you have any questions or comments, I’d love to hear them.

GSoC Update: My Review Board Statistics Extension

The Primary Goal

From the very beginning, my GSoC project has been mainly focused towards one primary goal:  I want to build an extension for Review Board that will allow me to collect information about how long reviewers actually spend reviewing code.

That’s easier said than done.  When I started, the Review Board extension framework wasn’t really in a state to allow such an extension to exist.

So I’ve been tooling around in the Review Board code for the past 2 months, preparing the framework, and getting it ready to handle my extension.

And last night, it started to work.  I can now give rough estimates on how long a reviewer has spent reviewing code.

How It Works

My extension adds a new table to the database which stores “reviewing sessions”.  Each reviewing session is associated with a particular review request and user, and also has a field to store the number of seconds that the user has spent in review.
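As a rough sketch, the model looks something like this. The field names are illustrative rather than the real ones, and I'm using a plain integer column where the actual extension points at Review Board's own models:

from django.contrib.auth.models import User
from django.db import models


class ReviewingSession(models.Model):
    """One stretch of reviewing activity by one user on one review request."""

    user = models.ForeignKey(User)
    review_request_id = models.IntegerField()  # stands in for a real FK
    seconds_spent = models.IntegerField(default=0)
    last_activity = models.DateTimeField(auto_now=True)

    # Filled in once the session gets attached to a published review
    # (see the publish step further down).
    review_id = models.IntegerField(null=True, blank=True)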

I've created a TemplateHook that allows me to inject JavaScript into key areas of Review Board (in particular, the diff viewer and the screenshot viewer).  The JavaScript does the following:  every 10 seconds, we check to see if the mouse has moved over the body of the HTML document.  If it has, we send an “activity” notification to the server.

The server receives this activity notification through the Web API and checks whether the time elapsed since the last session update is greater than 10 seconds.  If it is, we increment the working session by 10 seconds and return a 200 HTTP code.  If it isn't, we don't change anything and return a 304 HTTP code.
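In simplified form, the server-side check behaves like this. The real logic lives inside a Web API resource rather than a bare function, and the field names come from my hypothetical model sketch above:

from datetime import datetime, timedelta

ACTIVITY_INTERVAL = timedelta(seconds=10)


def handle_activity_ping(session):
    """Return the HTTP status code the activity endpoint responds with."""
    now = datetime.now()

    if now - session.last_activity > ACTIVITY_INTERVAL:
        session.seconds_spent += 10
        session.last_activity = now
        session.save()
        return 200  # session updated
    return 304      # too soon since the last update; nothing recorded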

Next, my extension waits for a user to publish a review.  When it notices that a review is being published, it finds the working session for that user and review request, and then attaches it to the published review.  If the user then starts looking at the diff or screenshots again, a new working session is created.
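The publish step is then just a lookup plus an update, roughly along these lines. How the extension learns that a review is being published (a Review Board signal, in my case) is left out, the query assumes my hypothetical model above, and I'm guessing at the attributes on the review object:

def attach_session_to_review(review):
    """Attach the open working session to the review that was just published."""
    try:
        session = ReviewingSession.objects.get(
            user=review.user,
            review_request_id=review.review_request.id,
            review_id__isnull=True)  # the still-open working session
    except ReviewingSession.DoesNotExist:
        return

    session.review_id = review.id
    session.save()
    # The next activity ping from this user starts a fresh session.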

The result?  A pretty decent estimate of how long a user has spent reviewing the code.  No time gets recorded if the user gets up and has a sandwich.  No time gets recorded if the user is on another tab reading Reddit.

[Image: how reviewing time is displayed to the user]

Not bad.  For a first draft, anyhow.

I think I’m going to try to chart the data somehow, so that users can track their inspection rates.  I’ll let you know how that goes.

Review Board Tests: SCMTool Segmentation Fault

I can't honestly say I've been doing much test-driven development on Review Board.  Django is a really cool web framework, but I miss all of the nice testing tools from the Rails ecosystem.

Anyhow, today, I decided to run the tests, and BAM – Segmentation Fault:

Testing Perforce binary diff parsing … SKIP: perforce/p4python is not installed
Testing PerforceTool.get_changeset … SKIP: perforce/p4python is not installed
Testing Perforce empty and normal diff parsing … SKIP: perforce/p4python is not installed
Testing Perforce empty diff parsing … SKIP: perforce/p4python is not installed
Testing PerforceTool.get_file … SKIP: perforce/p4python is not installed
Testing parsing SVN diff with binary file … ok
Testing SVNTool.get_file … Segmentation Fault

Yikes.  Getting a segfault in Python is unusual, and a little jarring.  However, it did let me narrow down the problem a bit:  it must have to do with the pysvn bindings that Review Board uses to talk to Subversion, because those are written in C++ (which can certainly segfault).

The machine I was using had Ubuntu Hardy on it, and the Hardy packages have pysvn 1.5.2-1.  Taking a look at the pysvn homepage, the most recent version is actually 1.7.2.

So, I uninstalled pysvn, downloaded the source for 1.7.2, compiled it, and installed it manually.

Segfault gone.  Tests pass.  Awesome sauce.