Category Archives: Technology

ReviewBoard Extensions: A State of Affairs

One of my potential research experiments involves augmenting ReviewBoard, so I’ve been studying the ReviewBoard code.

I want to give ReviewBoard the ability to record certain statistics – for example, the number of defects in a review request, number of defects found per reviewer, defect density, inspection rate, etc…
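
To make that concrete, here is the kind of calculation I have in mind: a rough Python sketch using the usual definitions of defect density (defects per thousand lines of code) and inspection rate (lines of code reviewed per hour).  The numbers are made up, and none of this hooks into ReviewBoard’s actual models.

    # Rough sketch of the statistics I'd like ReviewBoard to record.
    # All numbers below are made up for illustration.

    def defect_density(defects_found, lines_of_code):
        """Defects per thousand lines of code (kLOC)."""
        return defects_found / (lines_of_code / 1000.0)

    def inspection_rate(lines_of_code, review_hours):
        """Lines of code inspected per hour of review."""
        return lines_of_code / float(review_hours)

    # e.g. a 400-line review request, reviewed over 2 hours, with 6 defects found
    print(defect_density(6, 400))      # 15.0 defects per kLOC
    print(inspection_rate(400, 2))     # 200.0 LOC per hour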

It turns out I’m not the only one who wants this to happen.  Check out this thread.

As it happens, the devs building ReviewBoard are planning to integrate an extension framework.  Theoretically, users could then write their own extensions to customize ReviewBoard as they see fit.  Think WordPress plugins, but for a code review tool.  Neat.

According to the above link:

A lot of this [extension framework] already exists in a private development branch, and it will be one of our primary focuses as soon as 1.0 goes out.

However, I found the following quote from a relatively recent mailing list message:

[with regards to the extensions framework] …We have to finish building it, testing it, getting other features we have planned into the release, and performing the release.  So, we’re looking at a year or more.  But I don’t see how we’d get the functionality you want short-term.

Hrmph.  My thinking:  maybe I can help them with it, and we can crank this puppy out sooner.

So where is this private branch?  It actually took a little bit of detective work…

Detective Work

I started by looking at the ReviewBoard repo on Github, which I assumed the extensions branch would be forked from.

I was right – check out the fork network.  There’s a branch helpfully labeled “extensions” in a fork by davidt, one of the ReviewBoard devs.

Two observations:

  1. This fork hasn’t been updated in about a month.
  2. The extensions code references pieces of Djblets that don’t exist in the Djblets master branch.

So I performed the same exercise: I glanced at the Djblets fork network, and found the handily named “extensions” branch in davidt’s Djblets fork.  Nice.

Ok, so those are the forks/branches that I’m going to grab.

A Fluke

So, funny story:  I was able to get this “extension” version of ReviewBoard working on my work machine.  But it was a complete fluke – this extensions version of Djblets is actually missing some pieces.  The only reason it worked for me was that I accidentally ran with the master checkout of Djblets first.  That run errored out, but it also left behind a bunch of compiled Python.  When I then switched to the extensions branch, the leftover compiled Python filled in the pieces that are missing from the extensions branch of Djblets, and so it ran.

For a guy who is pretty new to both Git and Django, I found that problem pretty frustrating to track down, especially once I started writing up the checkout instructions.  Big thanks to Zuzel for her invaluable (and patient) Django guidance.

Anyhow, if you’re interested in the steps to failure, here they are…

Setting Up (for failure)

I’m running Ubuntu Karmic.  I have Git installed.

  1. First of all, this document is invaluable
  2. Check out django-evolution from SVN to some local directory
  3. Put path to django-evolution into PYTHONPATH (export PYTHONPATH=$PYTHONPATH:/…/django-evolution-read-only/)
  4. Get Python Setuptools (sudo apt-get install python-setuptools)
  5. With easy_install, get nose, paramiko, recaptcha-client, Sphinx (sudo easy_install nose paramiko recaptcha-client sphinx)
  6. Clone the original master Djblets repo (git clone git://github.com/djblets/djblets.git) and cd into the djblets directory
  7. Add davidt’s remote Djblets branch (git remote add davidt git://github.com/davidt/djblets.git)
  8. Fetch davidt’s repo  (git fetch davidt)
  9. Track davidt’s extensions branch for djblets (git branch extensions davidt/extensions)
  10. In djblets root dir, type:  sudo python setup.py develop
  11. This should install Django too.  The ReviewBoard install doc says to install Django from Subversion, but this will do for now.
  12. Clone the ReviewBoard git repo (git clone git://github.com/reviewboard/reviewboard.git) and cd into it
  13. Add davidt’s remote Reviewboard branch (git remote add davidt git://github.com/davidt/reviewboard.git)
  14. Fetch davidt’s repo  (git fetch davidt)
  15. Track davidt’s extensions branch of reviewboard (git branch extensions davidt/extensions)
  16. Switch to extensions branch (git checkout extensions)
  17. So reviewboard/reviewboard/extensions should have stuff in it.
  18. Run python ./contrib/internal/prepare-dev.py
  19. Manually create the directory reviewboard/reviewboard/htdocs/media/ext.  ReviewBoard expects it and will complain if it isn’t there.
  20. Run ./contrib/internal/devserver.sh
  21. Go to localhost:8080

If you’re like me, you’ll see an error about “no module named urls”.  This is because the extensions branch of Djblets is missing urls.py in its djblets/log directory.  As you can see, urls.py exists in the master Djblets branch.
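
If you’d rather not rely on leftover compiled Python, a placeholder module seems like it should get you past the import error.  To be clear, the snippet below is my own minimal stand-in, not the file from master; the proper fix is to copy or merge the real urls.py from the master Djblets branch.

    # djblets/log/urls.py -- my own minimal stand-in, NOT the real file from
    # the master Djblets branch; the proper fix is to bring that file over.
    from django.conf.urls.defaults import patterns

    # An empty pattern list is enough to satisfy the import.
    urlpatterns = patterns('')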

So I’ve accidentally got it running.  But this version of ReviewBoard I’ve created is like Frankenstein’s monster – it’s a sad, stitched-together monstrosity that should probably never see the light of day.

I tried to merge the master branch of Djblets and ReviewBoard into the extensions branch, and then realized that I had no idea what I was doing.  The last guy you want doing a merge is a guy who doesn’t know what he’s doing.

So that’s where I stand with it.  Kind of a sad affair, really.  I’ve posted on the ReviewBoard developer mailing list asking for some guidance, but all is quiet so far.

In the meantime, I’ll continue studying the code when I can.  The extensions stuff is actually pretty far along, in my opinion – but what do I know, I’ve never built one.  I’ll also take a peek at some other examples of extension frameworks to see what I can learn about implementation (that’s right – I’m talking about you, WordPress).

I’ll let you know what I find.

Turning Peer Code Review into a Game

A little while back, I wrote about an idea that a few of us had been bouncing around:  peer code review achievements.

It started out as a bit of Twitter fun – but now it has evolved, and actually become a contender for my Masters research.

So I’ve been reading up on reputation and achievement systems, and it’s been keeping me up at night.  I’ve been tossing and turning, trying to figure out a way of applying these concepts to something like ReviewBoard.  Is there a model that will encourage users to post review requests early and often?  Is there a model that will encourage more thorough reviews from other developers?

An idea eventually sprang to mind…

Idea 1:  2 Week Games

In a speech he gave a few years ago, Danc of Lost Garden placed the following bet:

  • If an activity can be learned…
  • If the player’s performance can be measured…
  • If the player can be rewarded or punished in a timely fashion…

…then any activity that meets these criteria can be turned into a game.

Let’s work off of this premise.

Modeled on the idea of a sprint or iteration, let’s say that ReviewBoard has “games” that last 2 weeks.

In a game, users score points in the following way:

  • Posting a review request that eventually gets committed gives the author 1 point
  • A review request that is given a ship-it, without a single defect found, gives the author The 1.5 Multiplier on their total points.  The 1.5 Multiplier can be stolen by another player if they post a review request that also gets a ship-it without any defects being found.
  • Any user can find/file defects on a review request
  • A defect must be “confirmed” by the author, or “withdrawn” by the defect-finder.
  • After a diff has been updated, “confirmed” defects can be “fixed”.  Each fixed defect gives the defect-finder and author 1 point each.

After two weeks, a final tally is made, achievements / badges are doled out, and the scores are reset.  A new game begins.  Users can view their point history and track their performance over time.
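
To make the scoring concrete, here is a rough sketch of how the end-of-game tally might work.  This is just me thinking out loud in Python: the event names and structure are invented, and none of it hooks into ReviewBoard.

    # Rough sketch of the two-week game tally.  Event names and structure are
    # invented for illustration; nothing here touches ReviewBoard's real models.

    def tally_game(events):
        """Tally each player's score from a list of game events."""
        scores = {}
        multiplier_holder = None  # current owner of The 1.5 Multiplier

        for event in events:
            kind = event['kind']
            if kind == 'request_committed':
                # A review request that eventually gets committed: 1 point.
                scores[event['author']] = scores.get(event['author'], 0) + 1
            elif kind == 'clean_ship_it':
                # Ship-it with no defects found: the author takes The 1.5 Multiplier.
                multiplier_holder = event['author']
            elif kind == 'defect_fixed':
                # A confirmed defect gets fixed: 1 point each for finder and author.
                for player in (event['finder'], event['author']):
                    scores[player] = scores.get(player, 0) + 1

        # Apply The 1.5 Multiplier to whoever holds it when the game ends.
        if multiplier_holder in scores:
            scores[multiplier_holder] *= 1.5
        return scores

    events = [
        {'kind': 'request_committed', 'author': 'mike'},
        {'kind': 'defect_fixed', 'finder': 'zuzel', 'author': 'mike'},
        {'kind': 'clean_ship_it', 'author': 'zuzel'},
        {'kind': 'request_committed', 'author': 'zuzel'},
    ]
    print(tally_game(events))   # e.g. {'mike': 2, 'zuzel': 3.0}

Resetting the scores every two weeks would simply mean starting over with a fresh list of events.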

Granted, this game is open to cheating.  But so is Monopoly.  I can reach into the Monopoly bank and grab $500 without anybody noticing.  It’s up to me not to do that, because it invalidates the game.  In this case, cheating would only result in bad morale and a poorer piece of software.  And since scores are reset every two weeks, what’s the real incentive to cheat?

Idea 2:  Track My Performance

I’ve never built a reputation system before – but Randy Farmer and Bryce Glass have.  They’ve even written a book about it.

Just browsing through their site, I’m finding quotes that suggest there are some potential problems with my two-week game idea.  In particular, I have not considered the potentially harmful effects of displaying “points” publicly on a leaderboard.

According to Farmer / Glass:

It’s still too early to speak in absolutes about the design of social-media sites, but one fact is becoming abundantly clear: ranking the members of your community – and pitting them one-against-the-other in a competitive fashion – is typically a bad idea. Like the fabled djinni of yore, leaderboards on your site promise riches (comparisons! incentives! user engagement!!) but often lead to undesired consequences.

They go into more detail here.

Ok, so let’s say that they’re right.  Then how about instead of pitting the reviewers against one another, I have the reviewers compete against themselves?

Ever played Wii Sports?  It tracks player performance on various games and displays it on a chart.  It’s really easy to see / track progress over time.  It’s also an incentive to keep performance up – because nobody wants to go below the “Pro” line.

So how about we just show users a report of their performance over fixed time intervals…with fancy jQuery charts, etc?
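
Under the hood, I’m picturing something like the sketch below: bucket one user’s defect-finding into fixed intervals, and hand the series off to a chart.  The field names and data are made up; a real version would pull from ReviewBoard’s database.

    # Rough sketch of a per-user performance report over fixed intervals.
    # The review data below is invented for illustration.

    from datetime import date, timedelta

    def performance_series(reviews, start, interval_days=14):
        """Count defects found per fixed interval for one user."""
        buckets = {}
        for review in reviews:
            index = (review['date'] - start).days // interval_days
            buckets[index] = buckets.get(index, 0) + review['defects_found']
        # Return (interval start date, defects found) pairs, oldest first.
        return [(start + timedelta(days=i * interval_days), buckets[i])
                for i in sorted(buckets)]

    reviews = [
        {'date': date(2010, 1, 4), 'defects_found': 3},
        {'date': date(2010, 1, 12), 'defects_found': 1},
        {'date': date(2010, 1, 20), 'defects_found': 5},
    ]
    for interval_start, defects in performance_series(reviews, date(2010, 1, 4)):
        print(interval_start, defects)   # one (date, count) pair per interval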

So what?

Is either of these ideas useful?  Would they increase the number of defects found per review request?  Would they increase the frequency and speed of reviews?  Would they improve user perception of peer code review?  Would they just be ignored?  Or could they even harm a team of developers?  What are the benefits and drawbacks?

If anything, it’d give ReviewBoard some ability to record metrics, which is handy when you want to show the big boss how much money you’re saving with code review.

Might be worth looking into.  Thoughts?

Research Proposal #1: The Effects of Author Preparation in Peer Code Review

The Problem Space

Click here to read about my problem space

Related Work

During his study at Cisco Systems, Jason Cohen noticed that review requests with some form of author preparation consistently had fewer defects found in them.

Jason Cohen explains what author preparation is…

The idea of “author preparation” is that authors should annotate their source code before the review begins.  Annotations guide the reviewer through the changes, showing which files to look at first and defending the reason and methods behind each code modification.  The theory is that because the author has to re-think all the changes during the annotation process, the author will himself uncover most of the defects before the review even begins, thus making the review itself more efficient.  Reviewers will uncover problems the author truly would not have thought of otherwise.

(Best Kept Secrets of Peer Code Review, p80-81)

Cohen gives two theories to account for the drop in defects:

  1. By performing author preparation, authors were effectively self-reviewing, and removed defects that would normally be found by others.
  2. Since authors were actively explaining, or defending, their code, this sabotaged the reviewers’ ability to do their jobs objectively and effectively.  There is a “blinding effect”.

In his study, Cohen subscribes to the first theory.  He writes:

A survey of the reviews in question show the author is being conscientious, careful, and helpful, and not misleading the reviewer.  Often the reviewer will respond to or ask a question or open a conversation on another line of code, demonstrating that he was not dulled by the author’s annotations.

While it’s certainly possible that Cohen is correct, the evidence to support his claim is tenuous at best, as it suffers from selection bias, and has not been drawn from a properly controlled experiment.

What do I want to do?

I want to design a proper, controlled experiment in an attempt to figure out exactly why the number of found defects drops when authors prepare their review requests.

My experiment is still being designed, but at its simplest:

We devise a review request with several types of bugs intentionally inserted.  We create “author preparation” commentary to go along with the review request.  We show the review request to a series of developers – giving the author preparation to some and not to others – and ask the developers to perform a review.

We then take measurements of the number/type/density of the defects that they find.
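
At the analysis stage, the comparison would look roughly like the sketch below.  The numbers are invented placeholders; the real data would come from the actual review sessions, and a proper analysis would also need a significance test and controls for reviewer experience.

    # Rough sketch of comparing the two groups.  All numbers are placeholders.

    def summarize(group):
        """Mean defects found, and defect density (defects per kLOC reviewed)."""
        total_defects = sum(r['defects_found'] for r in group)
        mean_defects = total_defects / float(len(group))
        density = total_defects / (sum(r['loc_reviewed'] for r in group) / 1000.0)
        return mean_defects, density

    with_prep = [{'defects_found': 2, 'loc_reviewed': 400},
                 {'defects_found': 3, 'loc_reviewed': 400}]
    without_prep = [{'defects_found': 5, 'loc_reviewed': 400},
                    {'defects_found': 4, 'loc_reviewed': 400}]

    print(summarize(with_prep))      # (2.5, 6.25)
    print(summarize(without_prep))   # (4.5, 11.25)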

Why do you care?

If it is shown that author preparation does not negatively affect the number of defects that the reviewers find, this is strong evidence to support Cohen’s claim that author preparation is beneficial.  The practice can then be adopted and argued for in order to increase the effectiveness of code reviews.

On the other hand, if it is shown that author preparation negatively affects the number of defects that the reviewers find, this has some interesting consequences.

The obvious one is the conclusion that authors should not prepare their review requests, so as to maximize the number of defects that their reviewers find.

The less obvious one takes the experimental result a step further.  Why should this “blinding effect” stop at author preparation?  Perhaps a review by any participant will negatively affect the number of defects found by subsequent reviewers?  The experiment will be designed to investigate this possibility as well.

Either way, the benefits or drawbacks of author preparation will hopefully be revealed, to the betterment of the code review process.

Research Proposal: My Problem Space

I want to talk about peer code review.

The code inspection process was formally brought to light by Michael Fagan in the 1970s, when he showed that code inspection improves the quality of source code.  Code inspection, coupled with rigorous testing / QA, helps to reduce the number of defects in a piece of software before it is released – which is really the cheapest time to find and fix those defects.

Jason Cohen took Fagan’s inspection technique out of the conference room, and helped to bring it online.  After a study at Cisco Systems, he found (among other things) that lightweight code reviews were just as effective as (or more effective than) Fagan inspections, and took less time.

There are a myriad of lightweight peer code review tools available now.  Code review has become a more common software development practice.*

That’s really great.  But how can we make it better? Here are some research project proposals…

*For more information on code review, I’ve written ad nauseam about it…

Peer Code Review Achievements

First off, check this out:  Unit Testing Achievements for Python

Cool:  rewards for testing (or, in some cases, breaking the build…see the “Complete Failure” achievement).

And so my brain instantly goes to:  why can’t this be applied to peer code review?

@gvwilson, @wolever, @bwinton, and I have been tossing the idea around on Twitter.  Here’s a list of the achievements that we have come up with so far:

  • “The Congress Award”:  review request with the longest discussion thread
  • “The Two Cents Award”:  most review participants
  • “The One-Liner Award”:  diff has only a single line
  • “The Difference-Maker Award”:  biggest diff
  • “The Awed Silence Award”:  oldest review request without a single review
  • “The I-Totally-Rock Award”:  author gives themselves a ship-it
  • “The New Hire Award”:  user has submitted 5 patches, but hasn’t reviewed any code yet
  • “The Manager Award”: reviewed patches to submitted patches ratio > 5
  • “The Code Ninja Award”:  biggest diff to get a ship-it and commit 1st time through
  • “The Persistence Award”:  diff which gets >5 reviews before being accepted
  • “The Bulwer-Lytton Award”:  for the most long-winded review
  • “The Ten-Foot-Pole Award”:  for the scariest review

Can you think of any more?
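
Just for fun, here is a rough sketch of what awarding a couple of these might look like.  The fields on the user and review request records are invented for illustration; ReviewBoard’s real models don’t necessarily look anything like this.

    # Rough sketch of awarding a few achievements.  The 'user' and 'requests'
    # fields are invented for illustration, not ReviewBoard's real models.

    def achievements_for(user, requests):
        earned = []
        if user['patches_submitted'] >= 5 and user['patches_reviewed'] == 0:
            earned.append('The New Hire Award')
        if user['patches_submitted'] and (
                user['patches_reviewed'] / float(user['patches_submitted']) > 5):
            earned.append('The Manager Award')
        if any(r['diff_lines'] == 1 for r in requests):
            earned.append('The One-Liner Award')
        return earned

    user = {'patches_submitted': 5, 'patches_reviewed': 0}
    requests = [{'diff_lines': 1}, {'diff_lines': 240}]
    print(achievements_for(user, requests))
    # ['The New Hire Award', 'The One-Liner Award']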

And while it’s fun to think of these, what would the effects of achievements be on code review adoption?  Sounds like an interesting thesis topic.

Or a GSoC project.