My GSoC Project: Review Board Extensions

If you didn’t already know, Review Board is an open-source web-based code review tool.  The MarkUs Team has been using Review Board for pre-commit code review for about a year now.  This has given the team a number of advantages:

  1. For a team that usually has a four-month turnover, this allows us to quickly get new team members up to speed on how to contribute to MarkUs.  We review every change they propose, and give them tips and guidance on how to make it fit in well with the application.  They learn, and the application’s code stays healthy.
  2. We catch defects before they enter the code base.  Simple as that.
  3. We get a good sense of what other people are working on, and what is going on in the code.  Review Board has become a central conversation and learning hub for the developers on the MarkUs team.

So, the long and the short of it:  I like Review Board.  Review Board helps us write better code.  I want to make Review Board better.

So what am I proposing?

How to Avoid A Bloated Software Monster

You can never make some people happy.

No matter how decent your software is, someone will eventually come up to you and say:

Wow!  Your software would be perfect if only it had feature XYZ!  Sadly, because you don’t have feature XYZ, I can’t use it.  Please implement, k thx!

And so you either have to politely say “no”, and lose that user, or say “yes”, and add feature XYZ to the application.  And for users out there who don’t need, or don’t care about feature XYZ, that new feature just becomes a distraction and adds no value.  Make this happen a bunch of times, and you’ve got yourself a bloated mutha for a piece of software.

And we don’t want a bloated piece of software.  But we do want to make our users happy, and provide feature XYZ for them if they want it.

So what’s the solution?  We provide an extension framework (which is also sometimes called a plug-in architecture).

An extension framework allows developers to easily expand a piece of software to do new things.  So, if a user wants feature XYZ, we (or someone else) simply create and publish an extension that implements the feature.  The user installs the extension, activates it, and bam – our user is happy as a clam with their new feature.
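If you’ve never seen one, the general shape of a plug-in architecture is pretty simple.  Here’s a toy sketch in Python – every name in it is made up for illustration, and it’s nobody’s real framework:

```python
# A toy plug-in registry -- illustrative only, with made-up names.

class App:
    """A stand-in for the core application."""

    def __init__(self):
        self.features = {}

    def add_feature(self, name, func):
        self.features[name] = func


class FeatureXYZExtension:
    """A hypothetical extension implementing 'feature XYZ'."""

    def activate(self, app):
        # The extension hooks its feature into the core application.
        app.add_feature("xyz", lambda: "feature XYZ, at your service")


class ExtensionRegistry:
    """Tracks installed extensions; the user decides which to activate."""

    def __init__(self, app):
        self.app = app
        self.installed = {}

    def install(self, name, extension):
        self.installed[name] = extension

    def activate(self, name):
        self.installed[name].activate(self.app)


app = App()
registry = ExtensionRegistry(app)
registry.install("xyz", FeatureXYZExtension())
registry.activate("xyz")            # the user opts in
print(app.features["xyz"]())        # -> "feature XYZ, at your service"
```

The core application stays lean; the extension carries the fringe feature, and only the users who want it ever see it.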

And if we make it super-easy to develop them, third-party developers can write new, wonderful, interesting extensions to do things that…well, we wouldn’t have considered in the first place. It’s a new place for innovation.  What’s that old cliché?

If you build it [the plug-in framework], they will come [the third-party developers who write awesome things]

And the developers do come.  Just look at Firefox add-ons or WordPress plugins.  Entire ecosystems of extensions, doing things that the original developers would probably have never dreamed of doing on their own.  Hell, I’ve even written a Firefox add-on. And users love customizing their Firefox / WordPress with those extensions.  It adds value.

So we get wins all over the place:

  • Our user gets their feature
  • The software gets more attractive because it’s flexible and customizable
  • The original software developers get to focus on the core piece of software, and let the third-party developers focus on the fringe features

And this is where I think I can help Review Board.

(Before I go on, if you’re interested, here’s another article on the how and the why of plug-in architectures)

Review Board Extensions

So if you look at the Review Board Wiki, or glance at the mailing lists, you’ll see numerous requests from users for new features.  For example:

It would be nice if the review board had a “next comment” button that is always available to click, or had a collapse/expand button. This would make it easier to see other people’s comments in cases like this.

It will be nice to have post-commit support. Instead of every post-commit review being a separate URL, if we could setup default rules for post-commit reviews to update an existing review providing the diff-between-diff features, it would be very useful.

The Review Board developers could smell the threat of bloated feature-creep from a mile away.  So, in a separate branch, they began working on integrating an extension framework into Review Board.

The extension branch, however, has been gathering dust, while the developers focus on more critical patches and releases.

My GSoC proposal is to finish off a draft of the extension framework, document it, and build a very simple extension for it.  My simple extension will allow me to record basic statistics about Review Board reviewers – for example, how long they spend on a particular review, their inspection rate, etc.
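Since the extension framework is still a draft, I won’t guess at its final hook names.  But here’s a framework-agnostic sketch, in Python, of the bookkeeping my statistics extension will need – review_opened and review_published are hypothetical stand-ins for whatever hooks the framework ends up exposing:

```python
import time
from collections import defaultdict


class ReviewStats:
    """Per-reviewer bookkeeping: time spent on a review, and inspection
    rate (diff lines examined per hour)."""

    def __init__(self):
        self._opened = {}                   # (reviewer, request) -> start time
        self.records = defaultdict(list)    # reviewer -> list of stat dicts

    def review_opened(self, reviewer, request_id):
        # Would be called by the framework when a reviewer opens a request.
        self._opened[(reviewer, request_id)] = time.time()

    def review_published(self, reviewer, request_id, diff_lines):
        # Would be called by the framework when the review is published.
        started = self._opened.pop((reviewer, request_id), None)
        if started is None:
            return  # never saw this review start; nothing to record
        hours = (time.time() - started) / 3600.0
        self.records[reviewer].append({
            "request": request_id,
            "hours": hours,
            "inspection_rate": diff_lines / hours if hours else None,
        })
```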

Having been a project lead on MarkUs for so long, it’s going to be a good experience to be back on “the bottom” – to be the new developer who doesn’t entirely have a sense of the application code yet.  It’s going to be good to go code spelunking again.  I’ve done some preliminary explorations, and it’s reminding me of my first experiences with MarkUs.  Like a submarine using its sonar, I’m slowly getting a sense of the code terrain.

I’ll let you know what my first few sweeps find.

Ping!

I’ve done it again:  I’ve let dust gather on my blog.

Quick update:

  • I’ve finished my courses for this semester, and have gone into full-blown research mode.
  • My research proposal is going through ethics review, in order to make sure that I’m not going to blow things up (or hurt anybody if I do)
  • While my paperwork is being reviewed, I’m refining my procedure and apparatus.  Better and better.
  • I’ve been accepted into Google Summer of Code this year – I’ll be working on Review Board.  Details about my project will be the subject of an upcoming post, which I will toss up shortly.
  • I may or may not be co-directing a radio play.  I’ll let you know.
  • The MarkUs team is about to release version 0.7, and a fresh batch of Summer students will soon be here at UofT to work on it!
  • I have not forgotten about the UCDP trip to Poland.  I still have to tell you what we saw and did at Auschwitz.  Cripes – it’s almost a year since I returned, and I’m only half-way through the whole story.  And there’s a ton more to tell.  Coming soon.

Stay tuned.

Author Preparation in Code Review: What Are Those Authors Saying?

If you recall, I’m looking at author preparation in code review, and whether or not it impairs reviewers’ ability to perform effective, objective reviews.

If this is really going to be my research project, I’ll need to get my feet a bit more wet before I design my experiment.  It’s all well and good to say that I’m studying author preparation…but I need to actually get a handle on what authors tend to say when they prepare their review requests.

So how am I going to find out the kinds of things that authors write during author preparation?  The MarkUs Project and the Basie Project both use ReviewBoard, so it’ll be no problem to grab some review requests from there.  But that’s a lot of digging if I do it by hand.

So I won’t do it by hand.  I’ll write a script.

You see, I’ve become pretty good at manipulating the ReviewBoard API.  So mining the MarkUs and Basie ReviewBoard instances should be a cinch.

But I’d like to go a little further. I want more data.  I want data from some projects outside of UofT.

Luckily, ReviewBoard has been kind enough to list several open source projects that are also using their software.  And some of those projects have their ReviewBoard instances open to the public.  So I just programmed my little script to visit those ReviewBoard instances, and return all of the review requests where the author of the request was the first person to make a review.  Easy.
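Here’s a sketch of that filter in Python.  It assumes the JSON web API paths of newer Review Board releases (/api/review-requests/ and each request’s reviews list), and that the instance is readable anonymously – treat the exact paths and field names as assumptions, since they vary by version:

```python
import json
from urllib.request import urlopen


def author_prepared_requests(base_url):
    """Yield IDs of review requests whose first review came from the
    request's own submitter."""
    url = base_url + "/api/review-requests/?status=all&max-results=50"
    while url:
        page = json.load(urlopen(url))
        for rr in page["review_requests"]:
            reviews = json.load(
                urlopen(rr["links"]["reviews"]["href"]))["reviews"]
            if not reviews:
                continue
            first_reviewer = reviews[0]["links"]["user"]["title"]
            submitter = rr["links"]["submitter"]["title"]
            if first_reviewer == submitter:
                yield rr["id"]
        # Follow the pagination link, if any.
        url = page["links"].get("next", {}).get("href")


# Hypothetical instance URL, for illustration only.
for request_id in author_prepared_requests("http://reviewboard.example.com"):
    print(request_id)
```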

Besides MarkUs and Basie, I chose to visit the Asterisk, KDE, and MusicBrainz projects.

Asterisk was a bust – of all of its review requests, not a single one returned a positive.

But I got a few blips on the others. Not many, but a few.

I read all of the author preparation for each blip, and broke down what I read into some generalizations.

So, now to the meat:  here are some generalizations of what the authors tended to say, in no particular order.  I’ve also included a few examples so you can check them out for yourselves.

“Here’s why I did this”

The author makes it explicit why a change was made in a particular way.

Examples:

“Here’s what this part does…”

The author goes into detail about what a portion of their diff actually does.

Examples:

“Can I get some advice on…”

The author isn’t entirely sure of something, and wants input from their peers.

Examples:

“Whoops, I made a mistake / inserted a bug.  I’ll update the diff.”

The author has found a mistake in their code, and either indicates that they’ll update the diff in the review request, or change the code before it is committed.

“Whoops – that stuff isn’t supposed to be there.  Ignore.”

The author has accidentally inserted some code into the diff that they shouldn’t have.  They give their assurances that it’ll be removed before committing – reviewers are asked to ignore it.

Examples:

“Before you apply this patch, you should probably…”

The author believes that the reviewers will need to do something special, or out of the ordinary, in order to apply the diff.

“…hello?”

The review request has been idle for a while without a single review.  The author pings everybody for some attention.

Examples:

Anyhow, those are the general patterns that stand out.  I’ll post more if I find any.

Have you seen any other common patterns in author preparation?  What would you say, if you were preparing your code for someone else to review?  I’d love to hear any input.

PS:  If anyone is interested in getting the full list of author prepared review requests for these 4 projects, let me know, and I’ll toss up all the links.

Take Those Code Review Requests for a TestDrive…

Remember how I wrote a while back that I wanted to write a script to let me do some quick and easy pre-commit continuous integration with the MarkUs project?

Well, I think I just wrote one.

Introducing TestDrive…

TestDrive will fetch a review request, grab the latest diff (yes, I found an easy way around the lack of an API there), check out a fresh copy of MarkUs, apply the diff, set it up with some SQLite3 databases, run your tests, and voila – go to localhost:3000, and you’re running the review request diff.
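To give you a sense of the flow, here’s a stripped-down sketch of what TestDrive does.  The real script is more careful than this – and the repository URL, rake tasks, and server command below are illustrative assumptions about a Rails-era MarkUs checkout:

```python
import subprocess
from urllib.request import urlretrieve


def test_drive(request_id,
               rb_url="http://review.example.com",
               repo_url="http://svn.example.com/markus/trunk"):
    # 1. Grab the latest diff.  Review Board serves a raw diff at
    #    /r/<id>/diff/raw/, which is the easy way around the missing API.
    urlretrieve(f"{rb_url}/r/{request_id}/diff/raw/", "request.diff")

    # 2. Check out a fresh copy of MarkUs.
    subprocess.run(["svn", "checkout", repo_url, "markus"], check=True)

    # 3. Throw down the diff on top of the fresh checkout.
    subprocess.run(["patch", "-p0", "-i", "../request.diff"],
                   cwd="markus", check=True)

    # 4. Set up the SQLite3 databases and run the test suite.
    subprocess.run(["rake", "db:setup"], cwd="markus", check=True)
    subprocess.run(["rake", "test"], cwd="markus", check=True)

    # 5. Boot the app -- then browse to http://localhost:3000.
    subprocess.run(["script/server"], cwd="markus", check=True)
```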

I’ve been using it myself for about a week or so, and so far, it’s helped me catch a number of bugs that I wouldn’t have caught just by looking at the code in ReviewBoard.  Nice.

Click here to check out TestDrive.

Screencasting Code Reviews is Hard

I’ve been trying to record myself performing code reviews for The MarkUs Project.

It’s a lot harder than I thought it’d be.  The screencasts are really only useful if I’m saying what I’m thinking, and I’m finding it difficult to maintain a stream-of-consciousness narration while performing an effective, thorough review.  The last few times I’ve tried it, I’ve found myself blurting an expletive, stopping the recording in frustration, and then starting the review over so that I can do a good, proper job.

I think this is going to take more practice.