Tag Archives: extensions

Day 2 at Mozilla Messaging: pbranch, testing, teleconferencing, intranet, and more testing

Ok, today was my second day at Mozilla Messaging.  Another good day.  Here are some highlights:

Today, I started off by wanting to learn a few things:

  1. How to use pbranch to locally commit my Firefox patch from yesterday
  2. How to write tests for my patch using Mochitest

I started with the first one.

So, for the most part, Mozilla uses Mercurial as its distributed version control system.  I’ve been using Git (arguably Mercurial’s main competitor) since last summer with both MarkUs and Review Board.  Mercurial is something quite different.  Quite different indeed.

pbranch is a tool that gives me a patch queue.  Basically, organizing and re-organizing any changes I make to the Mozilla code-base is a lot easier with something like pbranch.

So I spent a few minutes going through the pbranch tutorial.  Eventually, I think I got the hang of it – basically, for my extension change-sets, I create a new branch using hg pnew, and commit to that branch.  I’ll keep committing to that, and when I’m all done, I’ll dump my patch to a Bugzilla attachment.  After I pass code review, someone will merge my patch.  Then I’ll remove my local branch and pull in my changes.  Sweet!

Ok, so at this point, I think I got the workflow.  Next, I needed to figure out how to write a Mochitest.  Thankfully, there’s this tutorial.

Looking through the documentation, I was reminded of Selenium a little bit.  I think it’s sort of the same idea.

So how do I write a test for my changes?  Unfortunately, the documentation on how to write a Mochitest is a little thin.  So I started hunting around, looking for examples to extrapolate from.

At some point, I found myself staring at this code.  Wow!  A full-blown API for manipulating the add-ons manager!  Great!  But it turns out that this is for Mozmill tests, and not Mochitests.  But a quick search through MXR showed that nobody was using that add-on manager API.  Argh.

Was I barking up the wrong tree here?  Where the hell were the add-on manager tests?

I quickly swallowed my pride, and decided to talk to an expert.  I used Mercurial’s log function to determine who had changed extensions.js the most.  The name Dave Townsend came up.  According to his site, he’s “Mossop” on IRC, but he wasn’t online.  The log function also mentioned the name Blair McBride.  On IRC, he’s “Unfocused”.  He was online, but unavailable.

Argh.

It was at this point that Blake told me that the Mozilla Messaging weekly meeting was going to take place.  Apparently this happens every Tuesday at 9:30AM PST.  So we marched over to a conference room, hooked up this super-advanced phone (the phone had a boot-up screen, and then showed the Mozilla logo…whoa!).  A little while later, the meeting began.  The meeting was super fast, and super efficient – especially considering the teams are spread out across the globe.  One person led the meeting, and called the different teams up to give their weekly status.  I also got to introduce myself to the team.  I rambled off something about Thunderbird+Unity and code review, and then stumbled back to my chair.  Cool times.  Anyhow, teleconferencing is going to take some getting used to.

So, with the meeting over, and still no word from Unfocused, I decided to clean up my code a bit, and then posted my patch up on Bugzilla.  I asked Dave Townsend for a code review, and said that if testing needed to happen, hopefully he’d let me know and advise me.

It didn’t take long for a response to come back.  Apparently, there are indeed tests for the add-ons manager, and they’re right here in front of my face.  Crap, I should have known.  :/

So I dove into those tests…wow, there were a lot of them.  And I didn’t have a clue as to how to run any of them.

Following the Mochitest instructions, I eventually tried this:

TEST_PATH=toolkit/mozapps/extensions/test/ make -C $OBJDIR mochitest-plain

(where the TEST_PATH is set to the folder of tests I want to run, and $OBJDIR is an environment variable that points to the objdir compilation folder for Firefox.)

But this only ran a single test out of the bunch, and there were hundreds in there.  So what was the deal?

It turned out that the tests I wanted to run required higher privileges than your average Mochitest test.  A basic Mochitest test is run using mochitest-plain.  Apparently, I needed to use mochitest-browser-chrome.  Took me a good half-hour to figure that one out.  :/

Anyhow, BAM, I had it – the tests were running.  The bad news:  I had a bunch of failing tests.  The good news:  the same tests failed without any of my changes.  So…great…I guess.

It was at this point in the day that I was given access to the Mozilla Messaging Intranet (the internal wiki).  There was plenty to read there, including something along the lines of “So you’re a new Mozilla Messaging hire”…I gave that a read.  Very interesting.

After that, I subscribed to a few internal mailing lists, and submitted my Mozilla-centric blog feed to be added to Planet Mozilla and Planet Mozilla Messaging.  Woop!

Finally, I got back to testing.  After digging through those add-ons manager tests, I finally found this:  PAYDIRT.

Sweet!  Tons of stuff for free in there:  MockProvider, createAddons… writing tests in there looked like it’d be cake.

But then it was home time.  More tomorrow.

Today, I want to learn a few things:

  1. How to use pbranch to locally commit my Firefox patch from last night
  2. How to write tests for my patch using Mochitest

Let’s start with 1:

So Mozilla uses Mercurial as its distributed version control system.  I’ve been using Git since last summer with both MarkUs and Review Board.  Mercurial is something quite different.

Started going through pbranch tutorial.

So for my extension changes, here’s what I’m going to do.  I create a new branch using pnew, and commit to that.  I’ll keep committing to that.  When I’m all done, dump my patch to a Bugzilla attachment.  Someone will merge my patch.  Then I’ll remove my local branch and pull in my changes.  Sweet!

Ok, so I’ve got the workflow (I think).  Next, I need to figure out how to write a Mochitest.  Thankfully, there’s this:  https://developer.mozilla.org/en/Mochitest

I’m reminded of Selenium a little bit.  I think it’s sort of the same idea.

So how do I write a test for my changes?  Unfortunately, the documentation on how to write a Mochitest is a little thin.  So I guess I’ll be looking at examples, and extrapolating from there.  Let’s see if I can find a similar test written elsewhere.

This is promising:  http://mxr.mozilla.org/mozilla-central/source/testing/mozmill/tests/shared-modules/testAddonsAPI.js

Hm…but this is for Mozmill, and not Mochitest.

Yep, Mozilla uses a lot of different testing frameworks.  It’s a little confusing.  Mozmill is also like Selenium.

So who is using testAddonsAPI.js?  Argh.  It looks like nobody.

I’m having a hard time finding tests for any of the stuff in extensions.js.  So I guess it’s time to talk to the expert.  I use hg log to see who the most frequent committer is to extensions.js.  The name Dave Townsend comes up.  http://www.oxymoronical.com/ .  He’s Mossop on IRC, and not online.  So who else is listed in hg log?  Blair McBride.

Had my first Mozilla Messaging weekly meeting.  It’s on a phone.  Interesting how it’s organized…really not like any phone conversation I’ve been a part of.  People mute themselves…unmute when it’s time to talk.  Awkward pauses are rampant…pretty cool though.  Coordinating around the world.  Nice!

Ok, back to Blair McBride…after a little hunting around, it turns out his IRC nickname is Unfocused.  I’ve found him in a few of the Mozilla IRC channels, and am waiting to hear back from him.

No word.  So, after some scrubbing, I posted my patch up on Bugzilla, and asked Dave Townsend for a review.  If testing needs to happen, hopefully he’ll let me know and advise me.

Whoop, just got a message.  The tests are here:  http://mxr.mozilla.org/mozilla-central/source/toolkit/mozapps/extensions/test/.  Crap, I should have known.  :/

Ok, let’s examine those tests… hold up.  How do I run these?  Trying to run that directory with Mochitest, I only get 1 test to run…wtf?

Success!  TEST_PATH=toolkit/mozapps/extensions/test/ make -C . mochitest-browser-chrome

Sweet, so all those tests run.  Bad news though – a bunch of failing tests.  Going to see if it was my patch.  Ok, looks like a bunch of tests were failing before I even committed anything.  That’s good, I guess.

So, I have to compare them in order to ensure that there aren’t MORE failing tests after my patch goes in.

I now have access to the Intranet.  Sweet – lots of stuff to read here.  “So you’re a new Mozilla Messaging hire…”

Just subscribed to a few internal mailing lists, and submitted my Mozilla feed to be added to Planet Mozilla and Planet Mozilla Messaging.  Woop!

Ok, back to testing, I’ve found this:  http://mxr.mozilla.org/mozilla-central/source/toolkit/mozapps/extensions/test/browser/browser_sorting.js  This looks like paydirt.

The CEO of Mozilla Messaging (David Ascher) just welcomed me to MoMo:

mconley: welcome to the madhouse

Awesome. [WELCOME TO THE JUNGLE]

Sweet!  Tons of stuff for free:  MockProvider, createAddons – I think this’ll be cake tomorrow.

Ooops – Dave Townsend just asked a good question:  what about extensions that are to-be-installed?  to-be-uninstalled?  Where do they go?

I’ll have to check that out tomorrow.

But right now, it’s home time.

Filing Defects in Review Board

In my last post, I talked about an extension for Review Board that would allow users to register “defects”, “TODOs” or “problems” with code that’s up for review.

After chatting with the lead RB devs for a bit, we’ve decided to scrap the extension.

[audible gasp, booing, hissing]

Instead, we’re just going to put it in the core of Review Board.

[thundering applause]

Defects

Why is this useful?  I’ve got a few reasons for you:

  1. It’ll be easier for reviewees to keep track of things left to fix, and similarly, it’ll be harder for reviewees to accidentally skip over fixing a defect that a reviewer has found
  2. My statistics extension will be able to calculate useful things like defect detection rate, and defect density
  3. Maybe it’s just me, but checking things off as “fixed” or “completed” is really satisfying
  4. Who knows, down the line, I might code up an extension that lets you turn finding/closing defects into a game

However, since we’re adding this to the core of Review Board, we have to keep it simple.  One of Review Board’s biggest strengths is in its total lack of clutter.  No bells.  No whistles.  Just the things you need to get the job done.  Let the extensions bring the bells and whistles.

So that means creating a bare-bones defect-tracking mechanism and UI, and leaving it open for extension.  Because who knows, maybe there are some people out there who want to customize what kind of defects they’re filing.

I’ve come up with a design that I think is pretty simple and clean.  And it doesn’t rock the boat – if you’re not interested in filing defects, your Review Board experience stays the same.

Filing a Defect

I propose adding a simple checkbox to the comment dialog to indicate that this comment files a defect, like so:

Comment Defect Checkbox Screenshot

No bells. No whistles. Just a simple little checkbox.

While I’m in there, I’ll try to toss in some hooks so that extension developers can add more fields – for example, the classification or the priority of the defect.  By default, however, it’s just a bare-bones little checkbox.

So far, so good.  You’ve filed a defect.  Maybe this is how it’ll look in the in-line comment viewer:

The inline comment viewer is showing that a defect report has been filed.

A defect has been reported!

Two Choices

A reviewer can file defect reports, and the reviewee is able to act on them.

Let’s say I’m the reviewee.  I’ve just gotten a review, and I’ve got my editor / IDE with my patch waiting in the background.  I see a few defect reports have been filed.  For the ones I completely agree with, I fix them in my editor, and then go back to Review Board and mark them as Fixed.

The defect report has been marked as being fixed.

All fixed!

It’s also possible that I might not agree with one or more of the defect reports.  In this case, I’ll reply to the comment to argue my case.  I might also mark the defect report as Pass, which means, “I’ve seen it, but I think I’ll pass on that”.

The defect report has been marked as "pass".

I think I'll pass on that, thanks.

These comments and defect reports are also visible in the review request details page:

A defect report has been filed, and we're in the review request detail page.

A defect has been filed.

The defect is marked as fixed, and we're in the review request detail page.

All fixed up.

We're passing on the defect report, and we're in the review request detail page.

It's all good - just pass this defect report.
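
To make the data model concrete, here’s a rough sketch of how the bare-bones mechanism could look on the Django side.  To be clear, this is just me thinking out loud – the model and field names (is_defect, defect_status) are made up for illustration, not anything that exists in Review Board today:

from django.db import models

class Comment(models.Model):
    """Simplified stand-in for Review Board's comment model."""
    text = models.TextField()

    # The bare-bones checkbox: does this comment file a defect?
    is_defect = models.BooleanField(default=False)

    # Once a defect is filed, the reviewee can mark it as Fixed or Pass.
    DEFECT_STATUSES = (
        ('O', 'Open'),
        ('F', 'Fixed'),
        ('P', 'Pass'),
    )
    defect_status = models.CharField(max_length=1,
                                     choices=DEFECT_STATUSES,
                                     default='O')

Extensions could then hang extra fields (classification, priority, and so on) off of this through hooks, while the core stays a checkbox and three states.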

Thoughts?

What do you think?  Am I on the right track?  Am I missing a case?  Does “pass” make sense?  Will this be useful?  I’d love to hear your thoughts.

Review Board Statistics Extensions: Karma, Stopwatch, and FixIt

I just spent the long weekend in Ottawa and Québec City with my parents and my girlfriend Em.

During the long drive back to Toronto from Québec City, I had plenty of time to think about my GSoC project, and where I want to go with it once GSoC is done.

Here’s what I came up with.

Detach Reviewing Time from Statistics

I think it’s a safe assumption that my reviewing-time extension isn’t going to be the only one to generate useful statistical data.

So why not give extension developers an easy mechanism to display statistical data for their extension?

First, I’m going to extract the reviewing-time recording portion of the extension. Then, RB-Stats (or whatever I end up calling it) will introduce its own set of hooks for other extensions to register with.  This way, if users want some stats, there will be one place to go to get them.  And if an extension developer wants to make some statistics available, a lot of the hard work will already be done for them.

And if an extension has the capability of combining its data with another extension’s data to create a new statistic, we’ll let RB-Stats manage all of that business.
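
I haven’t designed the hook API yet, but here’s roughly the shape I have in mind.  None of these class or method names exist anywhere yet – they’re purely illustrative:

class StatisticProviderHook(object):
    """Hypothetical RB-Stats hook that extensions subclass to contribute data."""

    def get_statistics(self, user):
        """Return a dict of {label: value} for the given user."""
        raise NotImplementedError


class StopwatchStatistics(StatisticProviderHook):
    """What a Stopwatch-style extension might register with RB-Stats."""

    def get_statistics(self, user):
        # In reality, this would query Stopwatch's recorded reviewing times.
        return {'Total reviewing time (hrs)': 12.5}

RB-Stats would then just iterate over whatever providers have registered to build its statistics view, and combining two providers’ numbers into a new statistic stays RB-Stats’ problem, not the extensions’.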

Stopwatch

The reviewing-time feature of RB-Stats will become an extension on its own, and register its data with RB-Stats.  Once RB-Stats and Stopwatch are done, we should be feature equivalent with my demo.

Review Karma

I kind of breezed past this in my demo, but I’m interested in displaying “review karma”.  Review karma is the reviews/review-requests ratio.

But I’m not sure karma is the right word.  It suggests that a low ratio (many review requests, few reviews) is a bad thing.  I’m not so sure that’s true.

Still, I wonder what the impact of displaying review karma will be.  Not just in the RB-Stats statistics view, but next to user names?  Will there be an impact on review activity when we display this “reputation” value?

FixIt

This is a big one.

Most code review tools allow reviewers to register “defects”, “todos” or “problems” with the code up for review.  This makes it easier for reviewees to keep track of things to fix, and things that have already been taken care of.  It’s also useful in that it helps generate interesting statistics like defect density and defect detection rate (assuming Stopwatch is installed and enabled).
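
I haven’t settled on exact formulas yet, but the usual definitions are simple enough: defect density is defects found per amount of code reviewed, and defect detection rate is defects found per hour spent reviewing.  Something like this (the function and argument names are mine, just for illustration):

def defect_density(defects_found, lines_reviewed):
    """Defects per thousand lines of code (KLOC) reviewed."""
    if lines_reviewed == 0:
        return 0.0
    return defects_found / (lines_reviewed / 1000.0)


def defect_detection_rate(defects_found, review_hours):
    """Defects found per hour of reviewing time (courtesy of Stopwatch)."""
    if review_hours == 0:
        return 0.0
    return defects_found / float(review_hours)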

I’m going to tackle this extension as soon as RB-Stats, Stopwatch and Karma are done.  At this point, I’m quite confident that the current extension framework can more or less handle this.

Got any more ideas for me?  Or maybe an extension wish-list?  Let me know.

Review Board Statistics Extension – Demo Time

If I’ve learned anything from my supervisor, it’s to demo. Demo often. Step out of the lab and introduce what you’ve been working on to the world. Hit the pavement and show, rather than tell.

So here’s a video of me demoing my statistics extension for Review Board.  It’s still in the early phases, but a lot of the groundwork has been taken care of.

And sorry for the video quality.  Desktop capture on Ubuntu turned out to be surprisingly difficult for my laptop, and that’s the best I could do.

So, without further ado, here’s my demo (click here if you can’t see it):

Not bad!  And I haven’t even reached the midterm of GSoC yet.  Still plenty of time to enhance, document, test, and polish.

If you have any questions or comments, I’d love to hear them.

Python Eggs: Sunny Side Up, and Other Goodies (or How I Learned to Stop Worrying and Start Coding)

Cooking with Eggs

Every now and then, the computer gods smile and give me a freebie.

I’ve been worrying my mind out over a few problems / obstacles for my Review Board extensions GSoC project.  In particular, I’ve been worrying about dealing with extension dependencies, conflicts, and installation.

I racked my brain.  I came up with scenarios.  I drew lots of big scary diagrams on a wipe board.

And then light dawned.

Batteries Come Included

Enter Setuptools and Python Eggs.

All of those things I was worried about having to build and account for?  When using Python Eggs, it’s all built in. Dependencies?  Taken care of. Conflicts?  Don’t worry about it.  Installation?  That’s what Setuptools and Python Eggs were built for!

In fact, it even looks like Setuptools was designed with extensible applications in mind.

Wait, really?  How?

Here’s the setup.py file for the rb-reports extension in the rb-extensions-pack on Github:

from setuptools import setup, find_packages

PACKAGE="RB-Reports"
VERSION="0.1"

setup(
    name=PACKAGE,
    version=VERSION,
    description="""Reports extension for Review Board""",
    author="Christian Hammond",
    packages=["rbreports"],
    entry_points={
        'reviewboard.extensions':
        '%s = rbreports.extension:ReportsExtension' % PACKAGE,
    },
    package_data={
        'rbreports': [
            'htdocs/css/*.css',
            'htdocs/js/*.js',
            'templates/rbreports/*.html',
            'templates/rbreports/*.txt',
        ],
    }
)

Pay particular attention to the “entry_points” parameter.  What this is doing is registering rbreports.extension:ReportsExtension to the entry point “reviewboard.extensions”.

“Hold up!”, I hear you asking. “What’s an entry point?”

Entry Points

An entry point is a unique identifier associated with an application that can accept extensions.

The unique identifier for Review Board extensions is “reviewboard.extensions”.

This is the first handshake, more or less, between Review Board and any extensions:  in order for Review Board to “see” the extension, the extension must register an entry point at “reviewboard.extensions”.

This blog post shows how extensions can be found and loaded up.
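
The short version, for the impatient: Setuptools ships with pkg_resources, and iterating over everything registered against a group is a one-liner.  Here’s a minimal sketch of the discovery side (not Review Board’s actual loading code):

import pkg_resources

# Every egg that registered against the "reviewboard.extensions" group is
# discoverable here, no matter how or where it was installed.
extension_classes = [
    entry_point.load()  # imports e.g. rbreports.extension:ReportsExtension
    for entry_point in pkg_resources.iter_entry_points('reviewboard.extensions')
]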

Other Goodies

INSTALLED_APPS and Django

I remember also being worried about how to create tables in Django for extension models.  I thought “holy smokes, I’m going to have to either shoehorn some raw SQL into the extension manager, or maybe even trust the extension developers to write the CREATE TABLE queries themselves!”.

Luckily, there’s a better alternative.

Django knows about its applications through a setting called INSTALLED_APPS – a list of app names. When you add a new model to a Django project, you simply add the model’s app to the INSTALLED_APPS list, and run “manage.py syncdb”.  Django does the magic, bingo-bango, and boom – tables created.

So if a new extension has some tables it needs created, I simply insert the app name of the extension into INSTALLED_APPS when the extension is installed, and call syncdb programmatically.  Tables created:  no sweat.
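
In other words, the extension manager can do something along these lines when an extension gets installed.  This is only a sketch (the real thing needs more care, since Django caches its app list):

from django.conf import settings
from django.core.management import call_command

def install_extension_models(app_name):
    # Make Django aware of the extension's models app...
    if app_name not in settings.INSTALLED_APPS:
        settings.INSTALLED_APPS += (app_name,)

    # ...then create any missing tables, just like running "manage.py syncdb".
    call_command('syncdb', interactive=False)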

django-evolution

Creating tables is easy.  But what if an extension gets updated, and the table needs to be modified?  Sounds like we’ve got a mess on our hands.

And don’t expect Django to save you.  When you modify a model in Django, they expect you to go into that DB and alter that table by hand:

[syncdb] creates the tables if they don’t yet exist. Note that syncdb does not sync changes in models or deletions of models; if you make a change to a model or delete a model, and you want to update the database, syncdb will not handle that.
From The Django Book, Chapter 5: Models

Thankfully, there’s a mechanism that’s already built into Review Board that makes this trouble go away:  django-evolution.  Django-evolution, when used properly, will automatically detect changes in application models, and alter the database tables accordingly.  This is how Review Board does upgrades.
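
To give a feel for what that looks like: an evolution is just a small Python file listing mutations, something like the following (the Defect model and priority field here are hypothetical, purely for illustration):

from django.db import models
from django_evolution.mutations import AddField

# "Version 2" of a hypothetical extension adds a priority column to its
# Defect model; django-evolution applies this instead of a hand-written ALTER.
MUTATIONS = [
    AddField('Defect', 'priority', models.IntegerField, initial=0),
]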

And to top that off, RB co-founder Christian Hammond just became the django-evolution maintainer.

Wow.  Everything is falling neatly into place.