Monthly Archives: March 2010

Does Peer Grading Make Students Better Programmers?

The Question

My past few blog posts have been concerned with the usefulness of peer grading.  Steve Joordens showed that peer grading was pedagogically useful for first-year psych students…but what about computer science students?  Would they learn from it?  Would they become better programmers?

We don’t know.

Maybe it’s time to find out.

The Experiment

It’s pretty simple, actually.

I have two groups of students.  Let’s call them groups A and B.

For each student in A, have them complete a simple programming assignment (call it P1).  Once they’re finished, have them complete a second simple programming assignment (call it P2).

For each student in B, have them complete P1.  Once that’s done, have them view 5 or 6 different mocked up submissions, also for P1.  For each submission, have the students fill out a rubric and assign a grade. Once finished, the students then complete P2.

Then, I get some fellow graduate students to mark my mocked up submissions, the group A P1/P2 submissions, and the group B P1/P2 submissions.

If grading makes students better programmers, we should see higher P2 marks for the students in group B than for the students in group A.

Bonuses, and Other Concerns

This experiment is nice and simple. And, besides showing whether peer grading makes students better programmers, it gives us a couple of bonuses:

  • It tells us whether graduate students tend to agree on what marks to give to submissions.  If they don’t agree, and the marks wildly differ…we might have a problem (a rough agreement check is sketched just after this list)
  • It tells us if some number of students can, on average, approximate the grade a TA would give on a submission
  • It can tell us the average inspection rate for both students and TAs
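To make that first bullet concrete, here’s a rough sketch of the kind of agreement check I have in mind – it assumes each grader’s marks are stored in submission order, and the grader names and numbers are invented:

```python
from itertools import combinations
import numpy as np

# Hypothetical marks: each grader's marks, in the same submission order.
grader_marks = {
    "grad_student_1": [7, 5, 9, 4, 8, 6],
    "grad_student_2": [8, 5, 8, 3, 9, 6],
    "grad_student_3": [6, 4, 9, 5, 7, 7],
}

for (name_a, a), (name_b, b) in combinations(grader_marks.items(), 2):
    a, b = np.array(a), np.array(b)
    # Pearson r near 1.0 means the two graders rank the submissions similarly;
    # low or negative values would be the "marks wildly differ" problem.
    r = np.corrcoef(a, b)[0, 1]
    # Mean absolute difference is a blunter check that the levels agree too.
    mad = np.abs(a - b).mean()
    print(f"{name_a} vs {name_b}: r = {r:.2f}, mean abs diff = {mad:.2f}")
```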

I’ll have to do some randomization here and there to eliminate ordering effects – for example, randomizing the order of the criteria on the rubric, randomizing which assignment comes first and which comes second, randomizing the order in which the mocked-up submissions are shown, etc.
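Here’s a minimal sketch of what that counterbalancing might look like, using nothing but the standard library.  The participant IDs and rubric criteria are placeholders:

```python
import random

participants = ["s01", "s02", "s03", "s04", "s05", "s06", "s07", "s08"]
rubric_criteria = ["correctness", "style", "error handling", "documentation"]

random.shuffle(participants)
for i, student in enumerate(participants):
    # Alternate which assignment is seen first, so ordering effects aren't
    # confounded with the assignments themselves.
    order = ("P1", "P2") if i % 2 == 0 else ("P2", "P1")
    # Each participant also sees the rubric criteria in a fresh random order.
    criteria = random.sample(rubric_criteria, k=len(rubric_criteria))
    print(student, order, criteria)
```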

One thing to consider though:  what effect does simply seeing the rubric have on students?

I’ve been in courses where I’ve not been allowed to see the marking rubric for some assignment.  It’s frustrating.  Seeing the rubric helps me focus on the areas that I’ll be marked on.

So what if just seeing the rubric makes the students “better programmers”?  One way to counteract this would be to have the rubric for the second assignment be quite a bit different than the one for the first assignment.

Statistics

Oh yeah.  Stats.  Not my strongest subject.  I’m going to have to brush up on this (and probably enlist some help within the department) if I’m going to do this properly.  I’m probably not going to get as many participants as I think I will…so I have to accommodate small N.  Hrm.
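My current guess is that something non-parametric, like a Mann-Whitney U test, will be friendlier to a small N than a t-test.  A minimal sketch, assuming SciPy is available and P2 is marked out of 10 (the numbers are invented placeholders, not data):

```python
from scipy.stats import mannwhitneyu

p2_marks_group_a = [6, 7, 5, 8, 6, 7]  # did P1, then P2
p2_marks_group_b = [7, 8, 8, 9, 6, 8]  # peer-graded mocked up P1 submissions in between

# One-sided test: do group B's P2 marks tend to be higher than group A's?
stat, p_value = mannwhitneyu(p2_marks_group_b, p2_marks_group_a, alternative="greater")
print(f"U = {stat}, one-sided p = {p_value:.3f}")
```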

Anyhow, this is where my summer experiment seems to be heading.  What do you think?  I’m all ears.

Teaching Peer Code Review By Consensus

I was really happy to see all of the response to my last blog post. Lots of great ideas and suggestions.  Thanks all!

One of the problems brought up with my original idea for teaching code review was that it punishes students twice if they don’t understand a programming concept.

For example, if a student does not understand what pipes are for and how they work, they’re probably going to do pretty poorly on their pipes assignment in a systems intro course.  So there’s one slam for the student.

The second time is when they review their peer’s code.  If they still don’t understand how pipes work, their reviews are going to be pretty trashy.  And they’ll get a poor mark for that.  And that’s the second slam.

The problem here is that the students don’t get any feedback before they go into the peer review process.  For the “weaker” students, this essentially means bringing a knife to a tank fight.

So here’s an idea:

  1. After an assignment due date passes, and the students have submitted their code, the students are randomly placed into groups of 3 or 4.
  2. Each group is assigned a single random submission from the ones collected from the students.
  3. Each student in their group individually, and privately, performs a review on their assigned submission.  They fill out a rubric, make comments, etc.  They are not allowed to interact with the other members of their group.
  4. After the students have finished their review, they can converse with their other group members.  The group must produce another review – but this one is by consensus.  They must work together to find the most appropriate mark.
  5. Finally, after the consensus reviews are in, the groups are disbanded.  Students are then shown their own code submissions.  They must do a final review on their own code by filling in the marking rubric.
  6. Students’ marks will be based on the following (a rough scoring sketch follows this list):
    1. The mark that the TA gave their own code
    2. How closely their individual review of the group submission agrees with the TA’s assessment
    3. How closely the group’s consensus review agrees with the TA’s assessment
    4. How closely the review of their own code agrees with the TA’s assessment
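Here’s a rough sketch of how those four components might be combined into a single mark.  The 10-point scale, the linear closeness measure, and the 70/10/10/10 weighting are assumptions on my part, not part of the model above:

```python
def closeness(student_mark, ta_mark, scale=10.0):
    """Map the gap between a student's mark and the TA's mark onto 0..1."""
    return 1.0 - abs(student_mark - ta_mark) / scale


def final_mark(ta_mark_for_own_code,
               individual_review, consensus_review, self_review,
               ta_mark_for_group_submission,
               weights=(0.7, 0.1, 0.1, 0.1), scale=10.0):
    """Combine the TA's mark on the student's code with the three agreement components."""
    w_code, w_ind, w_cons, w_self = weights
    return (w_code * ta_mark_for_own_code
            + w_ind * scale * closeness(individual_review, ta_mark_for_group_submission, scale)
            + w_cons * scale * closeness(consensus_review, ta_mark_for_group_submission, scale)
            + w_self * scale * closeness(self_review, ta_mark_for_own_code, scale))


# Example: the student's own code earned 6/10 from the TA, and their individual,
# consensus, and self reviews all landed within a point or two of the TA.
print(final_mark(6, individual_review=7, consensus_review=8, self_review=7,
                 ta_mark_for_group_submission=8))  # -> 7.0
```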

From my viewpoint, this model has several obvious strengths and weaknesses.

One major strength is that, even if students do poorly on the coding portion of their assignment, they might still have an opportunity to make it up by learning from their peers during the group consensus review.  They’ll also have an opportunity to demonstrate their new-found understanding by reviewing their own code and admitting its shortcomings.

A major weakness of the idea is the sheer organizational complexity.  Did you see how many steps there are?  That’s a lot of work.

Plus, the model makes some pretty wild assumptions.  A few off the top of my head:

  • It assumes students can actually learn by performing peer review together, independent of what piece of code they’re reviewing
  • It assumes students will actually reach a final consensus during the group review.  What about bullies?  What about timid folks?

There are probably more that I’m not seeing yet.

Anyhow, this was an idea I had a few days ago, and I just wanted to write it down.

Lessons from peerScholar: An Approach to Teaching Code Review

We Don’t Know How To Teach Code Review

If you go to my very first blog post about code review, you’ll discover what my original research question was:

Code reviews. They can help make our software better. But how come I didn’t learn about them, or perform them in my undergrad courses?  Why aren’t they taught as part of the software engineering lifecycle right from the get-go?  I learn about version control, but why not peer code review?  Has it been tried in the academic setting?  If so, why hasn’t it succeeded and become part of the general CS curriculum?  If it hasn’t been tried, why not?  What’s the hold up?  What’s the problem?

I have mulled the question for months, and read several papers that discuss different models for introducing code review into the classroom.

But I’m no teacher.  I really don’t know what it’s like to run a university level course.  Thankfully, two course instructors from our department gave their input on the difficulty of introducing peer code review in the classroom.  Here’s the first:

The problem is that it is completely un-assessable. You can’t get the students to hand in reports from their inspection, and grade them on it, because they quickly realise it’s easier to fake their reports than it is to do a real code inspection. And the assignment never gets them to understand and internalize the real reasons for doing code inspection – here they just do it to jump through an artificial hoop set by the course instructor.

What we really need to do is to assess code quality, and let them figure out for themselves how the various tools we show them (e.g. test-case first, code inspection, etc) will help them achieve that quality. Better still, we give them ways of measuring directly how the various tools they use affect code quality for each assignment. But I haven’t thought enough yet about how to achieve this.

So, I’ve long since dropped the idea of a specific marked assignment on code inspections, but still teach inspection in all of my SE courses. I need to find a way to teach it so that the students themselves understand why it’s so useful.

(From Steve Easterbrook, commenting on this post)

And here’s the second:

1. How many different tasks can we ask students to do on a 3-week assignment? I think students should learn to use an IDE, a debugger, version control, and a ticket system. We have been successful in getting students to use version control because that’s the only way they can submit an assignment. We have had mixed success getting students to use IDE’s and debuggers, partly because it is hard to assign marks for their use. We have been even less successful in convincing students to use tickets because a 3-week assignment isn’t big enough or long enough to make tickets essential.

2. If the focus of my course is teaching operating systems, how much time (and grades) should I devote to software development tools and practices that aren’t centered on operating systems?

(From Karen Reid, commenting on this post)

All of this swirls around a possible answer that Greg Wilson and I have been approaching since September:

What if peer code review isn’t taught in undergraduate courses because we just don’t know how to teach it?  We don’t know how to fit it in to a curriculum that’s already packed to the brim.  We don’t know how to get students to take it seriously.  We don’t know if there’s pedagogical value, let alone how to show such value to the students.

If that’s really the problem… Greg and I may have come up with a possible solution.

But First, Some Background

In 2008, Steve Joordens and Dwayne Pare published “Peering into Large Lectures: Examining Peer and Expert Mark Agreement Using peerScholar, an Online Peer Assessment Tool.”

It’s a good read, but in the interests of brevity, I’ll break it down for you:

  1. Joordens and Pare are both at the University of Toronto Scarborough, in the Psych Department
  2. Psych classes (especially for the first year) are large.  For large classes, it is generally difficult to introduce writing assignments simply due to the sheer volume of writing that would need to be marked by the TAs.  Alternatives (like multiple-choice tests) are often used to counteract this.
  3. But writing is important.
  4. The idea:  what if we let students grade one another?  There’s research showing the benefits of peer evaluation for writing assignments.  So let’s see what kind of grades peers give to one another.
  5. A tool is built (peerScholar), and an experiment is run:  after submitting their writing assignments, show students 5 submissions from other students, and have them grade the work (with specific grading instructions from the instructor).  Then, compare the grades that the students gave with grades from the TAs.
  6. A significant positive correlation was found between averaged TA marks and average peer marks.  More statistical analysis shows that there is no significant difference between the agreement levels of TA and peer markers.
  7. To ensure repeatability, a second experiment is run – similar to the first.  Except, this time, students who receive the marks from their peers are able to “mark the marker” and flag any marks that seem suspicious (a 1/10, for example, if all the other students and the TA gave something closer to a 7/10).
  8. It looks good – numbers were closer this time.
  9. Conclusion:  the average grade given by a set of peer markers was similar to the grade given by the TAs, both in overall level and in the rank ordering of assignments.

This is a very interesting result.  Why can’t we apply it to courses in a computer science department?  What if students started marking each other’s code?

What they’d be doing would be called code review.

The Idea

Let’s modify Joordens and Pare’s model a little bit.

Let’s say I’m teaching an undergraduate computer science course where students tend to do quite a bit of coding.  Traditionally, source code written by students would be collected through some mechanism or another, be marked by TAs, and then be returned to students after a few weeks.

What if, after all of the submissions have been collected, each student had to anonymously grade 5 submissions chosen randomly from the system (with the only stipulation being that students cannot grade their own work)?

But here’s the twist:

Instead of just calculating a mark for students based on the peer reviews that they get, how about we mark the students based on the reviews that they give – specifically, on how close they come to the marks that the TAs give?

So now a student’s mark will be partially based on how well they are able to review code.
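To make that concrete, here’s a minimal sketch of one way the reviewing portion of the mark could be computed.  The 10-point scale and the linear penalty are illustrative assumptions, not a settled scheme:

```python
def reviewing_mark(student_marks, ta_marks, scale=10.0):
    """Score a student's reviews by how closely they track the TA's marks."""
    gaps = [abs(s - t) for s, t in zip(student_marks, ta_marks)]
    # Perfect agreement earns full marks; each point of disagreement,
    # averaged over the reviews, costs a point.
    return max(0.0, scale - sum(gaps) / len(gaps))


# A student who grades their five randomly assigned submissions within a point
# or so of the TA ends up with a strong reviewing mark.
print(reviewing_mark([7, 4, 9, 6, 8], [8, 4, 8, 6, 7]))  # -> 9.4
```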

Questions / Answers (or Concerns / Freebies)

I can think of a few initial concerns with this idea.

Q: What if the TA makes a huge mistake, or makes an oversight?  They’re not infallible.  How can students possibly make the same mistake / give the same mark?

A: I agree that TAs are not infallible.  Nobody is.  However, if a TA gives a submission a 3/10, and the rest of the students give 9/10s, this is useful information.  It either means that the TA missed something, or it might signal that the students in general have not learned something crucial.  In either case, this sort of problem can be easily detected and sorted out via human intervention.
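A sketch of that detection step, just to show how simple it could be (the 3-point threshold is arbitrary):

```python
def needs_human_attention(ta_mark, peer_marks, threshold=3.0):
    """Flag submissions where the TA and the peer average are far apart."""
    peer_mean = sum(peer_marks) / len(peer_marks)
    return abs(ta_mark - peer_mean) >= threshold


# The 3/10-versus-9/10s case from above gets flagged for a second look.
print(needs_human_attention(3, [9, 8, 9, 10, 9]))  # True
```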

Q: What if students game the system by just giving their peers all 10/10s, or try to screw each other by just giving 0/10s?

A: Remember, students are being marked on their ability to review.  If the TAs gave a more appropriate mark, and a student starts behaving as above, they’re going to get a poor reviewing mark.  No harm done to the reviewee.

Q: I’m already swamped.  How can I cram a system like this into my course?

A: I’m one of the developers on MarkUs, a tool that is being used to grade source code for students at the University of Toronto and the University of Waterloo.  It would not be impossible to adapt MarkUs to follow this model.  Through MarkUs, a lot of this idea can be automated.  Besides some possible human intervention for edge cases, I don’t see there being a whole lot of course-admin overhead to get this sort of thing going.  But it does mean a little bit more work for students who have to review the code.

Q: This is nice in theory, but is there any real pedagogical value in this?  And if so, how can I show it to my students?

A: First off, as a recent undergraduate student at UofT, I must say how rare it is to be given the opportunity to read another student’s code.  It just doesn’t happen much.  I would have found it interesting – I’d be able to see the techniques that my peers employed to solve the same problems that I was trying to solve.  It would give me a good informal measuring stick to see how I rank in the class – and students always want to know how they rank in the class.

Would they learn anything from it though?

That’s a good question.  Would students learn anything from this, and realize the benefits?  Remember – that’s what Steve Easterbrook says was the major stumbling block to introducing peer review…we have to show them that it’s useful.

The Questions

  • How good are students at grading their peers?  How close do they get to the grades that a TA would give?
    • By study year
    • By their perceived programming ability
    • By their perceived programming experience
    • By their programming confidence
  • What happens to students’ ability to review their peers as they perform each review?  Do they get better after each one?  And is there a point where their accuracy gets poorer from fatigue?
  • How many student reviewers are needed to approximate the grade that a TA would give? (a resampling sketch follows this list)
  • How long do students generally take to peer review code? (bonus)
  • How long do graduate students generally take to mark an assignment? (bonus)
  • Do the students actually learn anything from the process?
  • How do the students feel about being graded on their ability to review?
    • Do they think that this process is fair?
    • Do they think that they’re learning anything useful?
    • Do they feel like it is worth their time?
    • Do they enjoy reading other students’ code?
    • If it was introduced into their classes, how would they feel?
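For the “how many reviewers?” question from the list above, one approach once the data is in would be to repeatedly sample k student marks per submission and see how far their average tends to land from the TA’s mark.  A sketch with placeholder numbers:

```python
import random


def error_for_k(peer_marks_by_submission, ta_marks, k, trials=1000):
    """Average |mean of k sampled peer marks - TA mark| across submissions."""
    total, count = 0.0, 0
    for submission, peers in peer_marks_by_submission.items():
        ta = ta_marks[submission]
        for _ in range(trials):
            sample = random.sample(peers, k)
            total += abs(sum(sample) / k - ta)
            count += 1
    return total / count


peer_marks_by_submission = {"sub1": [7, 8, 6, 9, 7, 8, 5], "sub2": [4, 5, 3, 4, 6, 5, 4]}
ta_marks = {"sub1": 8, "sub2": 4}

for k in range(1, 6):
    print(k, round(error_for_k(peer_marks_by_submission, ta_marks, k), 2))
```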

Lots of questions.  Luckily, it just so happens that I’m a scientist.

The Experiment

First, I mock up (or procure) 10 submissions for a programming assignment that our undergraduates might write.

I then get/convince some graduate students to grade those 10 submissions to the best of their ability, using MarkUs.  These marks are recorded.

I then take a cross-section of our undergraduate student body, and (after a brief survey to determine their opinions of their coding experience/confidence), I get the students to peer review and grade those 10 submissions.  They will be told that their goal is to try to give the same type of marks that a graduate student TA might give.

After the grades are recorded, I take the submission that they reviewed first, and get them to grade it again.  Do they get closer to the TA’s mark than they did on their first attempt?

Students are then given a second survey (probably Likert scales) to assess their opinions on the process.  Would it be fair if your ability to grade were part of your mark?  Did you get anything useful out of this?  Did you feel that it was worth your time?  Did you enjoy reading other students’ code?  How would you feel if this were part of your class?  …

The final survey will (hopefully) knock out the last series of questions in my list.  Timing information recorded during marking will help answer the bonus questions.  Analysis of the marks that the students give, in relation to the marks that the TAs give, will hopefully help answer the rest.
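For the “do they get closer on the re-grade?” question specifically, the data is paired (each participant grades the same submission twice), so something like a Wilcoxon signed-rank test on the absolute gaps to the TA’s mark seems like a reasonable fit.  A sketch, again assuming SciPy and invented numbers:

```python
from scipy.stats import wilcoxon

# Each pair is one participant's absolute distance from the TA's mark on
# their first grading attempt versus the re-grade at the end of the session.
first_attempt_gap = [3, 2, 4, 1, 3, 2, 5, 2]
second_attempt_gap = [1, 2, 2, 1, 1, 0, 3, 2]

# Paired, non-parametric test: did the gap to the TA's mark shrink?
stat, p_value = wilcoxon(first_attempt_gap, second_attempt_gap, alternative="greater")
print(f"W = {stat}, one-sided p = {p_value:.3f}")
```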

What Am I Missing?

Am I missing anything here?  Is there a gaping hole in my thinking somewhere?  Would this be a good, interesting experiment to run?  For those who teach…if my results are encouraging, would you ever try implementing this in your classroom?

And if this was introduced into the classroom…what would happen to student learning?  What would happen to marks?  How would instructors like it?

So, what do you think?  I’m all ears.

ReviewBoard Extensions: A State of Affairs

One of my potential research experiments involves augmenting ReviewBoard, and so, I’ve been studying the ReviewBoard code.

I want to give ReviewBoard the ability to record certain statistics – for example, the number of defects found in a review request, the number of defects found per reviewer, defect density, inspection rate, and so on.
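For reference, these are the kinds of formulas I’d want recorded – this is just the standard inspection arithmetic with made-up numbers, not anything ReviewBoard exposes today:

```python
def defect_density(defects_found, lines_inspected):
    """Defects per thousand lines of code (KLOC) inspected."""
    return defects_found / (lines_inspected / 1000.0)


def inspection_rate(lines_inspected, hours_spent):
    """Lines of code inspected per hour."""
    return lines_inspected / hours_spent


review_request = {"defects": 7, "loc": 450, "hours": 1.5}  # hypothetical numbers
print(defect_density(review_request["defects"], review_request["loc"]))  # ~15.6 defects/KLOC
print(inspection_rate(review_request["loc"], review_request["hours"]))   # 300.0 LOC/hour
```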

It turns out I’m not the only one who wants this to happen.  Check out this thread.

So it turns out that the devs building ReviewBoard are planning on integrating an extension framework.  Theoretically, users could then write their own extensions to customize ReviewBoard as they see fit.  Think WordPress plugins, but for a code review tool.  Neat.

According to the above link:

A lot of this [extension framework] already exists in a private development branch, and it will be one of our primary focuses as soon as 1.0 goes out.

However, I found the following quote from a relatively recent mailing list message:

[with regards to the extensions framework]…We have to finish building it, testing it, getting other features we have planned into the release, and performing the release. So, we’re looking at a year or more. But I don’t see how we’d get the functionality you want short-term.

Hrmph.  My thinking:  maybe I can help them with it, and we can crank this puppy out sooner.

So where is this private branch?  It actually took a little bit of detective work…

Detective Work

I started by looking at the ReviewBoard repo on Github, which I assumed that the extensions branch would fork from.

I was right – check out the fork network.  There’s a branch helpfully labeled “extensions” forked by davidt, one of the ReviewBoard devs.

Two observations:

  1. This fork hasn’t been updated in about a month
  2. The extensions branch references code from Djblets that doesn’t exist in the Djblets master branch

So I perform the same exercise – I glance at the Djblets fork network, and see the handily named “extensions” fork that is in davidt’s git repo.  Nice.

Ok, so those are the forks/branches that I’m going to grab.

A Fluke

So, funny story:  I was able to get this “extensions” version of ReviewBoard working on my work machine.  But it was a complete fluke – the extensions branch of Djblets is actually missing some pieces.  The only reason it worked for me was that I accidentally ran with the master checkout of Djblets first.  That errored out, but it also generated a bunch of compiled Python.  When I then switched to the extensions branch, the compiled Python was left behind, and it filled in the pieces that are missing from the extensions branch of Djblets – so it ran.

For a guy who is pretty new to both Git and Django, I found that problem pretty frustrating to track down – especially once I started writing up the checkout instructions.  Big thanks to Zuzel for her invaluable (and patient) Django guidance.

Anyhow, if you’re interested in the steps to failure, here they are…

Setting Up (for failure)

I’m running Ubuntu Karmic.  I have Git installed.

  1. First of all, this document is invaluable
  2. Check out django_evolution from SVN to some local directory
  3. Put path to django-evolution into PYTHONPATH (export PYTHONPATH=$PYTHONPATH:/…/django-evolution-read-only/)
  4. Get Python Setuptools (sudo apt-get install python-setuptools)
  5. With easy_install, get nose, paramiko, recaptcha-client, Sphinx (sudo easy_install nose paramiko recaptcha-client sphinx)
  6. Clone the original master Djblets repo (git clone git://github.com/djblets/djblets.git) and cd into the djblets directory
  7. Add davidt’s remote Djblets branch (git remote add davidt git://github.com/davidt/djblets.git)
  8. Fetch davidt’s repo  (git fetch davidt)
  9. Track davidt’s extension branch for djblets (git branch extensions davidt/extensions)
  10. In djblets root dir, type:  sudo python setup.py develop
  11. This should install Django too.  The ReviewBoard install doc says to install from Subversion, but this will do for now.
  12. Clone the ReviewBoard git repo (git clone git://github.com/reviewboard/reviewboard.git) and cd into it
  13. Add davidt’s remote Reviewboard branch (git remote add davidt git://github.com/davidt/reviewboard.git)
  14. Fetch davidt’s repo  (git fetch davidt)
  15. Track davidt’s extensions branch of reviewboard (git branch extensions davidt/extensions)
  16. Switch to extensions branch (git checkout extensions)
  17. At this point, reviewboard/reviewboard/extensions should have files in it.
  18. Run python ./contrib/internal/prepare-dev.py
  19. Manually create the directory reviewboard/reviewboard/htdocs/media/ext.  ReviewBoard expects it and will complain if it isn’t there.
  20. Run ./contrib/internal/devserver.sh
  21. Go to localhost:8080

If you’re like me, you’ll see an error about “no module named urls”.  This is because the extension branch of Djblets is missing urls.py in its djblets/log directory.  As you can see, urls.py exists in the master Djblets branch.

So I’ve accidentally got it running.  But this version of ReviewBoard I’ve created is like Frankenstein’s monster – it’s a sad, stitched together monstrosity that should probably never see the light of day.

I tried to merge the master branch of Djblets and ReviewBoard into the extensions branch, and then realized that I had no idea what I was doing.  The last guy you want doing a merge is a guy who doesn’t know what he’s doing.

So that’s where I stand with it.  Kind of a sad affair, really.  I’ve posted on the ReviewBoard developer mailing list asking for some guidance, but all is quiet so far.

In the meantime, I’ll continue studying the code when I can.  The extensions stuff is actually pretty far along, in my opinion – but what do I know, I’ve never built one.  I’ll also take a peek at some other examples of extension frameworks to see what I can learn about implementation (that’s right – I’m talking about you, WordPress).

I’ll let you know what I find.

Turning Peer Code Review into a Game

A little while back, I wrote about an idea that a few of us had been bouncing around:  peer code review achievements.

It started out as a bit of Twitter fun – but now it has evolved, and actually become a contender for my Master’s research.

So I’ve been reading up on reputation and achievement systems, and it’s been keeping me up at night.  I’ve been tossing and turning, trying to figure out a way of applying these concepts to something like ReviewBoard.  Is there a model that will encourage users to post review requests early and often?  Is there a model that will encourage more thorough reviews from other developers?

An idea eventually sprung to mind…

Idea 1:  2 Week Games

In a speech he gave a few years ago, Danc of Lost Garden placed the following bet:

  • If an activity can be learned…
  • If the player’s performance can be measured…
  • If the player can be rewarded or punished in a timely fashion…
  • Then any activity that meets these criteria can be turned into a game.

Let’s work off of this premise.

Modeled on the idea of a sprint or iteration, let’s say that ReviewBoard has “games” that last 2 weeks.

In a game, users score points in the following way:

  • Posting a review request that eventually gets committed gives the author 1 point
  • A review request that is given a ship-it, without a single defect found, gives the author The 1.5 Multiplier on their total points.  The 1.5 Multiplier can be stolen by another player if they post a review request that also gets a ship-it without any defects being found.
  • Any user can find/file defects on a review request
  • A defect must be “confirmed” by the author, or “withdrawn” by the defect-finder.
  • After a diff has been updated, “confirmed” defects can be “fixed”.  Each fixed defect gives the defect-finder and author 1 point each.

After two weeks, a final tally is made, achievements / badges are doled out, and the scores are reset.  A new game begins.  Users can view their point history and track their performance over time.
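To make the rules concrete, here’s a toy tally for one two-week game.  The event-log format and player names are invented – none of this exists in ReviewBoard:

```python
from collections import defaultdict

# Each event: (kind, author, defect_finder). The finder is None for
# events that aren't about defects.
events = [
    ("landed_review_request", "alice", None),  # +1 to the author
    ("clean_ship_it", "bob", None),            # bob takes the 1.5 multiplier
    ("fixed_defect", "alice", "carol"),        # +1 each to author and defect-finder
    ("fixed_defect", "alice", "bob"),
    ("clean_ship_it", "carol", None),          # carol steals the multiplier from bob
    ("landed_review_request", "carol", None),
]

points = defaultdict(int)
multiplier_holder = None

for kind, author, finder in events:
    if kind == "landed_review_request":
        points[author] += 1
    elif kind == "fixed_defect":
        points[author] += 1
        points[finder] += 1
    elif kind == "clean_ship_it":
        # A clean ship-it only transfers the multiplier here; the eventual
        # commit shows up as a separate landed_review_request event.
        multiplier_holder = author

# Apply the 1.5 multiplier to whoever holds it when the game ends.
totals = {player: score * (1.5 if player == multiplier_holder else 1.0)
          for player, score in points.items()}
print(totals)  # scores reset after this, and a new game begins
```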

Granted, this game is open to cheating.  But so is Monopoly.  I can reach into the Monopoly bank and grab $500 without anybody noticing.  It’s up to me not to do that, because it invalidates the game.  In this case, cheating would only result in bad morale and a poorer piece of software.  And since scores are reset every two weeks, what’s the real incentive to cheat?

Idea 2:  Track My Performance

I’ve never built a reputation system before – but Randy Farmer and Bryce Glass have.  They’ve even written a book about it.

Just browsing through their site, I’m finding quotes that suggest there are some potential problems with my two-week game idea.  In particular, I have not considered the potentially harmful effects of displaying “points” publicly on a leaderboard.

According to Farmer / Glass:

It’s still too early to speak in absolutes about the design of social-media sites, but one fact is becoming abundantly clear: ranking the members of your community – and pitting them one-against-the-other in a competitive fashion – is typically a bad idea. Like the fabled djinni of yore, leaderboards on your site promise riches (comparisons! incentives! user engagement!!) but often lead to undesired consequences.

They go into more detail here.

Ok, so let’s say that they’re right.  Then how about instead of pitting the reviewers against one another, I have the reviewers compete against themselves?

Ever played Wii Sports?  It tracks player performance on various games and displays it on a chart.  It’s really easy to see / track progress over time.  It’s also an incentive to keep performance up – because nobody wants to go below the “Pro” line.

So how about we just show users a report of their performance over fixed time intervals…with fancy jQuery charts, etc?

So what?

Are either of these ideas useful?  Would they increase the number of defects found per review request?  Would they increase the frequency and speed of reviews?  Would they improve user perception of peer code review?  Or would they simply be ignored?  Could they even harm a team of developers?  What are the benefits and drawbacks?

If anything, it’d give ReviewBoard some ability to record metrics, which is handy when you want to show the big boss how much money you’re saving with code review.

Might be worth looking into.  Thoughts?