
My Experiment Apparatus: The Assignments, Rubrics and Mock-Ups

If you’ve read about my experiment, you’ll know that there were two Python programming assignments that my participants worked on, and a rubric for each assignment.

There were also 5 mock-up submissions for each assignment that I had my participants grade.  I developed these mock-ups, after a few consultations with some of our undergraduate instructors, in order to get a sense of the kind of code that undergraduate programmers tend to submit.

I’ve decided to post these materials to this blog, in case somebody wants to give them a once-over.  Just thought I’d open my science up a little bit.

So here they are:

Flights and Passengers

Cards and Decks

Peruse at your leisure.

Research Experiment: A Recap

Before I start diving into results, I’m just going to recap my experiment so we’re all up to speed.

I’ll try to keep it short, sweet, and punchy – but remember, this is a couple of months of work right here.

Ready?  Here we go.

What I was looking for

A quick refresher on what code review is

Code review is like the software industry equivalent of a taste test.  A developer makes a change to a piece of software, puts that change up for review, and a few reviewers take a look at that change to make sure it’s up to snuff.  If some issues are found during the course of the review, the developer can go back and make revisions.  Once the reviewers give it the thumbs up, the change is put into the software.

That’s an oversimplified description of code review,  but it’ll do for now.

So what?

What’s important to know is that it works.  Jason Cohen showed that code review reduces the number of defects that enter the final software product.  That’s great!

But there are some other cool advantages to doing code review as well.

  1. It helps to train up new hires.  They can lurk during reviews to see how more experienced developers look at the code.  They get to see what’s happening in other parts of the software.  They get their code reviewed, which means direct, applicable feedback.  All good things.
  2. It helps to clean and homogenize the code.  Since the code will be seen by their peers, developers are generally compelled to not put up “embarrassing” code (or, if they do, to at least try to explain why they did).  Code review is a great way to compel developers to keep their code readable and consistent.
  3. It helps to spread knowledge and good practices around the team.  New hires aren’t the only ones to benefit from code reviews.  There’s always something you can learn from another developer, and code review is where that will happen.  And I believe this is true not just for those who receive the reviews, but also for those who perform the reviews.

That last one is important.  Code review sounds like an excellent teaching tool.

So why isn’t code review part of the standard undergraduate computer science education?  Greg and I hypothesized that the reason that code review isn’t taught is because we don’t know how to teach it.

I’ll quote myself:

What if peer code review isn’t taught in undergraduate courses because we just don’t know how to teach it?  We don’t know how to fit it into a curriculum that’s already packed to the brim.  We don’t know how to get students to take it seriously.  We don’t know if there’s pedagogical value, let alone how to show such value to the students.

The idea

Inspired by work by Joordens and Pare, Greg and I developed an approach to teaching code review that integrates itself nicely into the current curriculum.

Here’s the basic idea:

Suppose we have a computer programming class.  Also suppose that after each assignment, each student is randomly presented with anonymized assignment submissions from some of their peers.  Students will then be asked to anonymously peer grade these assignment submissions.

Now, before you go howling your head off about the inadequacy / incompetence of student markers, or the peerScholar debacle, read this next paragraph, because there’s a twist.

The assignment submissions will still be marked by TAs as usual.  The grades that a student receives from their peers will not directly affect their mark.  Instead, the student is graded on how well they graded their peers: the peer reviews that a student completes will be compared with the grades that the TAs delivered.  The closer a student’s grades are to the TA’s, the better the mark they get on their “peer grading” component (which is distinct from the mark they receive for their programming assignment).
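To make that concrete, here’s a minimal sketch of how such a “peer grading” mark could be computed.  Everything in it is my illustrative assumption for this post – the 10-point scale and the linear penalty for distance from the TA’s mark – not necessarily the scheme we’d actually settle on.

```python
# Hypothetical sketch: score a student's peer grading by how closely
# their marks track the TA's marks on the same submissions.
# The 10-point scale and linear penalty are illustrative assumptions.

def peer_grading_score(peer_marks, ta_marks, scale=10):
    """Return a score in [0, 1]; 1.0 means perfect agreement with the TA."""
    per_submission = [
        1.0 - abs(peer - ta) / scale  # linear penalty for disagreement
        for peer, ta in zip(peer_marks, ta_marks)
    ]
    return sum(per_submission) / len(per_submission)

# A student whose marks stay close to the TA's scores well...
print(peer_grading_score([7, 8, 5, 9, 6], [8, 8, 4, 9, 7]))  # ~0.94
# ...while rubber-stamping everything 10/10 does not.
print(peer_grading_score([10] * 5, [8, 8, 4, 9, 7]))         # ~0.72
```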

Now, granted, the idea still needs some fleshing out, but already, we’ve got some questions that need answering:

  1. Joordens and Pare showed that for short written assignments, you need about 5 peer reviews to predict the mark that the TA will give.  Is this also true for computer programming assignments?
  2. Grading students based on how much their peer grading matches TA grading assumes that the TA is an infallible point of reference.  How often do TAs disagree amongst themselves?
  3. Would peer grading like this actually make students better programmers?  Is there a significant difference in the quality of their programming after they perform the grading?
  4. What would students think of peer grading computer programming assignments?  How would they feel about it?

So those were my questions.

How I went about looking for the answers

Here’s the design of the experiment in a nutshell:

Writing phase

I have a treatment group and a control group, both composed of undergraduate students.  After writing a short pre-experiment questionnaire, participants in both groups will have half an hour to work on a short programming assignment.  The treatment group will then have another half an hour to peer grade some submissions for the assignment they just wrote.  The submissions that they mark were mocked up by me, and are the same for each participant in the treatment group.  The control group will not perform any grading – instead, they will do an unrelated vocabulary exercise for the same amount of time.

Then, participants in both groups will have another half an hour to work on the second short programming assignment.  Finally, participants in the treatment group will fill out a short post-experiment questionnaire about their peer grading experience.  Then the participants are released.

Here’s a picture to help you visualize what you just read.

Tasks for each group in my experiment.

So now I’ve got two piles of submissions – one for each assignment – with 30 submissions in each pile, 60 in total.  I add my 5 mock-ups to each pile.  That brings each pile to 35 submissions, and 70 submissions in total.

Marking phase

I assign ID numbers to each submission, shuffle them up, and hand them off to some graduate-level TAs that I hired.  The TAs will grade each assignment using the same marking rubric that the treatment group used to peer grade.  They will not know whether they are grading a treatment group submission, a control group submission, or a mock-up.

Choosing phase

After the grading is completed, I remove the mock-ups, and pair up submissions across the two piles based on who wrote them.  So now I’ve got 30 pairs of submissions: one pair for each student.  I then ask my graders to look at each pair, knowing that both submissions were written by the same student, to choose which one they think is better coded, and to rate and describe the difference (if any) between the two.  This is an attempt to catch improvements in the treatment group’s code that might not be captured by the marking rubric.

So that’s what I did

So everything you’ve just read is what I’ve just finished doing.

Once the submissions are marked, I’ll analyze the marks for the following:

  1. Comparing the two groups, is there any significant improvement in the marks from the first assignment to the second in the treatment group?
    1. If there was an improvement, on which criteria?  And how much of an improvement?
  2. How did the students do at grading my mock-ups?  How similar were their peer grades to what the TAs gave?
  3. How much did my two graders agree with one another?
  4. During the choosing phase, did my graders tend to choose the second assignment over the first assignment more often for the treatment group?

And I’ll also analyze the post-experiment questionnaire to get student feedback on their grading experience.
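For the first of those comparisons, the analysis could look something like the sketch below.  The mark lists are made-up placeholders, and the independent-samples t-test is just one reasonable choice – the real data may call for a non-parametric test instead.

```python
# Sketch of analysis (1): did per-student marks improve more from
# assignment 1 to assignment 2 in the treatment group than in the
# control group? All marks here are made-up placeholders.
from scipy import stats

treatment_a1 = [62, 70, 55, 80, 66]
treatment_a2 = [71, 78, 60, 85, 70]
control_a1 = [64, 69, 58, 79, 65]
control_a2 = [66, 70, 57, 80, 68]

# Per-student improvement scores.
treat_gain = [b - a for a, b in zip(treatment_a1, treatment_a2)]
ctrl_gain = [b - a for a, b in zip(control_a1, control_a2)]

# Compare the two groups' improvements.
t, p = stats.ttest_ind(treat_gain, ctrl_gain)
print(f"t = {t:.2f}, p = {p:.3f}")
```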

Ok, so that’s where I’m at.  Stay tuned for results.

Lessons from peerScholar: An Approach to Teaching Code Review

We Don’t Know How To Teach Code Review

If you go to my very first blog post about code review, you’ll discover what my original research question was:

Code reviews. They can help make our software better. But how come I didn’t learn about them, or perform them in my undergrad courses?  Why aren’t they taught as part of the software engineering lifecycle right from the get-go?  I learn about version control, but why not peer code review?  Has it been tried in the academic setting?  If so, why hasn’t it succeeded and become part of the general CS curriculum?  If it hasn’t been tried, why not?  What’s the hold up?  What’s the problem?

I have mulled the question for months, and read several papers that discuss different models for introducing code review into the classroom.

But I’m no teacher.  I really don’t know what it’s like to run a university level course.  Thankfully, two course instructors from our department gave their input on the difficulty of introducing peer code review in the classroom.  Here’s the first:

The problem is that it is completely un-assessable.  You can’t get the students to hand in reports from their inspection, and grade them on it, because they quickly realise it’s easier to fake their reports than it is to do a real code inspection.  And the assignment never gets them to understand and internalize the real reasons for doing code inspection – here they just do it to jump through an artificial hoop set by the course instructor.

What we really need to do is to assess code quality, and let them figure out for themselves how the various tools we show them (e.g. test-case first, code inspection, etc) will help them achieve that quality. Better still, we give them ways of measuring directly how the various tools they use affect code quality for each assignment. But I haven’t thought enough yet about how to achieve this.

So, I’ve long since dropped the idea of a specific marked assignment on code inspections, but still teach inspection in all of my SE courses. I need to find a way to teach it so that the students themselves understand why it’s so useful.

(From Steve Easterbrook, commenting on this post)

And here’s the second:

1. How many different tasks can we ask students to do on a 3-week assignment? I think students should learn to use an IDE, a debugger, version control, and a ticket system. We have been successful in getting students to use version control because that’s the only way they can submit an assignment. We have had mixed success getting students to use IDEs and debuggers, partly because it is hard to assign marks for their use. We have been even less successful in convincing students to use tickets because a 3-week assignment isn’t big enough or long enough to make tickets essential.

2. If the focus of my course is teaching operating systems, how much time (and grades) should I devote to software development tools and practices that aren’t centered on operating systems?

(From Karen Reid, commenting on this post)

All of this swirls around a possible answer that Greg Wilson and I have been approaching since September:

What if peer code review isn’t taught in undergraduate courses because we just don’t know how to teach it?  We don’t know how to fit it into a curriculum that’s already packed to the brim.  We don’t know how to get students to take it seriously.  We don’t know if there’s pedagogical value, let alone how to show such value to the students.

If that’s really the problem… Greg and I may have come up with a possible solution.

But First, Some Background

In 2008, Steve Joordens and Dwayne Pare published “Peering into Large Lectures: Examining Peer and Expert Mark Agreement Using peerScholar, an Online Peer Assessment Tool”.

It’s a good read, but in the interests of brevity, I’ll break it down for you:

  1. Joordens and Pare are both at the University of Toronto Scarborough, in the Psych Department
  2. Psych classes (especially for the first year) are large.  For large classes, it is generally difficult to introduce writing assignments simply due to the sheer volume of writing that would need to be marked by the TAs.  Alternatives (like multiple-choice tests) are often used to counteract this.
  3. But writing is important.
  4. The idea:  what if we let students grade one another?  There’s research showing the benefits of peer evaluation for writing assignments.  So let’s see what kind of grades peers give to one another.
  5. A tool is built (peerScholar), and an experiment is run:  after submitting their writing assignments, show students 5 submissions from other students, and have them grade the work (with specific grading instructions from the instructor).  Then, compare the grades that the students gave with grades from the TAs.
  6. A significant positive correlation was found between averaged TA marks and averaged peer marks (there’s a toy illustration of this check just after this list).  More statistical analysis showed that there was no significant difference between the agreement levels of TA and peer markers.
  7. To ensure repeatability, a second experiment is run – similar to the first.  Except, this time, students who receive the marks from their peers are able to “mark the marker” and flag any marks that seem suspicious (a 1/10, for example, if all the other students and the TA gave something closer to a 7/10).
  8. It looks good – peer and TA marks agreed even more closely this time.
  9. Conclusion:  the average grade given by a set of peer markers was similar to the grade given by the TAs, in terms of both overall level and rank ordering of assignments.
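Here’s a toy version of the agreement check from point 6, with invented marks, just to show the shape of the analysis; scipy’s pearsonr stands in for whatever statistics they actually ran.

```python
# Toy version of the peerScholar agreement check: correlate each
# assignment's averaged peer mark with the TA's mark. Marks invented.
from statistics import mean
from scipy.stats import pearsonr

peer_marks = [  # five peer marks (out of 10) for each of four essays
    [7, 8, 6, 7, 8],
    [4, 5, 5, 3, 4],
    [9, 8, 9, 10, 9],
    [6, 6, 7, 5, 6],
]
ta_marks = [7, 4, 9, 6]

avg_peer = [mean(marks) for marks in peer_marks]
r, p = pearsonr(avg_peer, ta_marks)
print(f"r = {r:.2f}, p = {p:.3f}")  # high r = peer averages track the TA
```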

This is a very interesting result.  Why can’t we apply it to courses in a computer science department?  What if students started marking each other’s code?

What they’d be doing would be called code review.

The Idea

Let’s modify Joordens and Pare’s model a little bit.

Let’s say I’m teaching an undergraduate computer science course where students tend to do quite a bit of coding.  Traditionally, source code written by students would be collected through some mechanism or another, be marked by TAs, and then be returned to students after a few weeks.

What if, after all of the submissions have been collected, each student had to anonymously grade 5 submissions, chosen randomly from the system (with the only stipulation being that students cannot grade their own work)?

But here’s the twist:

Instead of just calculating a mark for students based on the peer reviews that they get, how about we mark the students based on the reviews that they give – specifically, based on how close they are to generating the same marks that the TAs give?

So now a student’s mark will be partially based on how well they are able to review code.

Questions / Answers (or Concerns / Freebies)

I can think of a few initial concerns with this idea.

Q: What if the TA makes a huge mistake, or makes an oversight?  They’re not infallible.  How can students possibly make the same mistake / give the same mark?

A: I agree that TAs are not infallible.  Nobody is.  However, if a TA gives a submission a 3/10, and the rest of the students give 9/10s, this is useful information.  It either means that the TA missed something, or it might signal that the students in general have not learned something crucial.  In either case, this sort of problem can be easily detected, and sorted out via human intervention.
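As a sketch of what that detection might look like – where the 3-point threshold on a 10-point scale is an arbitrary assumption of mine:

```python
# Flag submissions where the TA's mark sits far from the peer consensus,
# so a human can take a second look. Threshold is an arbitrary assumption.
from statistics import median

def flag_disagreements(ta_marks, peer_marks, threshold=3):
    flagged = []
    for sub_id, ta in ta_marks.items():
        consensus = median(peer_marks[sub_id])
        if abs(ta - consensus) >= threshold:
            flagged.append((sub_id, ta, consensus))
    return flagged

ta = {"s1": 3, "s2": 8}
peers = {"s1": [9, 9, 8, 9, 10], "s2": [7, 8, 8, 9, 7]}
print(flag_disagreements(ta, peers))  # [('s1', 3, 9)]
```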

Q: What if students game the system by just giving their peers all 10/10s, or try to screw each other over by just giving 0/10s?

A: Remember, students are being marked on their ability to review.  If the TAs gave a more appropriate mark and a student behaves as above, that student is going to get a poor reviewing mark.  No harm done to the reviewee.

Q: I’m already swamped.  How can I cram a system like this into my course?

A: I’m one of the developers on MarkUs, a tool that is being used to grade source code for students at the University of Toronto and the University of Waterloo.  It would not be impossible to adapt MarkUs to follow this model.  Through MarkUs, a lot of this idea can be automated.  Besides some possible human intervention for edge cases, I don’t see there being a whole lot of course-admin overhead to get this sort of thing going.  But it does mean a little bit more work for students who have to review the code.

Q: This is nice in theory, but is there any real pedagogical value in this?  And if so, how can I show it to my students?

A: First off, as a recent undergraduate student at UofT, I must say how rare it is to be given the opportunity to read another student’s code.  It just doesn’t happen much.  I would have found it interesting – I’d be able to see the techniques that my peers employed to solve the same problems that I was trying to solve.  It would also give me a good informal measuring stick to see how I rank in the class – and students always want to know where they stand.

Would they learn anything from it though?

That’s a good question.  Would students learn anything from this, and realize the benefits?  Remember – that’s what Steve Easterbrook says was the major stumbling block to introducing peer review…we have to show them that it’s useful.

The Questions

  • How good are students at grading their peers?  How close do they get to the grades that a TA would give?
    • By study year
    • By their perceived programming ability
    • By their perceived programming experience
    • By their programming confidence
  • What happens to students’ ability to review their peers as they perform each review?  Do they get better after each one?  And is there a point where their accuracy gets poorer from fatigue?
  • How many student reviewers are needed to approximate the grade that a TA would give?  (See the sketch after this list for one way to measure this.)
  • How long do students generally take to peer review code? (bonus)
  • How long do graduate students generally take to mark an assignment? (bonus)
  • Do the students actually learn anything from the process?
  • How do the students feel about being graded on their ability to review?
    • Do they think that this process is fair?
    • Do they think that they’re learning anything useful?
    • Do they feel like it is worth their time?
    • Do they enjoy reading other students’ code?
    • If it was introduced into their classes, how would they feel?
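For that “how many reviewers?” question, one approach once the marks exist is a small resampling experiment: draw k peer marks per submission, average them, and watch how the error against the TA’s mark shrinks as k grows.  All the numbers below are invented placeholders, just to show the shape of it.

```python
# Resampling sketch for 'how many reviewers?': sample k peer marks per
# submission and measure the mean absolute error of their average
# against the TA's mark. All marks are invented placeholders.
import random
from statistics import mean

peer_marks = {"s1": [6, 7, 8, 7, 6, 9, 7], "s2": [3, 5, 4, 4, 6, 3, 5]}
ta_marks = {"s1": 7, "s2": 4}

def mean_abs_error(k, trials=1000):
    errors = []
    for _ in range(trials):
        for sub, marks in peer_marks.items():
            sample = random.sample(marks, k)  # k distinct peer marks
            errors.append(abs(mean(sample) - ta_marks[sub]))
    return mean(errors)

for k in range(1, 6):
    print(f"k = {k}: MAE = {mean_abs_error(k):.2f}")  # should shrink as k grows
```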

Lots of questions.  Luckily, it just so happens that I’m a scientist.

The Experiment

First, I mock up (or procure) 10 submissions for a programming assignment that our undergraduates might write.

I then get/convince some graduate students to grade those 10 submissions to the best of their ability, using MarkUs.  These marks are recorded.

I then take a cross-section of our undergraduate student body, and (after a brief survey to determine their opinions of their coding experience/confidence), I get the students to peer review and grade those 10 submissions.  They will be told that their goal is to try to give the same type of marks that a graduate student TA might give.

After the grades are recorded, I take the submission that they reviewed first, and get them to grade it again.  Do they get closer to the TAs’ marks than on their first attempt?

Students are then given a second survey (probably Likert scales) to assess their opinions on the process.  Would it be fair if your ability to grade were part of your mark?  Did you get anything useful out of this?  Did you feel that it was worth your time?  Did you enjoy reading other students’ code?  How would you feel if it was part of your class?  …

The final survey will (hopefully) knock out the last series of questions in my list.  Timing information recorded during marking will help answer the bonus questions.  Analysis of the marks that the students give, in relation to the marks that the TAs give, will hopefully help answer the rest.

What Am I Missing?

Am I missing anything here?  Is there a gaping hole in my thinking somewhere?  Would this be a good, interesting experiment to run?  For those who teach…if my results are encouraging, would you ever try implementing this in your classroom?

And if this was introduced into the classroom…what would happen to student learning?  What would happen to marks?  How would instructors like it?

So, what do you think?  I’m all ears.