Lessons from peerScholar: An Approach to Teaching Code Review

We Don’t Know How To Teach Code Review

If you go to my very first blog post about code review, you’ll discover what my original research question was:

Code reviews. They can help make our software better. But how come I didn’t learn about them, or perform them in my undergrad courses?  Why aren’t they taught as part of the software engineering lifecycle right from the get-go?  I learn about version control, but why not peer code review?  Has it been tried in the academic setting?  If so, why hasn’t it succeeded and become part of the general CS curriculum?  If it hasn’t been tried, why not?  What’s the hold up?  What’s the problem?

I have mulled the question for months, and read several papers that discuss different models for introducing code review into the classroom.

But I’m no teacher.  I really don’t know what it’s like to run a university level course.  Thankfully, two course instructors from our department gave their input on the difficulty of introducing peer code review in the classroom.  Here’s the first:

The problem is that it is completely un-assessable. You can’t get the students to hand in reports from their inspection, and grade them on it, because they quickly realise it’s easier to fake their reports than it is to do a real code inspection. And the assignment never gets them to understand and internalize the real reasons for doing code inspection – here they just do it to jump through an artificial hoop set by the course instructor.

What we really need to do is to assess code quality, and let them figure out for themselves how the various tools we show them (e.g. test-case first, code inspection, etc) will help them achieve that quality. Better still, we give them ways of measuring directly how the various tools they use affect code quality for each assignment. But I haven’t thought enough yet about how to achieve this.

So, I’ve long since dropped the idea of a specific marked assignment on code inspections, but still teach inspection in all of my SE courses. I need to find a way to teach it so that the students themselves understand why it’s so useful.

(From Steve Easterbrook, commenting on this post)

And here’s the second:

1. How many different tasks can we ask students to do on a 3-week assignment? I think students should learn to use an IDE, a debugger, version control, and a ticket system. We have been successful in getting students to use version control because that’s the only way they can submit an assignment. We have had mixed success getting students to use IDEs and debuggers, partly because it is hard to assign marks for their use. We have been even less successful in convincing students to use tickets because a 3-week assignment isn’t big enough or long enough to make tickets essential.

2. If the focus of my course is teaching operating systems, how much time (and grades) should I devote to software development tools and practices that aren’t centered on operating systems?

(From Karen Reid, commenting on this post)

All of this swirls around a possible answer that Greg Wilson and I have been approaching since September:

What if peer code review isn’t taught in undergraduate courses because we just don’t know how to teach it?  We don’t know how to fit it into a curriculum that’s already packed to the brim.  We don’t know how to get students to take it seriously.  We don’t know if there’s pedagogical value, let alone how to show such value to the students.

If that’s really the problem… Greg and I may have come up with a possible solution.

But First, Some Background

In 2008, Steve Joordens and Dwayne Pare published Peering into Large Lectures:  Examining Peer and Expert Mark Agreement Using peerScholar, an Online Peer Assessment Tool.

It’s a good read, but in the interests of brevity, I’ll break it down for you:

  1. Joordens and Pare are both at the University of Toronto Scarborough, in the Psych Department.
  2. Psych classes (especially for the first year) are large.  For large classes, it is generally difficult to introduce writing assignments simply due to the sheer volume of writing that would need to be marked by the TAs.  Alternatives (like multiple-choice tests) are often used to counteract this.
  3. But writing is important.
  4. The idea:  what if we let students grade one another?  There’s research showing the benefits of peer evaluation for writing assignments.  So let’s see what kind of grades peers give to one another.
  5. A tool is built (peerScholar), and an experiment is run:  after submitting their writing assignments, show students 5 submissions from other students, and have them grade the work (with specific grading instructions from the instructor).  Then, compare the grades that the students gave with grades from the TAs.
  6. A significant positive correlation was found between averaged TA marks and averaged peer marks.  Further statistical analysis shows no significant difference between the agreement levels of TA and peer markers.  (See the sketch after this list.)
  7. To check repeatability, a second experiment is run, similar to the first – except, this time, students who receive the marks from their peers are able to “mark the marker” and flag any marks that seem suspicious (a 1/10, for example, if all the other students and the TA gave something closer to a 7/10).
  8. It looks good – the numbers were closer this time.
  9. Conclusion:  the average grade given by a set of peer markers was similar to the grade given by the TAs in terms of both overall level and rank ordering of assignments.
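
To make point 6 concrete, here’s a minimal sketch of that kind of agreement check in Python – the marks below are invented for illustration, and the paper of course ran a proper statistical analysis:

```python
# Minimal sketch of the agreement check in point 6: correlate the
# averaged TA mark with the averaged peer mark for each submission.
# All of the marks here are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of marks."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Marks out of 10: each essay was marked by 2 TAs and 5 peers.
ta_marks = {"essay1": [7, 8], "essay2": [5, 6], "essay3": [9, 9]}
peer_marks = {"essay1": [8, 7, 7, 9, 8],
              "essay2": [6, 5, 7, 4, 6],
              "essay3": [9, 8, 10, 9, 9]}

essays = sorted(ta_marks)
ta_avg = [mean(ta_marks[e]) for e in essays]
peer_avg = [mean(peer_marks[e]) for e in essays]
print("correlation:", round(pearson_r(ta_avg, peer_avg), 3))
```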

This is a very interesting result.  Why can’t we apply it to courses in a computer science department?  What if students started marking each other’s code?

What they’d be doing would be called code review.

The Idea

Let’s modify Joordens and Pare’s model a little bit.

Let’s say I’m teaching an undergraduate computer science course where students tend to do quite a bit of coding.  Traditionally, source code written by students would be collected through some mechanism or another, be marked by TAs, and then be returned to students after a few weeks.

What if, after all of the submissions have been collected, each student had to anonymously grade 5 submissions, chosen randomly from the system (with the one stipulation that students cannot grade their own work)?

But here’s the twist:

Instead of just calculating a mark for students based on the peer reviews that they get, how about we mark the students based on the reviews that they give – specifically, based on how close the marks they give come to the marks that the TAs give?

So now a student’s mark will be partially based on how well they are able to review code.
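
In code, the scoring might look something like this minimal sketch – the linear penalty and the sample marks are my own assumptions, not a settled design:

```python
# Sketch of the reviewing mark: a reviewer is scored by how closely
# their marks track the TA's marks on the same submissions. The
# linear penalty below is an assumption; any monotone function of
# the deviation would work.

def reviewing_mark(reviewer_marks, ta_marks, scale=10):
    """Full marks for matching the TA exactly; each point of mean
    absolute deviation from the TA costs one point."""
    deviations = [abs(r - t) for r, t in zip(reviewer_marks, ta_marks)]
    mad = sum(deviations) / len(deviations)
    return max(0.0, scale - mad)

# One student's marks on the 5 submissions they reviewed, vs. the TA's.
print(reviewing_mark([7, 5, 9, 6, 8], [8, 5, 9, 4, 8]))  # -> 9.4
```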

Questions / Answers (or Concerns / Freebies)

I can think of a few initial concerns with this idea.

Q: What if the TA makes a huge mistake, or overlooks something?  TAs are not infallible.  How can students possibly make the same mistake / give the same mark?

A: I agree that TAs are not infallible.  Nobody is.  However, if a TA gives a submission a 3/10, and the rest of the students give 9/10s, this is useful information.  It either means that the TA missed something, or signals that the students in general have not learned something crucial.  In either case, this sort of problem can be easily detected and sorted out via human intervention.
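
Here’s a minimal sketch of that detection step – the 3-point threshold is an arbitrary assumption:

```python
# Sketch of the human-intervention trigger: flag any submission where
# the TA's mark sits far from the peer consensus. The 3-point
# threshold is an arbitrary assumption.

from statistics import median

def needs_human_look(ta_mark, peer_marks, threshold=3):
    """True when the TA mark diverges from the peer median by more
    than `threshold` points out of 10."""
    return abs(ta_mark - median(peer_marks)) > threshold

print(needs_human_look(3, [9, 9, 8, 9, 10]))  # True - someone should check
print(needs_human_look(7, [8, 6, 7, 7, 8]))   # False
```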

Q: What if students game the system by just giving their peers all 10/10s, or try to screw each other by just giving 0/10s?

A: Remember, students are being marked on their ability to review.  If the TAs give a more appropriate mark, and a student starts behaving as above, they’re going to get a poor reviewing mark.  No harm done to the reviewee.

Q: I’m already swamped.  How can I cram a system like this into my course?

A: I’m one of the developers on MarkUs, a tool that is being used to grade source code for students at the University of Toronto and the University of Waterloo.  It would not be impossible to adapt MarkUs to follow this model.  Through MarkUs, a lot of this idea can be automated.  Besides some possible human intervention for edge cases, I don’t see there being a whole lot of course-admin overhead to get this sort of thing going.  But it does mean a little bit more work for students who have to review the code.

Q: This is nice in theory, but is there any real pedagogical value in this?  And if so, how can I show it to my students?

A: First off, as a recent undergraduate student at UofT, I must say how rare it is to be given the opportunity to read another student’s code.  It just doesn’t happen much.  I would have found it interesting – I’d be able to see the techniques that my peers employed to solve the same problems that I was trying to solve.  It would give me a good informal measuring stick to see how I rank in the class – and students always want to know how they rank.

Would they learn anything from it though?

That’s a good question.  Would students learn anything from this, and realize the benefits?  Remember – that’s what Steve Easterbrook says was the major stumbling block to introducing peer review…we have to show them that it’s useful.

The Questions

  • How good are students at grading their peers?  How close do they get to the grades that a TA would give?
    • By study year
    • By their perceived programming ability
    • By their perceived programming experience
    • By their programming confidence
  • What happens to students’ ability to review their peers as they perform each review?  Do they get better after each one?  And is there a point where their accuracy gets poorer from fatigue?
  • How many student reviewers are needed to approximate the grade that a TA would give?
  • How long do students generally take to peer review code? (bonus)
  • How long do graduate students generally take to mark an assignment? (bonus)
  • Do the students actually learn anything from the process?
  • How do the students feel about being graded on their ability to review?
    • Do they think that this process is fair?
    • Do they think that they’re learning anything useful?
    • Do they feel like it is worth their time?
    • Do they enjoy reading other students’ code?
    • If it were introduced into their classes, how would they feel?

Lots of questions.  Luckily, it just so happens that I’m a scientist.

The Experiment

First, I mock up (or procure) 10 submissions for a programming assignment that our undergraduates might write.

I then get/convince some graduate students to grade those 10 submissions to the best of their ability, using MarkUs.  These marks are recorded.

I then take a cross-section of our undergraduate student body and (after a brief survey to determine their opinions of their own coding experience/confidence) get the students to peer review and grade those 10 submissions.  They will be told that their goal is to try to give the same type of marks that a graduate student TA might give.

After the grades are recorded, I take the submission that they reviewed first, and get them to grade it again.  Do they get closer to the TAs’ marks than on their first attempt?

Students are then given a second survey (probably Likert scales) to assess their opinions on the process.  Would it be fair if their ability to grade were part of their mark?  Did they get anything useful out of this?  Did they feel that it was worth their time?  Did they enjoy reading other students’ code?  How would they feel if it were part of their class?  …

The final survey will (hopefully) knock out the last series of questions in my list.  Timing information recorded during marking will help answer the bonus questions.  Analysis of the marks that the students give, in relation to the marks that the TAs give, will hopefully help answer the rest.
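
As one example, the “how many reviewers?” question could be attacked by resampling the recorded marks – in this sketch, the marks and the mean-absolute-error measure are both assumptions on my part:

```python
# Sketch of one way to attack the "how many reviewers?" question:
# average the marks of k randomly drawn peer reviewers and watch the
# error against the TA mark shrink as k grows. Marks are invented.

import random
random.seed(0)

ta_mark = 7
peer_marks = [8, 6, 9, 7, 5, 8, 7, 6, 9, 7]  # hypothetical peer marks

def mean_abs_error(k, trials=1000):
    """Mean absolute error of a k-reviewer average vs. the TA mark."""
    total = 0.0
    for _ in range(trials):
        sample = random.sample(peer_marks, k)
        total += abs(sum(sample) / k - ta_mark)
    return total / trials

for k in range(1, 6):
    print(f"{k} reviewer(s): mean error {mean_abs_error(k):.2f}")
```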

What Am I Missing?

Am I missing anything here?  Is there a gaping hole in my thinking somewhere?  Would this be a good, interesting experiment to run?  For those who teach…if my results are encouraging, would you ever try implementing this in your classroom?

And if this was introduced into the classroom…what would happen to student learning?  What would happen to marks?  How would instructors like it?

So, what do you think?  I’m all ears.

19 thoughts on “Lessons from peerScholar: An Approach to Teaching Code Review”

  1. Pingback: The Third Bit » Blog Archive » What’s Wrong With This Plan?

  2. David Wolever

    That sounds cool, Mike. I’m looking forward to seeing the results.

    The only thing I’d be worried about is that, if I’m reviewing other people’s code and being assigned a grade based on how my mark compares with the TA’s mark, I would no longer be grading based on “how good I think the code is”, but rather “what I think the TA will think about the code.” In my case, this is because I’m used to doing “real world” code reviews… And, in the real world, 90% isn’t good enough (as Greg mentioned in his PyCon talk), and I’d be reviewing student code accordingly.

    Of course, if TAs started grading that way too… But, no, that’s probably a bit too much to hope for 😉

  3. Severin

    If I understand your idea correctly, then students would grade other students’ work after they’ve worked on the very same assignment for a couple of months (ahem, days? weeks?). 🙂 The following questions come to mind:

    Will students have digested the assignment task thoroughly enough to be able to do the same quality “grading” as a TA would be doing? What if students missed some subtle points of the assignment entirely? Code review/grading will largely depend on the students knowing the pitfalls of the assignment correctly.

    With respect to your planned study (this is probably related to my earlier concern): How will your subjects learn about the task of the assignment? Will they get briefed as to what the assignment was about and would proceed immediately with grading/reviewing?

    I assume you will be providing students with some guidance (similar to the one grad students get from the instructor) as to what to look for. If this will be provided, how will this influence the overall outcome of your study?

    I really like the general idea. It would be great to see something along those lines in an undergrad curriculum at some point in the future. To me, incorporating code review into the undergrad curriculum would be most beneficial if students have to use it in more than one course. It’s like version control: initially one (the student) doesn’t really see the benefit. But when doing collaborative work in upper courses it becomes an invaluable aid. I could imagine that students would appreciate the value of code reading/review eventually.

    Great work! Looking forward to your follow-up posts!

  4. Rich

    Great idea.

    You mentioned as a starting point “mock up or procure” the code. Don’t “procure” actual student code unless you go through an IRB process to get formal approval from those students. Of course, you will have to get IRB approval to use students as subjects for the grading part. This is just a “heads up” in case you are not familiar with the IRB process. I’m speaking from the US side of the border so your rules may differ, but I suggest looking into it. It would be a shame to do the experiment and then not be allowed to publish the results.

  5. David Wolever

    “Code review/grading will largely depend on the students knowing the pitfalls of the assignment correctly.”

    I’d call that a feature, not a bug 😉

  6. Mike

    Thank you all for the input!

    @David/@Lorin:

    Thanks for bringing up the Keynesian Beauty Contest. I’ve been wrestling with that problem a bit too.

    My thinking, though, is that if students are learning how TAs tend to mark code, this is *still very useful* pedagogically.

    For the undergraduate courses that I’m thinking about, TAs tend to mark student code with a rubric marking scheme. Assuming TAs know “good code” when they see it, if students can learn what kind of code they must write for a TA to give them a high grade for a given rubric criterion, will it not encourage them to write similar code for their own marking benefit?

    So maybe it really falls on the TAs’ shoulders. It’s up to them to set the marking standards – and hopefully “what the TA will think about the code” will approach “what the code really is”.

  7. Mike

    @Severin:

    > Will students have digested the assignment task thoroughly enough to be able to do the same quality “grading” as a TA would be doing?

    I don’t know. Part of my experiment is to see if they can *learn* to do the same quality of grading.

    > What if students missed some subtle points of the assignment entirely? Code review/grading will largely depend on the students knowing the pitfalls of the assignment correctly.

    Then their reviews will be poorer, and their reviewing mark will probably be lower. But the same goes for their own code, for that matter.

    > How will your subjects learn about the task of the assignment? Will they get briefed as to what the assignment was about and would
    > proceed immediately with grading/reviewing?

    The experiment is still being designed, but yes, I imagine the graders should know what the assignment is all about – much like the students and markers would. And then yes, they would begin grading immediately.

    > I assume you will be providing students with some guidance (similar to the one grad students get from the instructor) as to what to look for.
    > If this will be provided, how will this influence the overall outcome of your study?

    Good question. I imagine it’d be a lot like MarkUs…there’d be criteria, and each criterion level would have a description as to what is expected for that mark to apply.

    Thanks for the input, Sev!

  8. Mike

    @Rich:

    Yep, I’m writing up the research ethics paperwork as we speak. The submitted code will probably have to be mocked up – it’d just be too hairy, and take too long, to cut through the red tape to get some real stuff.

    -Mike

  9. Severin

    >> How will your subjects learn about the task of the assignment? Will they get briefed as to what the assignment was about and would
    >> proceed immediately with grading/reviewing?

    > The experiment is still being designed, but yes, I imagine the graders should know what the assignment is all about – much like the students and markers would.
    > And then yes, they would begin grading immediately.

    Note: I’m speaking here of what I’ve experienced when working on an assignment:

    If the assignment takes significant effort or is significantly complex, then I fairly often noticed, after spending some time working on the assignment, that I hadn’t understood the task(s) in their entirety. Even after several read-the-specs, read-the-code, work-on-the-implementation, read-the-specs-and-go-to-the-code-again cycles, I was unsure as to what the instructors wanted us to be aware of. Then at some point, having had some “distance”, you have your personal “aha” moment as to what it is all about. It could just be me, but if the experiment is set up so that student subjects read the assignment specs and immediately start reviewing other students’ code after that, I anticipate different outcomes than if students had worked on the assignment for a couple of days and, hence, had a clearer idea and did better code review.

    Of course, this is a rather experiment-specific concern. But if you are trying to assess students’ ability to do code review after having worked on an assignment, your experiment should reflect this. I could have gotten something wrong. 🙂

    >> I assume you will be providing students with some guidance (similar to the one grad students get from the instructor) as to what to look for.
    >> If this will be provided, how will this influence the overall outcome of your study?

    > Good question. I imagine it’d be a lot like MarkUs…there’d be criteria, and each criterion level would have a description as to what is expected for that mark to apply.

    Ok, but I see a little contradiction here. Either one doesn’t give students hints as to what to look for when reviewing, or students get hints and will be able to potentially compensate for what they might have missed while working on the assignment when doing reviews. I.e. they’d have a chance to do a good review but get weaker marks for their assignment, which, still, could make sense. The question is, what is of most interest to you when conducting this experiment?

    Interesting stuff!

  10. Chris

    I assume you are assigning a subset of random selections from the pool of solutions for review.
    If so, what order are you presenting them in?

    It would be interesting to break your test subjects into three groups based on the initial TA markings, then present the solutions in the following orders for review:
    * Random
    * Decreasing TA marks
    * Increasing TA marks

    Then ask the students if they feel they were able to learn by reviewing the solutions.

    I would be interested to see how the *quality* of the solutions reviewed by the student will help them learn. I could see how reviewing only solutions that the TAs scored with failing grades could be detrimental and a waste of time.

    In first-year engineering programming courses – huge volume of submitted assignments, not a lot of good ones, perhaps lots of plagiarism – we had the opportunity to look at the suggested solution (used for marking) in tutorials. The professor would simply comment on areas of common weaknesses or strengths of the class as a whole, if the assignment was reviewed at all.

    How are you going to provide experience and guidance?
    For example, a student may not be aware that using `strncpy()` is preferable to `strcpy()` with respect to security.
    How could you expect them to apply that knowledge to their reviews if they couldn’t apply it in their solutions, as @Severin has mentioned?

    There seems to be a lot of knowledge rolling into an overall scalar mark.
    Perhaps you would be more interested in knowing the students’ thoughts and method of arriving at the mark. If that is the case, allow students to annotate a line with +/- marks (kind of like social network voting – i.e. thumbs up/down) with the option of adding textual comments. Then sum them all up for a final result. Two 6/10s may have the same final mark, but could have different paths of arriving there. Of course, this is much more complex and would be overkill if you are only interested in the deviation from the TA’s grade.

    It would be neat, then, to email the student back their assignment with their classmates’ +/- scores for each line and any comments. This has risk involved too, since student-generated comments could be wrong, misleading, or vulgar.

    By attributing comments to classmates instead of leaving them anonymous (a privacy issue), you may be able to add accountability to such a system.

  11. Chris Siebenmann

    I think that if you do the code review straight, as proposed, you are
    effectively testing for two things at once; you’re testing both the
    ability of students to do good code reviews and their ability to fully
    understand the dark corners of the assignment. If I were a student, I
    would feel kind of grumpy about effectively getting hit twice if I
    didn’t fully understand the assignment; I’d lose marks on my solution
    and then I’d lose more marks on bad code reviews (because I would miss
    problems that the TAs would see).

    (I agree that having students miss important things about the assignment
    even in code review means that there is a problem, but that information
    seems more like something for the instructor to know than something to
    cost the students yet more marks.)

    My suggestion is that every student doing reviews should get a cheat
    sheet of ‘common mistakes to look for’. This shouldn’t cover everything
    and wouldn’t reduce code review to a mechanical process, but hopefully
    would get much closer to only measuring how well the students can review
    code and see things in it. The cheat sheet might be drawn up by the TAs
    after marking the assignments or it might be prepared in advance based
    on what the instructor is expecting people to have problems with.

    Also, it strikes me that it might be interesting to have students review
    their own assignment at the end of the review process. Having gone
    through a series of code reviews of other people’s code, have they
    learned enough to see the flaws in their own code? (My naive feeling
    is that this is not gameable, except to the extent that if you submit
    a terrible assignment you can at least get some marks back for being
    able to admit how terrible it is.)

  12. Mike

    @Severin:

    Good point, re: the time to ponder an assignment. Hadn’t considered that.

    Not sure how I could account for that – unless I use a real-world assignment, and my participants are students from a course that recently gave it (i.e., they’re already familiar with the assignment).

    > Ok, but I see a little contradiction here. Either one doesn’t give students hints
    > as to what to look for when reviewing, or students get hints and will be able to
    > potentially compensate for what they might have missed while working on the assignment
    > when doing reviews. I.e. they’d have a chance to do a good review but get weaker
    > marks for their assignment, which, still, could make sense. The question is,
    > what is of most interest to you when conducting this experiment?

    Another good question.

    What I’m most interested in is this: does this kind of peer grading cause students to learn something measurably useful?

    Thanks again for the input!

  13. Mike

    @Chris:

    > It would be interesting to break your test subjects into three groups based on the
    > initial TA markings, then present the solutions in the following orders for review:
    > * Random
    > * Decreasing TA marks
    > * Increasing TA marks

    I agree, that would be interesting. 🙂 That is: what are the learning effects (if any) of viewing
    progressively better code? Worse code?

    > I could see how reviewing only solutions that the TAs scored with failing grades
    > could be detrimental and a waste of time.

    I agree.

    > How are you going to provide experience and guidance? For example, a student may not be aware
    > that using `strncpy()` is preferable to `strcpy()` with respect to security. How could you expect them
    > to apply that knowledge to their reviews if they couldn’t apply it in their solutions,
    > as @Severin has mentioned?

    Another good question. Keep your eyes peeled for another blog post – I may have come up with
    a solution…

    > There seems to be a lot of knowledge rolling into an overall scalar mark. Perhaps you would be
    > more interested in knowing the students’ thoughts and method of arriving at the mark.

    I probably didn’t make it explicit, but in my head, the grading was done against a rubric. So
    instead of x out of 10, it’d be a series of marks from 0 to 4 based on various marking criteria.
    At least then, we have some more granularity, and insight into *why* certain marks were given.
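
    In code, a rubric-based review might look something like this sketch – the criteria here are invented, just to show the shape of the data:

    ```python
    # Sketch of rubric-based review marks: each criterion gets a level
    # from 0 to 4 instead of a single scalar mark. Criteria are invented.

    CRITERIA = ["docstrings", "correctness", "style"]

    # One reviewer's marks for one submission, each out of 4.
    review = {"docstrings": 3, "correctness": 4, "style": 2}

    total = sum(review[c] for c in CRITERIA)
    print(f"{total} / {4 * len(CRITERIA)}")  # -> 9 / 12
    ```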

    > It would be neat then to email back the student their assignment with their classmates +/-
    > score for each line and any comments. This has risk involved too since student generated
    > comments could be wrong, misleading, vulgar.

    Yep. So far, I have skirted around the question of whether or not to actually *show* the reviews
    to the original author. It really depends, I suppose, on how useful the feedback would be.

    Thanks for the feedback!

  14. Mike

    @Chris Siebenmann:

    > I think that if you do the code review straight, as proposed, you are
    > effectively testing for two things at once; you’re testing both the
    > ability of students to do good code reviews and their ability to fully
    > understand the dark corners of the assignment. If I were a student, I
    > would feel kind of grumpy about effectively getting hit twice if I
    > didn’t fully understand the assignment; I’d lose marks on my solution
    > and then I’d lose more marks on bad code reviews (because I would miss
    > problems that the TAs would see).

    Well put, and a very good point.

    > My suggestion is that every student doing reviews should get a cheat
    > sheet of ‘common mistakes to look for’. This shouldn’t cover everything
    > and wouldn’t reduce code review to a mechanical process, but hopefully
    > would get much closer to only measuring how well the students can review
    > code and see things in it. The cheat sheet might be drawn up by the TAs
    > after marking the assignments or it might be prepared in advance based
    > on what the instructor is expecting people to have problems with.

    Instead of a pre-built solution, would a standard marking rubric be enough? The
    rubric criteria would give, essentially, a “checklist” of things to watch out for.
    Contrived example: code has doc-strings for each class and method, over most,
    over some, over few, none, etc…

    > Also, it strikes me that it might be interesting to have students review
    > their own assignment at the end of the review process. Having gone
    > through a series of code reviews of other people’s code, have they
    > learned enough to see the flaws in their own code? (My naive feeling
    > is that this is not gameable, except to the extent that if you submit
    > a terrible assignment you can at least get some marks back for being
    > able to admit how terrible it is.)

    This is a *very* interesting idea. Thank you! Keep your eyes peeled for another
    blog post about this!

  15. Chris Siebenmann

    @Mike:

    > Instead of a pre-built solution, would a standard marking rubric be
    > enough? The rubric criteria would give, essentially, a “checklist”
    > of things to watch out for. Contrived example: code has doc-strings
    > for each class and method, over most, over some, over few, none,
    > etc…

    I think that students should get this sort of standard rubric (to the
    extent that they didn’t get it at the start of class), but I think it’s
    not enough. The kind of cheat sheet I’m thinking of is not covering
    style or basics of well-written code in general; it’s covering what I
    called the dark corners of the assignment: the common subtle coding
    mistakes that people make. For example, overlooking some particular race
    condition in a concurrent program, or not handling out of range input,
    or the like. These are always going to be specific to the assignment,
    and you may not be able to figure them out in advance.

    (Here I am assuming that the TAs are marking the assignment for more
    than just code style, that they’re also marking for correct, complete,
    and well written code.)

    Also, this is probably obvious: if the assignment is run through
    automated tests before being marked by the TAs, the code reviewers
    should see the test results if the TAs do. Various failed tests are
    quite likely a red flag to look in specific areas for code flaws (or
    just knowledge that certain flaws have to exist), so both sides need the
    same starting information.

    Re: my idea of students reviewing their own assignments. In retrospect,
    this is probably not workable for your experiment (unless you can sign
    up a chunk of an eager class to do this work). However, it might work to
    create a ringer assignment, one with specific subtle flaws, and then see
    if student reviewers spot more of the flaws if they see the ringer at
    the end of their code review than if it’s the first assignment that they
    review. (Getting enough subjects for statistically valid results might
    be tricky, since there are so many other variables in play here.)

  16. Chris Siebenmann

    Some thoughts on the in-class process: I think it should be possible to appeal your review grade if you believe either that you found flaws in the code that the TAs did not (and so marked it lower than they did) or that the code is not as flawed as the TAs think (so you marked it higher).

    To avoid lots of drama, if you find additional flaws that the TAs did not,
    the original person’s assignment mark is *not* lowered (you just get bonus
    credit for your code review). However, if you convince people that the code
    is not as flawed as the TAs think, the original person’s assignment mark *is*
    raised; other people who agreed with the TAs in their code review still
    get full credit (since one can hardly blame people for not being better
    than the TAs). Both of these should be prominently mentioned in the appeal
    process. If you do code review on your own assignment, this should apply
    to that too, just as with other people’s assignments. Yes, you can game
    this last one a bit, but I’d rather encourage people to be honestly harsh
    on their own code even at the expense of getting one past the TAs.

    (This implies that the assignment and/or the code review guidelines should
    clearly document things that the submitted programs do not need to do, so
    that people don’t spend all day finding obscure things that the code they’re
    reviewing doesn’t handle.)

    This should apply primarily to code flaws, not to stylistic issues and
    basic correctness (enough comments, tests exist, docstrings are sensible,
    etc).

  17. Alexei

    I think the proposal misses the most important point of code reviews – the fact that they result in better code.

    Why not do the following:

    For each assignment, have two problems – half the students have to solve one, the other half have to solve the other.
    Have two deadlines. Deadline 1 – submit your assignment solution to 5 random reviewers (selected for you by the system).
    Deadline 2 – incorporate feedback from the review comments, modify your code, and submit the assignment for grading by the TA, as well as rating how useful each of the 5 reviews was to improving your code.

    (Obviously, the students reviewing your code would be working on a different assignment problem and vice versa.)

    This way, the person getting the reviews can rate their usefulness (which is much better than rating “how close is this to the TA’s grade”).

  18. Mike

    @Alexei:

    Thanks for the input!

    I’m concerned that your approach only captures how useful students *think* reviews are. I’m also interested in how useful the reviews *actually* are to the students.

    To that end, my experiment has undergone some serious redesign over the last few days. I’ll be posting about it soon!

    Cheers,

    -Mike
