It’s finally time.
Here’s the draw: (click here if you can’t see the video)
Congratulations to the winner!
My experiment makes a little bit of an assumption – and it’s the same assumption most teachers probably make before they hand back work. We assume that the work has been graded correctly and objectively.
The rubric that I provided to my graders was supposed to help sort out all of this objectivity business. It was supposed to boil down all of the subjectivity into a nice, discrete, quantitative value.
But I’m a careful guy, and I like back-ups. That’s why I had 2 graders do my grading. Both graders worked in isolation on the same submissions, with the same rubric.
So, did it work? How did the grades match up? Did my graders tend to agree?
Sounds like it’s time for some data analysis!
The columns are concerned with the graders’ marks for each criterion. The first two columns, Grader 1 – Average and Grader 2 – Average, simply show the average mark each grader gave for each criterion.
Number of Agreements shows the number of times the marks between both graders matched for that criterion. Similarly, Number of Disagreements shows how many times they didn’t match. Agreement Percentage just converts those two values into a single percentage for agreement.
Average Disagreement Magnitude takes every instance where there was a disagreement, and averages the magnitude of the disagreement (a reminder: the magnitude here is the absolute value of the difference).
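To make those column definitions concrete, here’s a small sketch (using made-up marks, not my graders’ actual data) of how each value is computed:

```python
def agreement_stats(marks1, marks2):
    """Compare two graders' marks for one criterion across submissions.

    Returns (agreements, disagreements, agreement percentage,
    average disagreement magnitude).
    """
    pairs = list(zip(marks1, marks2))
    agreements = sum(1 for a, b in pairs if a == b)
    disagreements = len(pairs) - agreements
    pct = 100.0 * agreements / len(pairs)
    # Magnitude of a disagreement = absolute difference between the two marks.
    diffs = [abs(a - b) for a, b in pairs if a != b]
    avg_magnitude = sum(diffs) / len(diffs) if diffs else 0.0
    return agreements, disagreements, pct, avg_magnitude

# Hypothetical marks from two graders on six submissions:
print(agreement_stats([4, 3, 2, 4, 1, 3], [4, 1, 2, 3, 1, 3]))
# → (4, 2, 66.66666666666667, 1.5)
```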
Finally, I should point out that these tables can be sorted by clicking on the headers. This will probably make your interpretation of the data a bit easier.
So, if we’re clear on that, then let’s take a look at those tables…
Flights and Passengers:

| Criterion | Grader 1 – Average | Grader 2 – Average | Number of Agreements | Number of Disagreements | Agreement Percentage | Average Disagreement Magnitude |
| --- | --- | --- | --- | --- | --- | --- |
| Design of Flight Constructor | 3.43 | 3.93 | 22 | 8 | 73.33 | 1.88 |
| Design of __str__ | 2.4 | 3.4 | 10 | 20 | 33.33 | 1.6 |
| Design of add_passenger | 3.53 | 3.87 | 23 | 7 | 76.67 | 1.43 |
| Design of heaviest_passenger | 2.17 | 3.1 | 11 | 19 | 36.67 | 1.68 |
| Design of lightest_passenger | 2 | 2.83 | 12 | 18 | 40 | 1.61 |
Decks and Cards:

| Criterion | Grader 1 – Average | Grader 2 – Average | Number of Agreements | Number of Disagreements | Agreement Percentage | Average Disagreement Magnitude |
| --- | --- | --- | --- | --- | --- | --- |
| Design of Deck Constructor | 3 | 3.77 | 18 | 12 | 60 | 1.92 |
| Design of __str__ | 2.33 | 3.67 | 14 | 16 | 46.67 | 2.5 |
| Design of deal | 2.53 | 3.7 | 17 | 13 | 56.67 | 2.69 |
| Design of shuffle | 3 | 3.47 | 23 | 7 | 76.67 | 2 |
| Design of cut | 2.17 | 2.9 | 14 | 16 | 46.67 | 1.5 |
It only happened once: on the “add_passenger” correctness criterion of the Flights and Passengers assignment. If you sort the tables by Number of Agreements (or Number of Disagreements), you’ll see what I mean.
In fact, there are only a handful of cases (4, by my count), where this isn’t true:
Sort the tables by Number of Disagreements descending, and take a look down the left-hand side.
There are 14 criteria in total for each assignment. If you’ve sorted the tables like I’ve asked, the top 7 criteria of each assignment are:
Of those 14, 9 have to do with design or style. It’s also worth noting that Docstrings and the correctness of the __str__ methods are in there too.
Total number of disagreements for Flights and Passengers: 136 (avg: 9.71 per criterion)
Total number of disagreements for Decks and Cards: 161 (avg: 11.5 per criterion)
From the very beginning, when I contacted / hired my Graders, I was very hands-off. Each Grader was given the assignment specifications and rubrics ahead of time to look over, and then a single meeting to ask questions. After that, I just handed them manila envelopes filled with submissions for them to mark.
Having spoken with some of the undergraduate instructors here in the department, I know that this isn’t usually how grading is done.
Usually, the instructor will have a big grading meeting with their TAs. They’ll all work through a few submissions, and the TAs will be free to ask for a marking opinion from the instructor.
By being hands-off, I didn’t give my Graders the same level of guidance that they may have been used to. I did, however, tell them that they were free to e-mail me or come up to me if they had any questions during their marking.
The hands-off thing was a conscious choice by Greg and myself. We didn’t want me to bias the marking results, since I would know which submissions would be from the treatment group, and which ones would be from control.
Anyhow, the results from above have driven me to conclude that if you just hand your graders the assignments and the rubrics, and say “go”, you run the risk of seeing dramatic differences in grading from each Grader. From a student’s perspective, this means that it’s possible to be marked by “the good Grader”, or “the bad Grader”.
I’m not sure if a marking-meeting like I described would mitigate this difference in grading. I hypothesize that it would, but that’s an experiment for another day.
If you sort the Decks and Cards table by Number of Disagreements, you’ll find that the criterion that my Graders disagreed most on was the correctness of the “deal” method. Out of 30 submissions, both Graders disagreed on that particular criterion 21 times (70%).
It’s a little strange to see that criterion all the way at the top there. As I mentioned earlier, most of the disagreements tended to be concerning design and style.
So what happened?
Well, let’s take a look at some examples.
The following is the deal method from participant #013:
```python
def deal(self, num_to_deal):
    i = 0
    while i < num_to_deal:
        print self.deck.pop(0)
        i += 1
```
Grader 1 gave this method a 1 for correctness, where Grader 2 gave this method a 4.
That’s a big disagreement. And remember, a 1 on this criterion means:
Barely meets assignment specifications. Severe problems throughout.
I think I might have to go with Grader 2 on this one. Personally, I wouldn’t use a while-loop here – but that falls under the design criterion, and shouldn’t impact the correctness of the method. I’ve tried the code out, and it works to spec: it deals from the top of the deck, just like it’s supposed to. Sure, there are some edge cases missed here (what if the Deck is empty? What if we’re asked to deal more than the number of cards left? What if we’re asked to deal a negative number of cards? etc.)… but the method delivers the basics.
Not sure what Grader 1 saw here. Hmph.
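For comparison – and this is just my own sketch in modern Python 3, not any participant’s code or an official solution – a deal that covers those edge cases might look like the following. (It returns the dealt cards rather than printing them, which is one design choice among several.)

```python
class Deck:
    def __init__(self, cards):
        # Index 0 is treated as the top of the deck.
        self.cards = list(cards)

    def deal(self, num_to_deal):
        """Deal num_to_deal cards off the top of the deck."""
        if num_to_deal < 0:
            raise ValueError("cannot deal a negative number of cards")
        if num_to_deal > len(self.cards):
            raise ValueError("cannot deal more cards than the deck holds")
        dealt = self.cards[:num_to_deal]
        del self.cards[:num_to_deal]
        return dealt

deck = Deck(["Q of Hearts", "A of Spades", "7 of Clubs"])
print(deck.deal(1))  # → ['Q of Hearts']
```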
The following is the deal method from participant #023:
```python
def deal(self, num_to_deal):
    res = []
    for i in range(0, num_to_deal):
        res.append(self.cards.pop(0))
```
Grader 1 gave this method a 0 for correctness. Grader 2 gave it a 3.
I see two major problems with this method. The first one is that it doesn’t print out the cards that are being dealt off: instead, it stores them in a list. Secondly, that list is just tossed out once the method exits, and nothing is returned.
A “0” for correctness simply means Unimplemented, which isn’t exactly true: this method has been implemented, and has the right interface.
But it doesn’t conform to the specification whatsoever. I would give this a 1.
So, in this case, I’d side more (but not agree) with Grader 1.
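For what it’s worth, a minimal repair of this submission (again my own sketch, in modern Python 3, written as a standalone function so it runs outside the class) would print each dealt card instead of silently collecting and discarding them:

```python
def deal(cards, num_to_deal):
    """Deal num_to_deal cards off the top of `cards`, printing each one.

    `cards` stands in for self.cards; index 0 is the top of the deck.
    """
    for _ in range(num_to_deal):
        print(cards.pop(0))

deck = ["Q of Hearts", "A of Spades", "7 of Clubs"]
deal(deck, 2)   # prints Q of Hearts, then A of Spades
print(deck)     # → ['7 of Clubs']
```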
This is the deal method from participant #025:
```python
def deal(self, num_to_deal):
    num_cards_in_deck = len(self.cards)
    try:
        num_to_deal = int(num_to_deal)
        if num_to_deal > num_cards_in_deck:
            print "Cannot deal more than " + num_cards_in_deck + " cards\n"
        i = 0
        while i < num_to_deal:
            print str(self.cards[i])
            i += 1
        self.cards = self.cards[num_to_deal:]
    except:
        print "Error using deal\n"
```
Grader 1 also gave this method a 1 for correctness, where Grader 2 gave a 4.
The method is pretty awkward from a design perspective, but it seems to behave as it should – it deals the provided number of cards off of the top of the deck and prints them out.
It also catches some edge-cases: num_to_deal is converted to an int, and we check to ensure that num_to_deal is less than or equal to the number of cards left in the deck.
Again, I’ll have to side more with Grader 2 here.
This is the deal method from participant #030:
```python
def deal(self, num_to_deal):
    ''''''
    i = 0
    while i <= num_to_deal:
        print self.cards
        del self.cards
```
Grader 1 gave this a 1. Grader 2 gave this a 4.
Well, right off the bat, there’s a major problem: this while-loop never exits. The while-loop is waiting for the value of i to become greater than num_to_deal… but it never can, because i is initialized to 0 and never incremented.
So this method doesn’t even come close to satisfying the spec. The description for a “1” on this criterion is:
Barely meets assignment specifications. Severe problems throughout.
I’d have to side with Grader 1 on this one. The only thing this method delivers in accordance with the spec is the right interface. That’s about it.
I received an e-mail from Grader 2 about the deal method. I’ve paraphrased it here:
If the students create the list of cards in a typical way (for suit in CARD_SUITS, for rank in CARD_RANKS), and then print using something like:

```python
for card in self.cards:
    print str(card) + "\n"
```

then for deal, if they pick the cards to deal using pop() somehow, like:

```python
for i in range(num_to_deal):
    self.cards.pop()
```

aren’t they dealing from the bottom?
My answer was “yes, they are, and that’s a correctness problem”. In my assignment specification, I was intentionally vague about the internal collection of the cards – I let the participant figure that all out. All that mattered was that the model made sense, and followed the rules.
So if I print my deck, and it prints:
Q of Hearts A of Spades 7 of Clubs
Then deal(1) should print:
Q of Hearts
regardless of the internal organization.
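To make the top-versus-bottom distinction concrete (assuming, as in the example above, that whatever prints first is the top of the deck):

```python
cards = ["Q of Hearts", "A of Spades", "7 of Clubs"]  # printed top to bottom

# pop(0) removes the first element: dealing from the top.
print(list(cards).pop(0))   # → Q of Hearts

# pop() with no argument removes the last element: dealing from the bottom.
print(list(cards).pop())    # → 7 of Clubs
```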
Anyhow, only Grader 2 asked for clarification on this, and I thought this might be the reason for all of the disagreement on the deal method.
Looking at all of the disagreements on the deal methods, it looks like 7 of the 21 can be accounted for by students unintentionally dealing from the bottom of the deck, with only Grader 2 catching it.
Subtracting the “dealing from the bottom” disagreements from the total leaves us with 14, which puts this criterion more in line with some of the other correctness criteria.
So I’d have to say that, yes, the “dealing from the bottom” problem is what made the Graders disagree so much on this criterion: only 1 Grader realized that it was a problem while they were marking. Again, I think this was symptomatic of my hands-off approach to this part of the experiment.
My graders disagreed. A lot. And a good chunk of those disagreements were about style and design. Some of these disagreements might be attributable to my hands-off approach to the grading portion of the experiment. Some of them seem to be questionable calls from the Graders themselves.
Part of my experiment was interested in determining how closely peer grades from students can approximate grades from TAs. Since my TAs have trouble agreeing amongst themselves, I’m not sure how that part of the analysis is going to play out.
I hope the rest of my experiment is unaffected by their disagreement.
Do my numbers make no sense? Have I contradicted myself? Have I missed something critical? Are there unanswered questions here that I might be able to answer? I’d love to know. Please comment!
If you’ve read about my experiment, you’ll know that there were two Python programming assignments that my participants worked on, and a rubric for each assignment.
There were also 5 mock-up submissions for each assignment that I had my participants grade. I developed these mock-ups, after a few consultations with some of our undergraduate instructors, in order to get a sense of the kind of code that undergraduate programmers tend to submit.
I’ve decided to post these materials to this blog, in case somebody wants to give them a once over. Just thought I’d open my science up a little bit.
So here they are:
Peruse at your leisure.
Sometimes I play a little fast and loose with my English. If there’s anything that my Natural Language Processing course taught me last year, it’s that I really don’t have a firm grasp on the formal rules of grammar.
The reason I mention this is because of the word “peer”. The plural of peer is peers. And the plural possessive of peer is peers’. With the apostrophe.
I didn’t know that a half hour ago. Emily told me, and she’s a titan when it comes to the English language.
The graphs below were created a few days ago, before I knew this rule. So they use peer’s instead of peers’. I dun goofed. And I’m too lazy to change them (and I don’t want to use OpenOffice Draw more than I have to).
I just wanted to let you Internet people know that I’ve realized this, since their are so many lot of grammer nazi’s out they’re on the webz.
Now, with that out of the way, where were we?
If you read my experiment recap, then you know that my treatment group wrote a questionnaire after they were done all of their assignment writing.
The questionnaire was used to get an impression of how participants felt about their peer reviewing experience.
Just to remind you, my participants were marking mock-ups that I created for an assignment that they had just written. There were 5 mock-ups per assignment, so 10 mock-ups in total. Some of my mock-ups were very concise. Others were intentionally horrible and hard to read. Some were extremely vigilant in their documentation. Others were laconic. I tried to capture a nice spectrum of first year work. None of my participants knew that I had mocked the assignments up.
The questionnaire made the following statements, and asked students to agree on a scale from 1 to 5, where 1 was Strongly Disagree and 5 was Strongly Agree:
For questions 2, 5, 7, 8, and 10, participants were asked to expand with a written comment if they answered 3 or above.
Of the 30 participants in my study, 15 were in my treatment group, and therefore only 15 people filled out this questionnaire.
The graphs are histograms – that means that the higher the bar is, the more participants answered the question that way.
So, without further ado, here are the results…
While there’s more weight on the positive side, opinion seems pretty split on this one. It might really depend on what kind of social / working group you have in your programming classes.
It might also depend on how adherent students are to the rules, since sharing code with your peers is a bit of a no-no according to the UofT Computer Science rules of conduct. Most programming courses have something like the following on their syllabus:
Never look at another student’s assignment solution, whether it is on paper or on the computer screen. Never show another student your assignment solution. This applies to all drafts of a solution and to incomplete solutions.
Of course, this only applies before an assignment is due. Once the due date has passed, it’s OK to look at one another’s code…but how many students do that?
Anyhow, looking at the graph, I don’t think we got too much out of that one. Let’s move on.
Well, that’s a nice strong signal. Clearly, there’s more weight on the positive side. So my participants seem to understand that grading the code is teaching them something. That’s good.
And now for an interesting question: is there any relationship between the amount of programming experience of the participant, and how they answered this question? Good question. Before the experiment began, all participants filled out a brief questionnaire. The questionnaire asked them to provide, in months, how much time they’ve spent in either a programming intensive course, or a programming job. So that’s my fuzzy measure for programming experience.
The result was surprising.
- 7 participants: maximum 36 months of experience, minimum 4, average 16
- 1 participant: 16 months
- 4 participants: maximum 16 months, minimum 8, average 13
- 1 participant: 5 months
- 1 participant: 16 months
So there’s no evidence here that participants with more experience felt they learned less from the peer grading.
This was one of those questions where participants were asked to expand if they answered 3 or above. Here are some juicy morsels:
I learned some tricks and shortcuts of coding that make the solution more elegant and sometimes shorter.
…it showed me how hard some code are to read since I do not know what is in the programmer’s head.
I learned how different their coding style are compared to mine, as well as their reasoning to the assignment.
l learned about how other people think differently on same question and their programming styles can be different very much.
one of the codes I marked is very elegant and clear. It uses very different path from others. I really enjoyed that code. I think good codes from peers help us learn more.
I didn’t know about the random.shuffle method. I also didn’t know that it would have been better to use Exceptions which I don’t really know.
The different design or thinking towards the same question’s solution, and other ways to interpret a matter.
Other people can have very convoluted solutions…
Different ways of solving a problem
A few Python shortcuts, especially involving string manipulation. As well, I learned how to efficiently shuffle a list.
algorithm (ways of thinking), different ways of doing the same thing
Sometimes a few little tricks or styles that I had forgotten about. Also just a few different ways to go about solving the problem.
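A couple of those comments mention random.shuffle, which is indeed the idiomatic way to shuffle a deck in Python – it permutes a list in place:

```python
import random

random.seed(42)  # fixed seed only so the example is repeatable
cards = ["Q of Hearts", "A of Spades", "7 of Clubs", "2 of Diamonds"]
random.shuffle(cards)  # shuffles in place and returns None
print(cards)           # the same four cards, in a shuffled order
```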
So what conclusions can I draw from this?
It looks like, regardless of experience, students seem to think peer grading teaches them something – even if it’s just a different design, or an approach to a problem.
Another clear signal in the “strongly agree” camp. This one is kind of a no-brainer though – seeing work by others certainly gives us a sense of how our own work rates in comparison. We do this kind of comparison all the time.
Anyhow, my participants seem to agree with that.
Again, a lot of agreement there. Students are curious to know what their peers think of their work. They care what their peers think. This is good. This is important.
Hm. More of a mixed reaction here. There’s more weight on the “strongly agree” side, but not a whole lot more.
This is interesting though. If I find that my treatment group does perform better on their second assignment, is it possible that their improvement isn’t from the grading, but rather from their intense study of the rubric?
So, depending on whether or not there’s an improvement, my critics could say I might have a wee case of confounding factor syndrome, here.
And I would agree with them. However, I would also point out that if there was an improvement in the treatment group, it wouldn’t matter what the actual source of the learning was – the peer grading (along with the rubric) caused an improvement. And that’s fine. That’s an OK result.
Of course, this is all theoretical until I find out if there was an improvement in the treatment group grades. Stay tuned for that.
Anyhow, this was another one of those questions where I asked for elaboration for answers 3 and up. Here’s what the participants had to say:
I would have checked for exceptions (and know what exceptions to check). I would have put more comments and docstrings into my code. I would have named my variables more reasonably.
I would’ve wrote out documentation. (ie. docstrings) Though I found that internal commenting wasn’t necessary.
i’ll add more comments to my code and maybe some more exceptions.
Added comments and docstrings.
Code’s design, style, clearness, readability and docstrings.
Made more effort to write useful docstrings and comments
I would’ve included things that I wouldn’t have included if I was coding for myself (such as comments and docstrings).
Added more documentation (I forget what it’s called but it’s when you surround the comments with “” ”’ “”)
Written more docstrings and comments (even though I think the code was simple enough and the method names self-explanatory enough that the code didn’t need more than one or two terse docstrings).
I forgot about docstrings and commenting my code
So it sounds like the evaluation of documentation wasn’t made clear enough in my assignment specification. There’s also some indication that participants thought documentation wasn’t necessary if the code was simple enough. With respect to Docstrings, I’d have to disagree, since Docstrings are overwhelmingly useful for generating and compiling documentation. That’s just my own personal feeling on the matter, though.
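As a quick illustration of why I feel that way – a docstring (unlike a # comment) survives into the running program, where help(), pydoc, and documentation generators can find it. A tiny sketch, using a made-up passenger representation (dicts with a "weight" key, which is purely my assumption for the example):

```python
def heaviest_passenger(passengers):
    """Return the passenger with the greatest weight.

    Tools like help() and pydoc read this text straight off the function.
    """
    return max(passengers, key=lambda p: p["weight"])

# The docstring is available at runtime:
print(heaviest_passenger.__doc__.splitlines()[0])
# → Return the passenger with the greatest weight.
```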
Note: this is not to be confused with “I enjoyed grading my peers’ work”, which is the next question.
Mostly agreement here. So that’s interesting – participants enjoyed the simple act of seeing and reading code written by their peers.
It looks like, in general, students don’t really enjoy grading their peers’ code. Clearly, it’s not a universal opinion – you can see there’s some disagreement in the graph. Still, the trend seems to go towards the “strongly disagree” camp.
That’s a very useful finding. There’s nothing worse than sweating your butt off to design and construct a new task for students, only to find out that they hate doing it. We may have caught this early.
And I don’t actually find this that surprising: code review isn’t exactly a pleasurable experience. The benefits are certainly nice, but code review is a bit like flossing… it just seems to slow the morning routine down, regardless of the benefits.
Here’s what some participants had to say about their answers:
Because I like to compare my thoughts and other people’s thoughts.
well, some of the codes are really hard to read. But I did learn something from the grading. And letting students grade the codes is more fair.
I got to see where I went wrong and saw more creative/efficient solutions which will give me ideas for future assignments. But otherwise it was really boring.
So that I can learn from my peer’s thinking which gives me more diversity of coding and problem-solving.
Sometimes you see other student’s styles of coding/commenting/documenting and it helps you write better code. Sometimes you learn things that you didn’t know before. Sometimes it’s funny to see how other people code.
It was interesting to see their ideas, although sometimes painful to see their style.
not so much the grading part, but analyzing/looking at the different ways of coding the same thing
It gave me a rare prospective to see how other people with a similar educational background write their code.
Makes you think more critically about the overall presentation of your code. You ask yourself : “What would someone think of my code if they were doing this? Would I get a good mark?”
This one is more or less split right down the middle, with a little more weight on the agree side.
Again, participants who answered 3 or above were asked to elaborate. Here are some comments:
The hardest part was trying to trace through messy code in order to figure out if it actually works.
Emotionally, I know what the student is doing but I have to give bad marks for comments or style which makes me feel bad. Sometimes it is hard to distinguish the mark whether it is 3 or 4. The time was critical (did not have time to finish all papers) which might result in giving the wrong mark. I kept comparing marks and papers so I could get almost the fairest result between all students. It is hard to mark visually, i.e. not testing the code. Some codes are hard to read which make it hard for marking and I can assume it is wrong but it actually works.
Giving bad marks are hard! Reading bad code is painful! It wasn’t fun! 🙁
It just became really tedious trying to understand people’s code.
To test and verify their code is hard sometimes as their method of solving a problem might be complicated. I need to think very carefully and test their code progressively.
The rubric felt a little too strict. Sometimes a peer’s code had small difficulties that could easily be overcome, but would be labeled as very poor. Also, the rubric wasn’t clear enough, especially on the error handling portions and style. There could be many ways of coding for example the __str__ functions (using concatenation versus using format eg. ‘ %s’ % string as opposed to using + str(string) +)
I just found it hard to read other’s code because I already have a set idea of how to solve the problems. I did not see how the solutions of my peers would’ve improved my own solutions, so I did not find value in this.
Reading through each line of code and trying to figure out what it does
Reading through convoluted, circuitous code to determine correctness.
Not every case is clear-cut, and sometimes it’s hard to decide which score to give.
Being harsh and honest. I guess it’s good not to ever meet the people who wrote the codes (unlike TAs) because they aren’t there to defend themselves. Saves some headaches 🙂
Ok, more or less full agreement here. At least, no disagreement. But also no full agreement. It’s sort of a lethargic “meh” with a flaccid thumbs up.
The conclusion? My participants felt that, more or less, their grading was probably fair. I guess.
Now this one…
This one is tricky, because I might have to toss it out. Each one of my participants was told flat out that other participants in the study may or may not see their code. This is true, since the graders are also participants in the study.
However, I did not outright tell them that other participants would be grading their code for the first assignment. So I think this question may have come as a surprise to them.
That was an oversight on my part. I screwed up. I’m human.
The two lone participants who answered 3 or above wrote:
Making the docstring comments more clear, simplifying my design as possible, writing in a better style.
Added a bit more comments to explain my code in case peers don’t understand.
Anyhow, so those are my initial findings. If you have any questions about my data, or ideas on how I could analyze it, please let me know. I’m all ears.
Before I start diving into results, I’m just going to recap my experiment so we’re all up to speed.
I’ll try to keep it short, sweet, and punchy – but remember, this is a couple of months of work right here.
Ready? Here we go.
Code review is like the software industry equivalent of a taste test. A developer makes a change to a piece of software, puts that change up for review, and a few reviewers take a look at that change to make sure it’s up to snuff. If some issues are found during the course of the review, the developer can go back and make revisions. Once the reviewers give it the thumbs up, the change is put into the software.
That’s an oversimplified description of code review, but it’ll do for now.
What’s important is to know that it works. Jason Cohen showed that code review reduces the number of defects that enter the final software product. That’s great!
But there are some other cool advantages to doing code review as well.
That last one is important. Code review sounds like an excellent teaching tool.
So why isn’t code review part of the standard undergraduate computer science education? Greg and I hypothesized that the reason that code review isn’t taught is because we don’t know how to teach it.
I’ll quote myself:
What if peer code review isn’t taught in undergraduate courses because we just don’t know how to teach it? We don’t know how to fit it in to a curriculum that’s already packed to the brim. We don’t know how to get students to take it seriously. We don’t know if there’s pedagogical value, let alone how to show such value to the students.
Inspired by work by Joordens and Pare, Greg and I developed an approach to teaching code review that integrates itself nicely into the current curriculum.
Here’s the basic idea:
Suppose we have a computer programming class. Also suppose that after each assignment, each student is randomly presented with anonymized assignment submissions from some of their peers. Students will then be asked to anonymously peer grade these assignment submissions.
Now, before you go howling your head off about the inadequacy / incompetence of student markers, or the PeerScholar debacle, read this next paragraph, because there’s a twist.
The assignment submissions will still be marked by TA’s as usual. The grades that a student receives from her peers will not directly affect her mark. Instead, the student is graded on how well she graded her peers: the peer reviews that a student completes are compared with the grades that the TA’s delivered, and the closer the student’s grades are to the TA’s, the better the mark she gets on the “peer grading” component (which is distinct from the mark she receives for the programming assignment itself).
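As a purely illustrative sketch of how that “closeness to the TA” component might be computed – the scheme hasn’t been pinned down yet, so the formula below is my invention for the example, not the experiment’s actual formula:

```python
def peer_grading_score(student_marks, ta_marks, max_mark=4):
    """Score a student's grading by how closely it tracks the TA's.

    Each criterion is worth 1.0 for an exact match, reduced linearly
    by the size of the disagreement. Illustrative only.
    """
    per_criterion = [1.0 - abs(s - t) / max_mark
                     for s, t in zip(student_marks, ta_marks)]
    return sum(per_criterion) / len(per_criterion)

print(peer_grading_score([4, 3, 2], [4, 3, 2]))  # → 1.0 (perfect match)
print(peer_grading_score([4, 1, 2], [4, 3, 2]))  # one 2-point disagreement
```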
Now, granted, the idea still needs some fleshing out, but already, we’ve got some questions that need answering:
So those were my questions.
Here’s the design of the experiment in a nutshell:
I have a treatment group, and a control group. Both groups are composed of undergraduate students.

1. Participants in both groups write a short pre-experiment questionnaire.
2. Both groups then have half an hour to work on a short programming assignment.
3. The treatment group has another half an hour to peer grade some submissions for the assignment they just wrote. The submissions that they mark are mocked up by me, and are the same for each participant in the treatment group.
4. The control group performs no grading – instead, they do an unrelated vocabulary exercise for the same amount of time.
5. Participants in both groups then have another half an hour to work on the second short programming assignment.
6. Participants in the treatment group write a short post-experiment questionnaire to get their impressions of their peer grading experience.
7. The participants are released.
Here’s a picture to help you visualize what you just read.
So now I’ve got two piles of submissions – one for each assignment, 60 submissions in total. I add my mock-ups to each pile. That means 35 submissions in each pile, and 70 submissions in total.
I assign ID numbers to each submission, shuffle them up, and hand them off to some graduate level TA’s that I hired. The TA’s will grade each assignment using the same marking rubric that the treatment group used to peer grade. They will not know if they are grading a treatment group submission, a control group submission, or a mock-up.
After the grading is completed, I remove the mock-ups, and pair up submissions in both piles based on who wrote it. So now I’ve got 30 pairs of submissions: one for each student. I then ask my graders to look at each pair, knowing that they’re both written by the same student, and to choose which one they think is better coded, and to rate and describe the difference (if any) between the two. This is an attempt to catch possible improvements in the treatment group’s code that might not be captured in the marking rubric.
So everything you’ve just read is what I’ve just finished doing.
Once the submissions are marked, I’ll analyze the marks for the following:
Ok, so that’s where I’m at. Stay tuned for results.