Some More Results: Did the Graders Agree? – Part 2

(Click here to read the first part of the story)

I’m just going to come right out and say it:  I’m no stats buff.

Actually, maybe that’s giving myself too much credit.  I barely scraped through my compulsory statistics course.  In my defense, the teaching was abysmal, and the class average was in the sewer the entire time.

So, unfortunately, I don’t have the statistical chops that a real scientist should.

But, today, I learned a new trick.

Pearson’s Correlation Coefficient

Joordens and Paré gave me the idea while I was reviewing their paper for the Related Work section of my thesis.  They used it to inspect mark agreement between their expert markers.

In my last post on Grader agreement, I was looking at mark agreement at the equivalence level.  Pearson’s Correlation Coefficient should (I think) let me inspect mark agreement at the “shape” level.

And by shape level, I mean this:  if Grader 1 gives a high mark for a participant, then Grader 2 gives a high mark.  If Grader 1 gives a low mark for the next participant, then Grader 2 gives a low mark.  These high and low marks might not be equal, but the basic shape of the thing is there.

And this page, with its useful table, tells me how I can tell whether the correlation coefficient that I find is significant.  Awesome.

At least, that’s my interpretation of Pearson’s Correlation Coefficient.  Maybe I’ve got it wrong.  Please let me know if I do.

Anyhow, it can’t hurt to look at some more tables.  Let’s do that.

About these tables…

Like my previous post on graders, I’ve organized my data into two tables – one for each assignment.

Each table has a row for each of that assignment’s criteria.

Each table has two columns – the first strictly lists the assignment criteria.  The second column gives the Pearson Correlation Coefficient for each criterion.  The correlation measurement is between the marks that my two Graders gave on that criterion across all 30 submissions for that assignment.

I hope that makes sense.
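
In case it helps, here’s roughly how each of those coefficients gets computed.  This is just a sketch – it assumes the marks live in two parallel lists (one entry per submission, in the same order for both Graders) and that scipy is available.  The numbers are made up:

from scipy.stats import pearsonr

# Hypothetical marks on one criterion, one entry per submission.
# The real data has 30 entries per criterion.
grader1 = [4, 2, 3, 4, 1, 3, 2, 4]
grader2 = [4, 3, 3, 4, 2, 4, 2, 3]

r, p = pearsonr(grader1, grader2)
print("r = %.2f, p = %.3f" % (r, p))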

Anyways, here goes…

Da-ta!

Decks and Cards Grader Correlation Table

[table id=8 /]

Flights and Passengers Grader Correlation Table

[table id=9 /]

What does this tell us?

Well, first off, remember that for each assignment, for each criterion, there were 30 submissions.

So N = 30.

In order to determine if the correlation coefficients are significant, we look at this table, and find N – 2 down the left-hand side:

28:   .306 (p < .10)    .361 (p < .05)    .423 (p < .02)    .463 (p < .01)

Those 4 values on the right are the critical values that we want to pass for significance.
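
If you’re wondering where those four numbers come from:  they’re the two-tailed critical values of r for df = 28.  Here’s a little sketch that reproduces them, assuming you have scipy handy:

from math import sqrt
from scipy.stats import t

df = 28  # N - 2
for alpha in [0.10, 0.05, 0.02, 0.01]:
    # Convert the two-tailed critical t value into a critical r value.
    t_crit = t.ppf(1 - alpha / 2.0, df)
    r_crit = t_crit / sqrt(t_crit ** 2 + df)
    print("p < %.2f: critical r = %.3f" % (alpha, r_crit))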

Good news!  All of the correlation coefficients clear at least the first critical value (.306), so every criterion shows a significant correlation at some level.  Now I’ll show you their significance by level:

p < 0.10

  • Design of __str__ in Decks and Cards assignment

p < 0.05

  • Design of deal method in Decks and Cards assignment

p < 0.02

  • Design of heaviest_passenger method in Flights and Passengers

p < 0.01

Decks and Cards
  • Design of Deck constructor
  • Style
  • Internal Comments
  • __str__ method correctness
  • deal method correctness
  • Deck constructor correctness
  • Docstrings
  • shuffle method correctness
  • Design of shuffle method
  • Design of cut method
  • cut method correctness
  • Error checking
Flights and Passengers
  • Design of __str__ method
  • Design of lightest_passenger method
  • Style
  • Design of Flight constructor
  • Internal comments
  • Design of add_passenger method
  • __str__ method correctness
  • Error checking
  • heaviest_passenger method correctness
  • Docstrings
  • lightest_passenger method correctness
  • Flight constructor correctness
  • add_passenger method correctness

Wow!

Correlation of Mark Totals

Joordens and Paré ran their correlation statistics on assignments that were marked on a scale from 1 to 10.  I can do the same type of analysis by simply running Pearson’s on each participant’s total mark from each Grader.
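
Concretely, that just means summing each Grader’s marks across all of the criteria for each participant, and then running Pearson’s on the two lists of totals.  A sketch, with made-up marks:

from scipy.stats import pearsonr

# Hypothetical per-participant criterion marks from each Grader.
# The real data has 30 participants and 14 criteria each.
grader1_marks = [[4, 3, 2], [1, 2, 2], [4, 4, 3], [3, 2, 4]]
grader2_marks = [[4, 4, 3], [2, 2, 3], [4, 4, 4], [3, 3, 4]]

totals1 = [sum(m) for m in grader1_marks]
totals2 = [sum(m) for m in grader2_marks]

r, p = pearsonr(totals1, totals2)
print("r = %.2f, p = %.4f" % (r, p))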

Drum roll, please…

Decks and Cards

r(28) = 0.89, p < 0.01

Flights and Passengers

r(28) = 0.92, p < 0.01

Awesome!

Summary / Conclusion

I already showed that my two Graders rarely agreed mark for mark, and that one Grader tended to give higher marks than the other.

The analysis with Pearson’s correlation coefficient seems to suggest that, while there isn’t one-to-one agreement, there is certainly a significant correlation – with the majority of the criteria correlating at p < 0.01!

The total marks also show a very strong, significant, positive correlation.

Ok, so that’s the conclusion here:  the Graders’ marks do not match, but they show a moderate to high positive correlation to a significant degree.

How’s My Stats?

Did I screw up somewhere?  Am I making fallacious claims?  Let me know – post a comment!

Some More Results: Did the Graders Agree?

My experiment makes a little bit of an assumption – and it’s the same assumption most teachers probably make before they hand back work.  We assume that the work has been graded correctly and objectively.

The rubric that I provided to my graders was supposed to help sort out all of this objectivity business.  It was supposed to boil down all of the subjectivity into a nice, discrete, quantitative value.

But I’m a careful guy, and I like back-ups.  That’s why I had 2 graders do my grading.  Both graders worked in isolation on the same submissions, with the same rubric.

So, did it work?  How did the grades match up?  Did my graders tend to agree?

Sounds like it’s time for some data analysis!

About these tables…

I’m about to show you two tables of data – one table for each assignment.  Each row of a table maps to a single criterion on that assignment’s rubric.

The columns are concerned with the graders’ marks for each criterion.  The first two columns, Grader 1 – Average and Grader 2 – Average, simply show the average mark each grader gave for each criterion.

Number of Agreements shows the number of times the marks between both graders matched for that criterion.  Similarly, Number of Disagreements shows how many times they didn’t match.  Agreement Percentage just converts those two values into a single percentage for agreement.

Average Disagreement Magnitude takes every instance where there was a disagreement, and averages the magnitude of the disagreement (a reminder:  the magnitude here is the absolute value of the difference).
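
For the curious, here’s a sketch of how each row’s statistics get computed – again assuming the marks sit in two parallel lists, and again with made-up numbers:

# Hypothetical marks on one criterion, one entry per submission.
grader1 = [4, 2, 3, 4, 1, 3]
grader2 = [4, 3, 3, 4, 2, 4]

agreements = [(a, b) for (a, b) in zip(grader1, grader2) if a == b]
disagreements = [abs(a - b) for (a, b) in zip(grader1, grader2) if a != b]

print("Number of Agreements: %d" % len(agreements))
print("Number of Disagreements: %d" % len(disagreements))
print("Agreement Percentage: %.1f%%" % (100.0 * len(agreements) / len(grader1)))
print("Average Disagreement Magnitude: %.2f"
      % (sum(disagreements) / float(len(disagreements))))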

Finally, I should point out that these tables can be sorted by clicking on the headers.  This will probably make your interpretation of the data a bit easier.

So, if we’re clear on that, then let’s take a look at those tables…

Flights and Passengers Grader Comparison

[table id=6 /]

Decks and Cards Grader Comparison

[table id=7 /]

Findings and Analysis

It is very rare for the graders to fully agree

It only happened once, on the “add_passenger” correctness criterion of the Flights and Passengers assignment.  If you sort the tables by “Number of Agreements” (or “Number of Disagreements”), you’ll see what I mean.

Grader 2 tended to give higher marks than Grader 1

In fact, there are only a handful of cases (4, by my count), where this isn’t true:

  1. The add_passenger correctness criterion on Flights and Passengers
  2. The internal comments criterion on Flights and Passengers
  3. The error checking criterion on Decks and Cards
  4. The internal comments criterion on Decks and Cards

The graders tended to disagree more often on design and style

Sort the tables by Number of Disagreements descending, and take a look down the left-hand side.

There are 14 criteria in total for each assignment.  If you’ve sorted the tables like I’ve asked, the top 7 criteria of each assignment are:

Flights and Passengers
  1. Style
  2. Design of __str__ method
  3. Design of heaviest_passenger method
  4. Design of lightest_passenger method
  5. Docstrings
  6. Correctness of __str__ method
  7. Design of Flight constructor
Decks and Cards
  1. Correctness of deal method
  2. Style
  3. Design of cut method
  4. Design of __str__ method
  5. Docstrings
  6. Design of deal method
  7. Correctness of __str__ method

Of those 14, 9 have to do with design or style.  It’s also worth noting that Docstrings and the correctness of the __str__ methods are in there too.

There was slightly more disagreement in Decks and Cards than in Flights and Passengers

Total number of disagreements for Flights and Passengers:  136 (avg:  9.71 per criterion)

Total number of disagreements for Decks and Cards:  161 (avg:  11.5 per criterion)

Discussion

Being Hands-off

From the very beginning, when I contacted / hired my Graders, I was very hands-off.  Each Grader was given the assignment specifications and rubrics ahead of time to look over, and then a single meeting to ask questions.  After that, I just handed them manila envelopes filled with submissions for them to mark.

Having spoken with some of the undergraduate instructors here in the department, I know that this isn’t usually how grading is done.

Usually, the instructor will have a big grading meeting with their TAs.  They’ll all work through a few submissions, and the TAs will be free to ask for a marking opinion from the instructor.

By being hands-off, I didn’t give my Graders the same level of guidance that they may have been used to.  I did, however, tell them that they were free to e-mail me or come up to me if they had any questions during their marking.

The hands-off thing was a conscious choice by Greg and me.  We didn’t want me to bias the marking results, since I would know which submissions were from the treatment group, and which ones were from control.

Anyhow, the results from above have driven me to conclude that if you just hand your graders the assignments and the rubrics, and say “go”, you run the risk of seeing dramatic differences in grading from each Grader.  From a student’s perspective, this means that it’s possible to be marked by “the good Grader”, or “the bad Grader”.

I’m not sure if a marking-meeting like I described would mitigate this difference in grading.  I hypothesize that it would, but that’s an experiment for another day.

Questionable Calls

If you sort the Decks and Cards table by Number of Disagreements, you’ll find that the criterion that my Graders disagreed most on was the correctness of the “deal” method.  Out of 30 submissions, both Graders disagreed on that particular criterion 21 times (70%).

It’s a little strange to see that criterion all the way at the top there.  As I mentioned earlier, most of the disagreements tended to be concerning design and style.

So what happened?

Well, let’s take a look at some examples.

Example #1

The following is the deal method from participant #013:

def deal(self, num_to_deal):
  i = 0
  while i < num_to_deal:
    print self.deck.pop(0)
    i += 1

Grader 1 gave this method a 1 for correctness, where Grader 2 gave this method a 4.

That’s a big disagreement.  And remember, a 1 on this criterion means:

Barely meets assignment specifications. Severe problems throughout.

I think I might have to go with Grader 2 on this one.  Personally, I wouldn’t use a while-loop here – but that falls under the design criterion, and shouldn’t impact the correctness of the method.  I’ve tried the code out.  It works to spec.  It deals from the top of the deck, just like it’s supposed to.  Sure, there are some edge cases missed here (what if the Deck is empty?  What if we’re asked to deal more than the number of cards left?  What if we’re asked to deal a negative number of cards?  etc.)… but the method seems to deliver the basics.

Not sure what Grader 1 saw here.  Hmph.

Example #2

The following is the deal method from participant #023:

def deal(self, num_to_deal):
 res = []
 for i in range(0, num_to_deal):
   res.append(self.cards.pop(0))

Grader 1 gave this method a 0 for correctness.  Grader 2 gave it a 3.

I see two major problems with this method.  The first one is that it doesn’t print out the cards that are being dealt off:  instead, it stores them in a list.  Secondly, that list is just tossed out once the method exits, and nothing is returned.

A “0” for correctness simply means Unimplemented, which isn’t exactly true:  this method has been implemented, and has the right interface.

But it doesn’t conform to the specification whatsoever.  I would give this a 1.

So, in this case, I’d side more with Grader 1 (though I wouldn’t fully agree).

Example #3

This is the deal method from participant #025:

def deal(self, num_to_deal):
    num_cards_in_deck = len(self.cards)
    try:
        num_to_deal = int(num_to_deal)
        if num_to_deal > num_cards_in_deck:
            print "Cannot deal more than " + num_cards_in_deck + " cards\n"
        i = 0
        while i < num_to_deal:
            print str(self.cards[i])
            i += 1
        self.cards = self.cards[num_to_deal:]
    except:
        print "Error using deal\n"

Grader 1 also gave this method a 1 for correctness, where Grader 2 gave a 4.

The method is pretty awkward from a design perspective, but it seems to behave as it should – it deals the provided number of cards off of the top of the deck and prints them out.

It also catches some edge-cases:  num_to_deal is converted to an int, and we check to ensure that num_to_deal is less than or equal to the number of cards left in the deck.

Again, I’ll have to side more with Grader 2 here.

Example #4

This is the deal method from participant #030:

def deal(self, num_to_deal):
  ''''''
  i = 0
  while i <= num_to_deal:
    print self.cards[0]
    del self.cards[0]

Grader 1 gave this a 1.  Grader 2 gave this a 4.

Well, right off the bat, there’s a major problem:  this while-loop never exits.  The while-loop is waiting for the value i to become greater than num_to_deal… but it never can, because i is defined as 0, and never incremented.

So this method doesn’t even come close to satisfying the spec.  The description for a “1” on this criterion is:

Barely meets assignment specifications. Severe problems throughout.

I’d have to side with Grader 1 on this one.  The only thing this method delivers in accordance with the spec is the right interface.  That’s about it.

Dealing from the Bottom of the Deck

I received an e-mail from Grader 2 about the deal method.  I’ve paraphrased it here:

If the students create the list of cards in a typical way (for suit in CARD_SUITS: for rank in CARD_RANKS: …), and then print using something like:

for card in self.cards:
    print str(card) + "\n"

then for deal, if they pick the cards to deal using pop() somehow, like:

for i in range(num_to_deal):
    print str(self.cards.pop())

aren’t they dealing from the bottom?

My answer was “yes, they are, and that’s a correctness problem”.  In my assignment specification, I was intentionally vague about the internal collection of the cards – I let the participant figure that all out.  All that mattered was that the model made sense, and followed the rules.

So if I print my deck, and it prints:

Q of Hearts
A of Spades
7 of Clubs

Then deal(1) should print:

Q of Hearts

regardless of the internal organization.
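
To make that concrete, here’s a minimal sketch of the kind of deal I had in mind – assuming (and this is just one possible layout) that the top of the deck lives at index 0 of self.cards, the same end that __str__ prints first:

def deal(self, num_to_deal):
    # Deal from the same end that printing starts from:
    # index 0 is the top of the deck.
    for i in range(num_to_deal):
        print str(self.cards.pop(0))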

Anyhow, only Grader 2 asked for clarification on this, and I thought this might be the reason for all of the disagreement on the deal method.

Looking at all of the disagreements on the deal methods, it looks like 7 out of the 21 can be accounted for because students were unintentionally dealing from the bottom of the deck, and only Grader 2 caught it.

Subtracting the “dealing from the bottom” disagreements from the total leaves us with 14, which puts it more in line with some of the other correctness criteria.

So I’d have to say that, yes, the “dealing from the bottom” problem is what made the Graders disagree so much on this criterion:  only 1 Grader realized that it was a problem while they were marking.  Again, I think this was symptomatic of my hands-off approach to this part of the experiment.

In Summary

My graders disagreed.  A lot.  And a good chunk of those disagreements were about style and design.  Some of these disagreements might be attributable to my hands-off approach to the grading portion of the experiment.  Some of them seem to be questionable calls from the Graders themselves.

Part of my experiment was interested in determining how closely peer grades from students can approximate grades from TAs.  Since my TAs have trouble agreeing amongst themselves, I’m not sure how that part of the analysis is going to play out.

I hope the rest of my experiment is unaffected by their disagreement.

Stay tuned.

See anything?

Do my numbers make no sense?  Have I contradicted myself?  Have I missed something critical?  Are there unanswered questions here that I might be able to answer?  I’d love to know.  Please comment!

My Experiment Apparatus: The Assignments, Rubrics and Mock-Ups

If you’ve read about my experiment, you’ll know that there were two Python programming assignments that my participants worked on, and a rubric for each assignment.

There were also 5 mock-up submissions for each assignment that I had my participants grade.  I developed these mock-ups, after a few consultations with some of our undergraduate instructors, in order to get a sense of the kind of code that undergraduate programmers tend to submit.

I’ve decided to post these materials to this blog, in case somebody wants to give them a once over.  Just thought I’d open my science up a little bit.

So here they are:

Flights and Passengers

Decks and Cards

Peruse at your leisure.

Some Preliminary Results

But first, a confession…

Sometimes I play a little fast and loose with my English.  If there’s anything that my Natural Language Processing course taught me last year, it’s that I really don’t have a firm grasp on the formal rules of grammar.

The reason I mention this is because of the word “peer”.  The plural of peer is peers.  And the plural possessive of peer is peers’.  With the apostrophe.

I didn’t know that a half hour ago.  Emily told me, and she’s a titan when it comes to the English language.

The graphs below were created a few days ago, before I knew this rule.  So they use peer’s instead of peers’.  I dun goofed.  And I’m too lazy to change them (and I don’t want to use OpenOffice Draw more than I have to).

I just wanted to let you Internet people know that I’ve realized this, since their are so many lot of grammer nazi’s out they’re on the webz.

Now, with that out of the way, where were we?

The Post-Experiment Questionnaire

If you read my experiment recap, then you know that my treatment group wrote a questionnaire after they had finished all of their assignment writing.

The questionnaire was used to get an impression of how participants felt about their peer reviewing experience.

A note on the peer reviewing experience

Just to remind you, my participants were marking mock-ups that I created for an assignment that they had just written.  There were 5 mock-ups per assignment, so 10 mock-ups in total.  Some of my mock-ups were very concise.  Others were intentionally horrible and hard to read.  Some were extremely thorough in their documentation.  Others were laconic.  I tried to capture a nice spectrum of first-year work.  None of my participants knew that the submissions were mock-ups I had created.

Anyhow, back to the questionnaire…

The questionnaire made the following statements, and asked students to agree on a scale from 1 to 5, where 1 was Strongly Disagree and 5 was Strongly Agree:

  1. It is unusual for me to see code written by my peers.
  2. Seeing my peers’ code taught me things I didn’t already know.
  3. Because I saw and graded my peers’ work, I believe I know more about the quality of my own work.
  4. I am interested in knowing how my peers graded me.
  5. I would have written the code for my first assignment differently if I had seen the rubric beforehand.
  6. During this experiment, I enjoyed seeing other students’ assignments.
  7. I enjoyed grading my peers’ work.
  8. I found grading my peers’ work difficult.
  9. I’m confident that the grading I did was fair.
  10. Because I knew that my peers would be seeing and grading my code for the first assignment, I coded it differently than I would have normally.

For questions 2, 5, 7, 8, and 10, participants were asked to expand with a written comment if they answered 3 or above.

Of the 30 participants in my study, 15 were in my treatment group, and therefore only 15 people filled out this questionnaire.

The graphs are histograms – that means that the higher the bar is, the more participants answered the question that way.
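
(I made the actual graphs in OpenOffice Draw, but if you wanted to generate one of these histograms programmatically, a matplotlib sketch might look like this – the responses below are made up:)

import matplotlib.pyplot as plt

# Hypothetical 1-to-5 responses from the 15 treatment participants.
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 1, 5, 4, 3, 4]

plt.hist(responses, bins=[0.5, 1.5, 2.5, 3.5, 4.5, 5.5])
plt.xticks([1, 2, 3, 4, 5])
plt.xlabel("Response (1 = Strongly Disagree, 5 = Strongly Agree)")
plt.ylabel("Number of participants")
plt.show()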

So, without further ado, here are the results…

It is unusual for me to see code written by my peers.

While there’s more weight on the positive side, opinion seems pretty split on this one.  It might really depend on what kind of social / working group you have in your programming classes.

It might also depend on how closely students adhere to the rules, since sharing code with your peers is a bit of a no-no according to the UofT Computer Science rules of conduct.  Most programming courses have something like the following on their syllabus:

Never look at another student’s assignment solution, whether it is on paper or on the computer screen. Never show another student your assignment solution. This applies to all drafts of a solution and to incomplete solutions.

Of course, this only applies before an assignment is due.  Once the due date has passed, it’s OK to look at one another’s code…but how many students do that?

Anyhow, looking at the graph, I don’t think we got too much out of that one.  Let’s move on.

Seeing my peers' code taught me things I didn't already know.

Well, that’s a nice strong signal.  Clearly, there’s more weight on the positive side.  So my participants seem to understand that grading the code is teaching them something.  That’s good.

And now for an interesting question:  is there any relationship between how much programming experience a participant had, and how they answered this question?  Good question.  Before the experiment began, all participants filled out a brief questionnaire.  The questionnaire asked them to report, in months, how much time they’d spent in either a programming-intensive course or a programming job.  So that’s my fuzzy measure of programming experience.
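
The number crunching for this is simple enough:  group the experience months by questionnaire answer, then take the min, max, and average of each group.  Here’s a sketch with made-up data:

from collections import defaultdict

# Hypothetical (answer, months of experience) pairs.
data = [(5, 36), (5, 4), (4, 16), (3, 16), (3, 8), (2, 5), (1, 16)]

groups = defaultdict(list)
for answer, months in data:
    groups[answer].append(months)

for answer in sorted(groups, reverse=True):
    months = groups[answer]
    print("Answered %d: n = %d, min = %d, max = %d, avg = %.1f"
          % (answer, len(months), min(months), max(months),
             sum(months) / float(len(months))))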

The result was surprising.

For participants who answered 5 (strongly agreed that they learned things they didn’t already know):

Number of participants:  7
Maximum number of months:  36
Minimum number of months:  4
Average number of months:  16

For participants who answered 4:

Number of participants:  1
Number of months:  16

For participants who answered 3:

Number of participants:  4
Maximum number of months:  16
Minimum number of months:  8
Average number of months:  13

For participants who answered 2:

Number of participants:  1
Number of months:  5

For participants who answered 1 (strongly disagreed that they learned things they didn’t already know):

Number of participants:  1
Number of months: 16

So there’s no evidence here that participants with more experience felt they learned less from the peer grading.

This was one of those questions where participants were asked to expand if they answered 3 or above.  Here are some juicy morsels:

If you answered 3 or greater to the question above, what did you learn?

I learned some tricks and shortcuts of coding that make the solution more elegant and sometimes shorter.

…it showed me how hard some code are to read since I do not know what is in the programmer’s head.

I learned how different their coding style are compared to mine, as well as their reasoning to the assignment.

l learned about how other people think differently on same question and their programming styles can be different very much.

one of the codes I marked is very elegant and clear. It uses very different path from others. I really enjoyed that code. I think good codes from peers help us learn more.

I didn’t know about the random.shuffle method.  I also didn’t know that it would have been better to use Exceptions which I don’t really know.

The different design or thinking towards the same question’s solution, and other ways to interpret a matter.

Other people can have very convoluted solutions…

Different ways of solving a problem

A few Python shortcuts, especially involving string manipulation. As well, I learned how to efficiently shuffle a list.

algorithm (ways of thinking), different ways of doing the same thing

Sometimes a few little tricks or styles that I had forgotten about.  Also just a few different ways to go about solving the problem.

So what conclusions can I draw from this?

It looks like, regardless of experience, students seem to think peer grading teaches them something – even if it’s just a different design, or an approach to a problem.

Because I saw and graded my peers' work, I believe I know more about the quality of my own work.

Another clear signal in the “strongly agree” camp.  This one is kind of a no-brainer though – seeing work by others certainly gives us a sense of how our own work rates in comparison.  We do this kind of comparison all the time.

Anyhow, my participants seem to agree with that.

I am interested in knowing how my peers graded me.

Again, a lot of agreement there.  Students are curious to know what their peers think of their work.  They care what their peers think.  This is good.  This is important.

I would have written the code for my first assignment differently if I had seen the rubric beforehand.

Hm.  More of a mixed reaction here.  There’s more weight on the “strongly agree” side, but not a whole lot more.

This is interesting though.  If I find that my treatment group does perform better on their second assignment, is it possible that their improvement isn’t from the grading, but rather from their intense study of the rubric?

So, depending on whether or not there’s an improvement, my critics could say I have a wee case of confounding-factor syndrome here.

And I would agree with them.  However, I would also point out that if there was an improvement in the treatment group, it wouldn’t matter what the actual source of the learning was – the peer grading (along with the rubric) caused an improvement.  And that’s fine.  That’s an OK result.

Of course, this is all theoretical until I find out if there was an improvement in the treatment group grades.  Stay tuned for that.

Anyhow, this was another one of those questions where I asked for elaboration for answers 3 and up.  Here’s what the participants had to say:

If you answered 3 or greater to the question above, what would you have done differently?

I would have checked for exceptions (and know what exceptions to check). I would have put more comments and docstrings into my code. I would have named my variables more reasonably.

I would’ve wrote out documentation. (ie. docstrings) Though I found that internal commenting wasn’t necessary.

i’ll add more comments to my code and maybe some more exceptions.

Added comments and docstrings.

Code’s design, style, clearness, readability and docstrings.

Made more effort to write useful docstrings and comments

I would’ve included things that I wouldn’t have included if I was coding for myself (such as comments and docstrings).

Added more documentation (I forget what it’s called but it’s when you surround the comments with “” ”’ “”)

Written more docstrings and comments (even though I think the code was simple enough and the method names self-explanatory enough that the code didn’t need more than one or two terse docstrings).

I forgot about docstrings and commenting my code

So it sounds like my expectations for documentation weren’t clear enough in my assignment specification.  There’s also some indication that participants thought documentation wasn’t necessary if the code was simple enough.  With respect to docstrings, I’d have to disagree, since docstrings are overwhelmingly useful for generating and compiling documentation.  Those are just my own personal feelings on the matter, though.

During this experiment, I enjoyed seeing other students' assignments

Note: this is not to be confused with “I enjoyed grading my peers’ work”, which is the next question.

Mostly agreement here.  So that’s interesting – participants enjoyed the simple act of seeing and reading code written by their peers.

I enjoyed grading my peers' work.

It looks like, in general, students don’t really enjoy grading their peers’ code. Clearly, it’s not a universal opinion – you can see there’s some disagreement in the graph.  Still, the trend seems to go towards the “strongly disagree” camp.

That’s a very useful finding.  There’s nothing worse than sweating your butt off to design and construct a new task for students, only to find out that they hate doing it.  We may have caught this early.

And I don’t actually find this that surprising:  code review isn’t exactly a pleasurable experience.  The benefits are certainly nice, but code review is a bit like flossing… it just seems to slow the morning routine down, regardless of the benefits.

Here’s what some participants had to say about their answers:

If you answered 3 or greater to the question above, why did you enjoy grading your peers’ work?

Because I like to compare my thoughts and other people’s thoughts.

well, some of the codes are really hard to read. But I did learn something from the grading. And letting students grade the codes is more fair.

I got to see where I went wrong and saw more creative/efficient solutions which will give me ideas for future assignments. But otherwise it was really boring.

So that I can learn from my peer’s thinking which gives me more diversity of coding and problem-solving.

Sometimes you see other student’s styles of coding/commenting/documenting and it helps you write better code. Sometimes you learn things that you didn’t know before. Sometimes it’s funny to see how other people code.

It was interesting to see their ideas, although sometimes painful to see their style.

not so much the grading part, but analyzing/looking at the different ways of coding the same thing

It gave me a rare prospective to see how other people with a similar educational background write their code.

Makes you think more critically about the overall presentation of your code.  You ask yourself : “What would someone think of my code if they were doing this?  Would I get a good mark?”

I found grading my peers' work difficult.

This one is more or less split right down the middle, with a little more weight on the agree side.

Again, participants who answered 3 or above were asked to elaborate.  Here are some comments:

If you answered 3 or greater to the question above, what about grading your peers’ work was difficult?

The hardest part was trying to trace through messy code in order to figure out if it actually works.

Emotionally, I know what the student is doing but I have to give bad marks for comments or style which makes me feel bad. Sometimes it is hard to distinguish the mark whether it is 3 or 4. The time was critical (did not have time to finish all papers) which might result in giving the wrong mark. I kept comparing marks and papers so I could get almost the fairest result between all students. It is hard to mark visually, i.e. not testing the code. Some codes are hard to read which make it hard for marking and I can assume it is wrong but it actually works.

Giving bad marks are hard!  Reading bad code is painful!  It wasn’t fun! 🙁

It just became really tedious trying to understand people’s code.

To test and verify their code is hard sometimes as their method of solving a problem might be complicated. I need to think very carefully and test their code progressively.

The rubric felt a little too strict. Sometimes a peer’s code had small difficulties that could easily be overcome, but would be labeled as very poor. Also, the rubric wasn’t clear enough, especially on the error handling portions and style. There could be many ways of coding for example the __str__ functions (using concatenation versus using format eg. ‘ %s’ % string as opposed to using + str(string) +)

I just found it hard to read other’s code because I already have a set idea of how to solve the problems. I did not see how the solutions of my peers would’ve improved my own solutions, so I did not find value in this.

Reading through each line of code and trying to figure out what it does

Reading through convoluted, circuitous code to determine correctness.

Not every case is clear-cut, and sometimes it’s hard to decide which score to give.

Being harsh and honest.  I guess it’s good not to ever meet the people who wrote the codes (unlike TAs) because they aren’t there to defend themselves.  Saves some headaches 🙂

I'm confident that the grading I did was fair.

Ok, more or less agreement across the board here.  At least, no disagreement.  But no strong agreement, either.  It’s sort of a lethargic “meh” with a flaccid thumbs up.

The conclusion?  My participants felt that, more or less, their grading was probably fair.  I guess.

Because I knew that my peers would be seeing and grading my code for the first assignment, I coded it differently than I would have normally.

Now this one…

This one is tricky, because I might have to toss it out.  Each one of my participants was told flat out that other participants in the study might or might not see their code.  This was true, since the graders were also participants in the study.

However, I did not outright tell them that other participants would be grading their code for the first assignment.  So I think this question may have come as a surprise to them.

That was an oversight on my part.  I screwed up.  I’m human.

The two lone participants who answered 3 or above wrote:

If you answered 3 or greater to the question above, what did you do differently?

Making the docstring comments more clear, simplifying my design as possible, writing in a better style.

Added a bit more comments to explain my code in case peers don’t understand.

Anyhow, so those are my initial findings.  If you have any questions about my data, or ideas on how I could analyze it, please let me know.  I’m all ears.