(Click here to read the first part of the story)

I’m just going to come right out and say it: I’m no stats buff.

Actually, maybe that’s giving myself too much credit. I barely scraped through my compulsory statistics course. In my defense, the teaching was abysmal, and the class average was in the sewer the entire time.

So, unfortunately, I don’t have the statistical chops that a real scientist should.

But, today, I learned a new trick.

### Pearson’s Correlation Coefficient

Joordens and Paré gave me the idea while I was reviewing their paper for the Related Work section of my thesis. They used it to inspect mark agreement between their expert markers.

In my last post on Grader agreement, I was looking at mark agreement at the equivalence level. Pearson’s Correlation Coefficient should (I think) let me inspect mark agreement at the “shape” level.

And by shape level, I mean this: if Grader 1 gives a high mark for a participant, then Grader 2 gives a high mark. If Grader 1 gives a low mark for the next participant, then Grader 2 gives a low mark. These high and low marks might not be equal, but the basic shape of the thing is there.
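That shape idea can be shown with a quick sketch. The marks below are made up for illustration: Grader 2 is consistently harsher than Grader 1, so the marks never match, but they rise and fall together, and Pearson’s r comes out high.

```python
# A minimal sketch with made-up marks: Grader 2 is consistently harsher,
# but the "shape" of the marks matches, so Pearson's r is close to 1.
from statistics import mean, stdev

grader1 = [9, 4, 8, 3, 7, 5]   # hypothetical marks out of 10
grader2 = [7, 2, 6, 2, 5, 3]   # lower overall, but same ups and downs

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

print(round(pearson_r(grader1, grader2), 2))  # high, despite no exact matches
```

So perfect mark-for-mark agreement isn’t required for a strong correlation; a constant offset between Graders barely dents r at all.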

And this page, with its useful table, tells me how to determine whether the correlation coefficient I find is significant. Awesome.

At least, that’s my interpretation of Pearson’s Correlation Coefficient. Maybe I’ve got it wrong. Please let me know if I do.

Anyhow, it can’t hurt to look at some more tables. Let’s do that.

### About these tables…

As in my previous post on graders, I’ve organized my data into two tables – one for each assignment.

Each table has a row for each of that assignment’s criteria.

Each table has two columns – the first lists the assignment criteria, and the second gives the Pearson correlation coefficient for each criterion. The correlation is measured between the marks that my two Graders gave on that criterion across all 30 submissions for that assignment.

I hope that makes sense.

Anyways, here goes…

### Da-ta!

#### Decks and Cards Grader Correlation Table

[table id=8 /]

#### Flights and Passengers Grader Correlation Table

[table id=9 /]

### What does this tell us?

Well, first off, remember that for each assignment, for each criterion, there were 30 submissions.

So N = 30.

To determine whether the correlation coefficients are significant, we look at this table and find N – 2 down the left-hand side:

| df = N – 2 | p < .10 | p < .05 | p < .02 | p < .01 |
|:----------:|:-------:|:-------:|:-------:|:-------:|
| 28         | .306    | .361    | .423    | .463    |

Those 4 values on the right are the critical values that we want to pass for significance.
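The lookup is mechanical enough to sketch in code. This uses the df = 28 critical values from the table above and just reports the smallest p threshold an observed r clears (the threshold values here are the ones for this row only; other df rows would need their own entries).

```python
# Two-tailed critical values of Pearson's r for df = N - 2 = 28,
# taken from the significance table discussed above.
CRITICAL_28 = {0.10: 0.306, 0.05: 0.361, 0.02: 0.423, 0.01: 0.463}

def significance_level(r, crit=CRITICAL_28):
    """Return the smallest p threshold that |r| clears, or None if none."""
    passed = [p for p, c in crit.items() if abs(r) >= c]
    return min(passed) if passed else None

print(significance_level(0.89))  # clears every threshold -> 0.01
print(significance_level(0.30))  # below .306 -> None (not significant here)
```

This is exactly the binning used in the “significance by level” headings below: a coefficient gets filed under the strictest threshold it passes.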

Good news! **All** of the correlation coefficients exceed at least the lowest critical value, .306. So now, I’ll show you their significance by level:

#### p < 0.10

- Design of `__str__` in Decks and Cards assignment

#### p < 0.05

- Design of `deal` method in Decks and Cards assignment

#### p < 0.02

- Design of `heaviest_passenger` method in Flights and Passengers

#### p < 0.01

##### Decks and Cards

- Design of `Deck` constructor
- Style
- Internal Comments
- `__str__` method correctness
- `deal` method correctness
- `Deck` constructor correctness
- Docstrings
- `shuffle` method correctness
- Design of `shuffle` method
- Design of `cut` method
- `cut` method correctness
- Error checking

##### Flights and Passengers

- Design of `__str__` method
- Design of `lightest_passenger` method
- Style
- Design of `Flight` constructor
- Internal comments
- Design of `add_passenger` method
- `__str__` method correctness
- Error checking
- `heaviest_passenger` method correctness
- Docstrings
- `lightest_passenger` method correctness
- `Flight` constructor correctness
- `add_passenger` method correctness

**Wow!**

### Correlation of Mark Totals

Joordens and Paré ran their correlation statistics on assignments that were marked on a scale from 1 to 10. I can do the same type of analysis by simply running Pearson’s on each participant’s total mark from each Grader.

Drum roll, please…

#### Decks and Cards

r(28) = 0.89, p < 0.01

#### Flights and Passengers

r(28) = 0.92, p < 0.01

**Awesome!**

### Summary / Conclusion

I showed previously that my two Graders rarely agreed mark for mark, and that one Grader tended to give higher marks than the other.

The analysis with Pearson’s correlation coefficient seems to suggest that, while there isn’t one-to-one agreement, there is certainly a significant correlation – with the majority of the criteria showing a correlation at **p < 0.01!**

The total marks also show a very strong, significant, positive correlation.

Ok, so that’s the conclusion here: **the Graders’ marks do not match, but show moderate to high positive correlation to a significant degree.**

### How’s My Stats?

Did I screw up somewhere? Am I making fallacious claims? Let me know – post a comment!