
alertCheck Grows Up

Remember that Firefox add-on I wrote up a while back – the one that allows you to suppress annoying alert popups?

Some updates:

  • alertCheck just went public on Mozilla Add-ons after being reviewed by an add-ons editor
  • As of this writing, alertCheck has 740 downloads
  • As of this writing, alertCheck has 4 reviews – all positive

Not bad for my first add-on!
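
For the curious: I won’t paste alertCheck’s actual source here, but the core trick can be sketched in a few lines.  The toy TypeScript below is purely illustrative – the threshold and the confirm() stand-in for the dialog checkbox are made up, and all the real extension plumbing is ignored:

```typescript
// Toy sketch only – NOT alertCheck's real code.  The idea: wrap
// window.alert, and once a page has fired a few alerts, ask the user
// whether further ones should be suppressed.

const ALERT_LIMIT = 3; // hypothetical threshold, invented for this sketch
let alertCount = 0;
let suppressed = false;

const nativeAlert = window.alert.bind(window);

window.alert = (message?: string): void => {
  if (suppressed) {
    return; // swallow the popup entirely
  }
  alertCount += 1;
  nativeAlert(message);
  if (alertCount >= ALERT_LIMIT) {
    // A real add-on would offer this choice on the alert dialog itself;
    // confirm() merely stands in for that UI here.
    suppressed = window.confirm(
      "This page keeps raising alerts.  Suppress further alerts?"
    );
  }
};
```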

Learning to Read Things, and Code as Art

A while back, I went on a little rant about how CS students don’t learn how to read and critique code, and that because of this, they’re missing out on a huge opportunity for learning.  They’re not exercising their abilities to comprehend other people’s code.

But let’s step back for a second.  Forget code for a minute, and let’s just think about reading in general.

Check out this article that Greg Wilson forwarded to Zuzel and me.

The take-home message:  reading is not a skill that can be sharpened in a vacuum.  One of the keys to increasing reading comprehension is to increase the variety of reading material you practice with.  That variety builds the foundation needed to understand more sophisticated material.

Actually, this really doesn’t come as a surprise to me.  In the Drama department, before diving into a new script, we are encouraged to briefly study the time period (slang, accents, and turns of phrase in particular) and the subject matter of the play.  This builds a foundation of context, so that the script makes some sense the first time it is read.  And the first time a script is read is very important, because for me,  it lays down many of the assumptions that I carry throughout my time working with the play.  Every subsequent reading of that play builds off of the first reading.

And have you ever read T.S. Eliot’s The Waste Land?  Good luck making sense of that unless you have a study guide, because that guy references a ton of stuff, and expects you to fill in the gaps.

So let’s direct this back to reading source code.

Taking that article (and this video) into consideration, it seems that just slapping some code up on a projector and getting people to talk about it might not be good enough.

Something like this might be a better idea:  check out this post by Jason Montojo.  If his CS art history course ever takes off, I’ll be first in line to sign up.

The idea of studying code as art is intriguing…maybe I’m not the only one who thinks of it like sculpture.

Exploring Peer Review in the Computer Science Classroom: Part 2 (Exciting Conclusion)

Now, where was I?

Oh yeah, I was reviewing this paper, and getting right to the good part – the experiment method, data, and results.

I hate to disappoint you, but this section starts as a bit of a downer:

…Unfortunately, the data collected was spotty, to say the least, and was not linked together well enough to support a reasonable, detailed analysis to meet our goals.

Yikes.  Oh well, let’s see what happened…

One issue we discovered was that our design was too broad to give definitive results.  We did, however, have enough data to allow us to narrow down the focus for a second study.  From the analysis of one class, we were able to identify two interesting areas for further research.  The type of review appears [sic] have a significant effect on the length and focus of the review.  We also found evidence that students reviewed some of the concepts differently than they did others.  Both of these findings should be explored in the future work.

Good.  At least there’s some groundwork for somebody to do a future study.

Reading on, I’m impressed with the scope of the experiment.  For example, instead of just experimenting on a single classroom, they ran the study across 8 classrooms, each with between 10 and 60 participants.  Ambitious.  Nice.

While their experiment may have been too broad to get the results they were looking for, it certainly has more authority than the studies that just used a single, small class.

However, maybe I spoke too soon:

The data collected for this study did not occur as smoothly as we would have wished. … As a result, we were able to collect a large amount of data but it is not as complete as we would have liked.

Hm.  Doesn’t sound that great.

Apparently, data was supposed to be collected from surveys, review rubrics, and questionnaires, but it looks like the questionnaire data kind of fell through:

The number of responses to the questionnaires was low.  Three classes had no post-questionnaire responses at all.  Of those classes that did have responses for the second questionnaire, the number was too small to lend any confidence to a statistical analysis…

Review rubric data was a little more interesting – 996 completed rubrics were collected from 299 reviewers across the 8 classes over the course of the study.  Nice.  But, again, it wasn’t all flowers and hugs:

While the amount of data was large, it is also incomplete.  Of those classes which were intended to have both training and a peer reviews [sic], three of them were not able to complete the second review assignment and, so, have nothing to be compared to.  Two of the other classes have only a moderate number of participants (15-20) which is not as high as we would have liked for our statistical analysis.  One class provided no viable information at all…

Things just seem to get worse and worse for these guys.

And, not to kick them while they’re down, but their writing seems to get worse and worse too.  I’m noticing more typos and tense errors as I go along.  Maybe the stress was getting to them…

So, what did they find?  Drum roll, please…

Final Results

Surprise!  It’s inconclusive!

It sounds like they found more questions than answers…

Is it type or order (or both) that is causing the effects the [sic] training and peer review?

It took me a little while to figure out what they were asking here.  Apparently, before engaging in peer review exercises, students would critique material that was provided by the instructor.  It seems that this training review step, on average, generated more verbose comments (though not necessarily more relevant ones) than the peer review step.  But the peer review step tended to produce more relevant comments.  The experimenters have a couple of theories on why that is:

  1. Social pressure could cause students to be less verbose in their critiques of one another.
  2. Increased learning after the training exercise could cause the students to be more succinct and precise in their reviews.
  3. Students were more engaged in the training exercise, and felt more inclined to be verbose (even though they were less precise).

Unfortunately, the experimenters note, a lot rests on the motivation and attitude of the students, which wasn’t really considered or measured during the design of the experiment.  So, to break it down simply: the type of review (training vs peer) and the order (training first, then peer review) caused some numbers to change in their tables…but they don’t really know why.

They had questions on other things too…

Why are there differences in how concepts are reviewed?

  • Are there differences in conceptual difficulty?
  • Do the reviews improve student learning of these concepts?

The three CS topics that were focused on during these courses were the OOP concepts of Abstraction, Decomposition, and Encapsulation.  The experimenters also theorized that successful reviews go through the following steps:

  1. Analysis
  2. Evaluation
  3. Explanation
  4. Revision

(Unspecified) variations in how the students used these steps, and how verbose they were at each step, caught the experimenters’ attention.  They wonder if this has something to do with the conceptual difficulty of each topic, or if the reviews were affecting the students’ understanding over time.

More questions they brought up…

Is reviewing an engaging and interesting task in computer science?

Very good question.  The experimenters noted that they had no measure of the students’ interest, feelings, and engagement in the reviewing process.  They add that it is important to look at these attitudes over time for improvements or problems.

Are there significant learning benefits to reviewing in the early computer science curriculum as compared to other, common homework/lab exercises?

I’ll let them explain this one:

While we have identified a number of potential benefits from reviewing, we have not shown that it is better than or as good as what we currently do.  We require some sort of baseline to compare our efforts to.  We need a control group in our experiments in order to judge effectiveness.

And then the paper pretty much ends.

Where To Go From Here

The authors do a good job of lining up some interesting questions towards the end.  I guess this is how you salvage an experiment that didn’t go as planned – find the deeper questions, and see if somebody else can do a better job.

Or maybe, if you give the authors enough time, they’ll try to do the better job themselves. I think I’ve found the next paper to review.

Exploring Peer Review in the Computer Science Classroom: Part 1

Exploring Peer Review in the Computer Science Classroom

by Scott Turner and Manuel A. Pérez Quiñones

I’m new at reading papers, so I’ve gotten used to 5- or 10-pagers.  This looks like the big one, though – 69 pages.

I assume they have something significant to say.  The title sure sounds interesting, especially considering what I’m looking for.

Anyhow, it’s a big paper, and there’s a lot to go through.  Let’s get started.

Right off the bat, I can see that they’re interested in the same problem that I am:

Peer review, while it has many known benefits (Zeller 2000; Papalaskari 2003; Wolfe 2004; Hamer, Ma et al. 2005; Trytten 2005), and is used extensively in other fields (Falchikov and Goldfinch 2000; Topping, Smith et al. 2000; Liu and Hansen 2002; Dossin 2003; Carlson and Berry 2005) and in industry (Anderson and Shneiderman 1977; Anewalt 2005; Hundhausen, Agrawal et al. 2009), is not as widely used in the computer science curriculum.  This may be due, in part, to a lack of information about what, who and when to review in order to achieve specific goals in computer science.

This next part got my attention:

That is not to say that the literature is silent on these issues.  The studies make these choices but there are few reasons given for the decisions and even fewer comparisons performed to show relative value of those options.

Holy smokes.  A pretty bold critique of those previous papers (at least one of which I’ve already reviewed).  It sounds like Scott and Manuel were as disappointed in some of the peer code review literature as I was.  If I were in an audience being read this paper aloud, I would “wooo!” at this point.

What is needed is a clearer understanding of the requirements that the discipline imposes on the peer review process so that it may be effectively used.

Cool – I’m looking forward to seeing what they dug up.

Reading onwards through the introduction, I’m seeing the same basic arguments for peer code review that I’ve seen elsewhere.  I’ll summarize:

  • PCR (peer code review) involves active use of the higher levels of Bloom’s Taxonomy:  analysis, synthesis, and evaluation, both for reviewers and review-ees
  • PCR prepares students for industry, since code review is (or should be) a common part of professional software development

Soon after, a good point is brought up – PCR is potentially a beneficial learning activity, but it all depends on the goals of a particular assignment.  A particular review structure may be better for improving code quality, and another might be better for increasing student motivation.  These considerations need to be taken into account when choosing a PCR structure.

By PCR structure, I think the writers mean:

  • What is being reviewed – source code vs design diagrams, for example
  • Who is being reviewed – students could review one another, or they could all review a piece of code provided by the instructor
  • When the review occurs – what level of students should take part in PCR?  Is there a minimum grade level that must be achieved before PCR is effective?  And when, during a project, should PCR happen?  Early in the design process, or after-the-fact – like a “code autopsy”?

These are good questions.  No wonder this paper is so long – I think their experiment is going to try to answer them all.
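
Just to make that design space concrete for myself, here’s how those three knobs might be written down as a tiny TypeScript data type.  To be clear, this is my own sketch – none of these names come from the paper:

```typescript
// Hypothetical encoding of the "what / who / when" design space.
// All names and values are mine, invented for illustration.

type ReviewArtifact = "source-code" | "design-diagram";
type ReviewTarget = "peer" | "instructor-provided-code";
type ReviewTiming = "early-design" | "mid-project" | "post-mortem";

interface PcrStructure {
  what: ReviewArtifact; // what is being reviewed
  who: ReviewTarget;    // whose work the students review
  when: ReviewTiming;   // where in the project the review happens
  goal: string;         // e.g. "code quality" vs "student motivation"
}

// One concrete configuration: the "code autopsy" case from above.
const codeAutopsy: PcrStructure = {
  what: "source-code",
  who: "peer",
  when: "post-mortem",
  goal: "code quality",
};
```

Even this toy version allows 2 × 2 × 3 = 12 combinations, which gives a sense of how big the space they’re exploring really is.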

And just when things are starting to get good…they go into a literature review.  Yech.  I know it’s important to lay the groundwork, but I think I just felt my eyes turn to oatmeal.

The literature review starts by briefly listing those sources again – past papers that have tried to deal with this topic.  The authors categorize them based on what each paper tries to deliver, and then give them a light slam for not having a scientific basis on which to form their PCR structures.  I heartily agree.

The first part of the literature review discusses the benefits of peer review in other fields.  Papers as far back as the 1950s are cited.  I think it goes without saying that having other people critique your work can be a great way of receiving constructive feedback (ask any playwright, for example).  I guess these fellas feel they have to be pretty rigorous though, and really ground their argument in some solid past work.  Power to them.

An interesting notion is brought up – peer reviews over an extended length of time within the same groups help cultivate “interaction between…peers and for the building of knowledge”.  One-time reviews, however, “[lend themselves] more to a cognitive approach…more attention can be paid to the changes in the students’ thought processes”.  Hm.

This fellow named Topping seems to be quite popular with these guys.  Apparently, he came up with something called “Topping’s Peer Review Typology”, and I get the feeling that he is one of their primary sources for figuring out different ways of constructing PCR structures.

Oh, and an important morsel just got snuck in:

We are interested in how the student is affected by the acts of reviewing and being reviewed rather than by the social interaction occurring during the process.

So that sense of community that that other paper was talking about – not being looked at here.

The paper then goes on to chisel down some of the jargon from Topping’s typology, and make it fit the field of Computer Science.  By doing this, the writers simply reiterate the variables they’re going to be playing with:  what, who, and when.

The paper then dives into a two page summary of some peer review papers, and the various results that they found.  They note a few instances where peer review seemed to improve student performance, and other cases where peer review resulted in semi-disaster.  There are just as many theories as there are conflicting accounts.  From reading this stuff, it seems that peer review is a vast and complicated topic, and nobody seems to really have a firm grasp on it just yet.

Likewise, rubric creation seems to be a bit of a contentious topic:

While there seems to be a general consensus that rubrics are important and that they improve the peer review activity, there is not as much agreement on how they should be implemented.

However, it is clear that rubrics are useful in peer review for novice reviewers:

Rubrics can supply the guidance students need to learn how to evaluate an assignment.  It provides the needed scaffolding until the students are comfortable with the process and the domain to make correct judgements.

Makes sense to me.

The paper then asks an important question: what makes a “successful” review in the education context?  Greater understanding?  Learning new concepts?  Improved grades?  Better designs?  Fewer code defects?

The answer:  it really depends on the instructor, and what the course wants from the PCR process.  For example, what is more important – having the reviewer learn to review?  Or having the review-ee receive good feedback?  Or both?  PCR is complex, and has lots of things to consider…

The next section highlights how technology is used to support peer review.  One particularly interesting example is a Moodle module that allows for peer review of assignments.  Apparently, the authors are fans.  I’ve never used Moodle before, and haven’t yet found the module that they’re talking about, but it sounds worth investigating.

The very next section details their experiment – their method, their data, and their results.  However, this post is getting a bit long, so I’m going to stop here, and continue on in a second post.

Stay tuned for the exciting conclusion!

Integrating Pedagogical Code Reviews into a CS 1 Course: An Empirical Study

Integrating Pedagogical Code Reviews into a CS 1 Course: An Empirical Study

by Christopher Hundhausen, Anukrati Agrawal, Dana Fairbrother, and Michael Trevisan

Here’s another paper talking about using code review in a first year programming course.  Let’s give it a peek.

The idea of this paper is to introduce a type of “studio-based” learning environment for programmers.  The term originates from architecture, where architecture students bring various designs into class, and then the classroom as a unit critiques and evaluates the design.  In the studio-based learning model, the instructor guides the critiques.  This way, not only does the critiqued student get valuable feedback, but the critics also learn about what to critique.  Remember when I talked about learning how to read code? Studio-based learning seems to be one solution to that problem.

Studio-based learning could also help foster community among the students.  In a field where isolated cubicles are still quite normal (I’m sitting in one right now!), a sense of community could be a welcome relief.

The paper also notes the following:

…the peer review process can (a) prepare students to deal with criticism [1], teach students to provide constructive criticism to others [1], provide students with experience with coming to a consensus opinion in cases where opinions differ [1], and build teamwork skills [8].  All of these are “soft” or “people” skills that are likely to be important in their future careers.

I whole-heartedly agree.

The paper then goes on to give a quick survey of how peer review (not just for code, but peer review in general) is currently used in education:

  • pair programming (which is what I experienced at UofT)
  • “in-class conference” model
  • peer feedback meetings (mostly for project documentation)
  • web-based peer grading solutions (RRAS, and Peer Grader)
  • peer code review

Maybe I’m just getting tired of reading these papers, but I found this section particularly difficult to get through.  I had to read it 3 or 4 times just to understand what was going on.

Finally, they get on with it, and talk about their approach to peer code review – a student-oriented version of a formal code review.  Yikes – if by formal code review they mean a Fagan Inspection, then these poor students are in for some long meetings…

Here is the 5-step outline of their approach (I’ll sketch the moving parts in code right after the list):

  1. Plan the inspection of a specific piece of code.
  2. Hold a kick-off meeting with an inspection team to distribute the code to be inspected, train the team on the process, and set inspection goals.
  3. Have members of the inspection team inspect the code for defects on their own time.
  4. Hold an inspection meeting to log issues found by individual members’ inspections, and to find additional issues.
  5. Edit the code to address the issues uncovered in the inspection, and verify that the issues have been resolved.
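
Just to picture those moving parts, here’s a toy TypeScript model of that workflow.  It’s my own invention – the names, fields, and phases below are not from the paper:

```typescript
// Purely illustrative model of the five-step flow above.
// Nothing here comes from the paper itself.

type InspectionPhase =
  | "planning"              // step 1
  | "kick-off"              // step 2
  | "individual-inspection" // step 3
  | "inspection-meeting"    // step 4
  | "rework";               // step 5

interface Issue {
  description: string;
  foundBy: string; // inspector's name
  resolved: boolean;
}

interface Inspection {
  codeUnderReview: string; // e.g. a file or snippet identifier
  team: string[];
  phase: InspectionPhase;
  issueLog: Issue[];
}

// Step 4: the meeting merges the issues each inspector logged on their own.
function holdInspectionMeeting(inspection: Inspection, found: Issue[]): void {
  inspection.issueLog.push(...found);
  inspection.phase = "inspection-meeting";
}
```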

Yep, that sounds a lot like a Fagan Inspection.  As I’ve already found out, there are lighter, faster, and just-as-effective approaches to peer code review…

After using their approach, their study noticed:

  1. A positive trend in the quality of the students’ code, and a negative trend in the number of defects found per review
  2. Discussion transitioned from syntax and style to higher-level concepts, such as the architecture and design of the system.  Thus, meaningful discussion was generated using their technique.  The study notes, however, that “…we cannot provide any objective evidence that students were able to provide helpful critiques of each other’s code within the code reviews themselves.”
  3. There is anecdotal evidence suggesting that this work helped to create more of a community feel among the students in the class

Sounds like a lot of anecdotal evidence.

I don’t know – reading the results of this paper, I was disappointed.  The evidence/data they collected feels light and insubstantial.  I certainly support what the study was trying to do, and maybe I’ve read too many peer review papers, but I don’t find it surprising anymore to hear that peer code review improves students’ code.  It’s good to have evidence for that, but I was hoping for something more.

The paper closes with the writers almost agreeing with me:

There are several limitations to our results that suggest the need for more rigorous follow-up studies with both larger student samples and additional data collection methods.

I knew it wasn’t just me!

Anyhow, the paper finishes with the writers outlining how they’d do future studies.

And that’s that.