Monthly Archives: September 2009

Do Peer Code Reviews Seem Incompatible With The Traditional Classroom?

I recently wrote a post asking the question “why aren’t code reviews part of every undergraduate computer science curriculum?” Gregg Sporar responded by saying:

I don’t know of many professors who are really “into” code review.

This is so strange.

You would think that a process that provably helps reduce code defects earlier in development would be right up there in intro programming courses, along with “use a debugger instead of sprinkling print statements everywhere”.

So, what gives?

Well, one thought is that perhaps these courses are structured so that peer code review would completely undermine the ability to evaluate students fairly. I’ve been in plenty of classes where I’ve been instructed never to show my code to anyone else. We can vaguely discuss issues on the course bulletin board, but absolutely no code can be discussed directly. For these assignments, teamwork, plagiarism, and cooperative learning are all big no-nos.

At first glance, it’s perfectly understandable:  it seems like the only way to fairly evaluate the abilities of each individual student. But still, it seems like that particular teaching model is really missing out on a huge opportunity to give its students valuable feedback.

I have to make some corrections, though. I haven’t been completely honest. I’ve made it seem like I’ve never had any code review done in my courses at UofT, and that’s not entirely true. Quick examples: in CSC180, I did pair-programming with another student – and pair-programming is certainly a type of peer code review. In CSC301, my team would rigorously review the diffs of each other’s repository commits, looking for flaws and emailing defect reports back and forth. We weren’t explicitly instructed to do this, but it certainly seemed like a good idea at the time for the success of the project.

But it’s nothing like this.  Look at this assignment from Carnegie Mellon.  What a great idea for an assignment!  How come this type of assignment seems so rare?

Back when I was an Electrical and Computer Engineering undergrad, one of my professors once told me that “great programmers write great code because they read great code”.  I think this is something that was missing in my formal education: how to read, review, analyze, and critique code. And above all, what beautiful, elegant code looks like!

In my academic Drama classes, we pored over tons of famous, landmark plays.  And of course we tore them apart, boiling them down, analyzing them, critiquing and debating, arguing our pants off…  I got good at reading them.  My critical evaluation skills in this area are pretty sharp.  I can tell when I’m reading something really well done.  And, because of this, given some practice, I could probably write something half-decent if I put my mind to it.

So I think that’s something I’m missing from my computer science education.  Perhaps there’s a good reason for its absence – but if there is, I don’t know it.  Instructor-moderated peer code reviews, or (even better!) code reviews of industry software…that sounds juicy.  That sounds useful.  That sounds like valuable feedback and instruction.

It’s too bad it never happened.

Anyhow, I haven’t answered my question yet.  I still need to finish my review of Smart Bear’s Cisco study.  I’ve also found a case study where peer code reviews were used as part of an introductory computer programming course – I’m interested to see what they found out.

Taking a peek at Jason Cohen, Smart Bear, and Code Collaborator

Check this out.

Hang on, hang on, let me back up.

Jason Cohen.  Is that name familiar?  If it isn’t, he’s the guy who founded Smart Bear Software, makers of Code Collaborator, a “web-based tool that simplifies and expedites peer code reviews.”  Here’s the sales pitch.

I’m posting all of this because of what I’m looking for – papers concerning code review, and the relationship (or lack thereof) between code review techniques and computer science education.

Which all traces back to that first link I posted – a series of whitepapers and articles on Code Collaborator, and code reviews in general.  This is good stuff.  Sure, it’s not really oriented around computer science education, but these people seem to know what they’re talking about.

There’s even a free book detailing their massive code review study – apparently the largest-ever case study of its kind.

I’ve ordered the free book, just for kicks.  In the meantime, I’m going to glance over their Cisco study.

“Code reviews” by Arjen Markus (2009)

Code Reviews

by Arjen Markus
Deltares, The Netherlands
ACM Fortran Forum, 28(2), August 2009

This is one of the first papers I found.  Consider it my “warm up” paper.

According to the header, Arjen Markus works for “Deltares”, and after a quick Google-hunt, I found out that Deltares is a “new independent Dutch institute for national and international delta issues”.

Upon closer inspection, it seems that Markus’ paper is concerned with what reviewers should be looking for during code reviews:

“What should you be looking for in the code?  It is not enough to check that the code adheres to the programming standard of the project it belongs to.  Such a standard may not exist, be incomplete or be focussed on layout, not on questionable constructs that are a liability.  With this article I would like to fill in this practical gap, at least partly.” (Page 4)

This isn’t exactly what I set out to look for, but I thought I’d give it a once-over anyways.

Markus’ paper is not what I would call a rigorous scientific publication.  There is no empirical data, no hypothesis, none of that good ol’ scientific method stuff.  Instead, it’s more akin to a “do” and “do not” set of advice and examples that one would find in a software engineering textbook.

A FORTRAN software engineering textbook, to be more precise.  Markus’ examples are all in FORTRAN.

Broken down simply, Markus has four principles, or bits of advice:

“The importance of being explicit”

Essentially, this means being clear about what you’re doing in the code.  It’s common sense stuff:  don’t be overly clever, be readable, don’t use magic numbers or strings, document your code, group related routines into the same modules, use information hiding in your modules when appropriate, write clear and precise error messages, etc.
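
To make that concrete for myself, here’s a tiny Python sketch of my own (Markus’ examples are all in FORTRAN, so this is just an analogous illustration – the names and numbers are invented):

    # Implicit and "clever": unexplained constants, a cryptic name,
    # and a silent None on bad input.
    def f(t):
        return t * 9.0 / 5.0 + 32 if t > -273.15 else None

    # Explicit: named constants, a docstring, and a loud, precise error.
    ABSOLUTE_ZERO_CELSIUS = -273.15
    FAHRENHEIT_SCALE = 9.0 / 5.0
    FAHRENHEIT_OFFSET = 32.0

    def celsius_to_fahrenheit(celsius):
        """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
        if celsius < ABSOLUTE_ZERO_CELSIUS:
            raise ValueError("temperature below absolute zero: %g C" % celsius)
        return celsius * FAHRENHEIT_SCALE + FAHRENHEIT_OFFSET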

“Don’t go your own way”

Markus advises developers to stick to an agreed coding standard / style guide.  Don’t reinvent the wheel – instead, use typical solutions to problems that arise.  “Don’t go against the grain” (Page 7).
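
Again, a quick Python illustration of my own (not from the paper) of what “don’t reinvent the wheel” looks like in practice:

    # Going my own way: a hand-rolled loop to find the longest name.
    def longest_name(names):
        longest = names[0]
        for name in names[1:]:
            if len(name) > len(longest):
                longest = name
        return longest

    # Going with the grain: the standard library already expresses this,
    # and any reviewer will recognize the idiom at a glance.
    def longest_name_idiomatic(names):
        return max(names, key=len)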

“Be careful out there”

Markus advises developers to watch out for documented language quirks, common language pitfalls, etc.  This is followed by numerous examples in FORTRAN.
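
I can’t reproduce the FORTRAN examples here, but the same class of advice applies to any language.  A classic Python equivalent, just to illustrate the kind of documented quirk Markus means, is the mutable default argument:

    # The default list is created once, when the function is defined,
    # so it is shared between calls – a well-documented Python quirk.
    def append_item(item, items=[]):
        items.append(item)
        return items

    print(append_item(1))  # [1]
    print(append_item(2))  # [1, 2] – surprising if you expected a fresh list

    # The conventional fix: use None as a sentinel and build the list inside.
    def append_item_fixed(item, items=None):
        if items is None:
            items = []
        items.append(item)
        return items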

“Curiouser and curiouser”

Markus asks us to keep an eye out for “a lack of attention to design, to readability and other aspects that are important for the program in the long run” (Page 11).  He also repeats a few things from “the importance of being explicit” – mainly, to make sure that the code is organized in a way that makes sense to the developers.

I don’t know.  I don’t think I’m the target audience here.  In hindsight, I found the information in this paper to be very general, and rather self-evident.  The only thing I seemed to learn was a bit about FORTRAN quirks.

I think I need to be less laissez-faire in my paper selections.  This one didn’t help me find what I was looking for, and I should have seen that from the abstract.  Bah.

EDIT:  Why did I waste my time searching ACM when more interesting information was waiting right under my nose?  I think I have an idea what to review next.

Code Reviews

My graduate supervisor has asked me to look into the following problem:

Code reviews. They can help make our software better. But how come I didn’t learn about them, or perform them in my undergrad courses?  Why aren’t they taught as part of the software engineering lifecycle right from the get-go?  I learn about version control, but why not peer code review?  Has it been tried in the academic setting?  If so, why hasn’t it succeeded and become part of the general CS curriculum?  If it hasn’t been tried, why not?  What’s the hold up?  What’s the problem?

I’m to dive into academic papers regarding the above, and blog about what I find out.

Stay tuned.

Adventure Games

I wouldn’t consider myself a “gamer” by any stretch of the imagination.

Just watch me try to play Halo – I’ll run around in circles, button-jamming before somebody mercifully puts me out of my misery.

And that’s common across games that involve me running around with a gun.  With the exception of Portal (which was incredible), the whole first person shooter genre kind of bores me.  I’m not really impressed by amazing 3d graphics, or physics/particle engines.  I just…don’t care.  I just don’t get the same pleasure out of blowing up and shooting enemies that other people seem to.

I like something a bit more cerebral.  I like story (which is why Portal is an exception).  I like puzzle solving.  I like thinking for a character, not just running around, pulling the trigger for a character.

And this is the way it’s always been for me.

But ever since I was a kid, I’ve had a real passion for adventure games. I used to play all of the old Sierra stuff…King’s Quest I-VI (before it turned lame – VII onward), the Space Quest series (5 being my favourite, but I have a soft spot for the original), the Gabriel Knight series…

And that’s just Sierra.  LucasArts brought us Sam and Max, Day of the Tentacle (a masterpiece), the Monkey Island series (also brilliant), The Dig (my personal favourite), Full Throttle, Grim Fandango…

There was Myst, and Riven. So good.  And I can’t forget Zork.

Simon The Sorcerer 1, 2, and 3…

I loved these games.  I still love these games.  I love being drawn so deeply into an interesting story, and having to rely on my wits and intelligence to solve problems.  I get a cerebral kick out of solving the various puzzles that the game designers throw at me.

And what these games all compel me to do, without fail, is to make one.  Make an adventure game.  Tell a good story.  Suck the player in, and throw puzzles, deep characters, and a thick plot at them.  After I finish one of these games, I usually end up Googling “how to make an adventure game”, and spending the rest of my night reading up on it.  And anytime I’m on a holiday break, and have nothing to do…I always gravitate to the subject.

I’ve read all about LucasArts’ SCUMM and GrimE engines…Sierra’s AGI and SCI engines… Adventure Soft’s AGOS engine…  and I’m grateful for the ScummVM and FreeSCI folk who share my passion, and have allowed me to play all of these old games on my Linux box.  Reading about these engines just makes me want to use them to build my own game.

There are a number of closed-source adventure game engines out there I could try – I hear the Wintermute engine is pretty good.  So is Adventure Game Studio.

But somehow…they never satisfy me.

The MAD adventure game engine is open source, and has some good ideas…but the code is also a bit of a mess, and hasn’t been maintained since 2003.

So I’ve always wanted to build my own adventure game engine, and solve all of the problems one would have to solve to make it work:  path finding, graphics, animation, scripting, layering/masking, fonts…  Lots of neat sub-problems.  It’d be hard, but I also think it’d be a lot of fun.
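
Just to give a flavour of one of those sub-problems, here’s a rough Python sketch of breadth-first path finding on a tile grid – the sort of thing a point-and-click engine needs when the player clicks somewhere and the character has to walk there.  The grid format and function name are made up for illustration; a real engine would probably want walk-boxes or A* with movement costs, but the core idea is the same.

    from collections import deque

    def find_path(grid, start, goal):
        """Shortest path on a tile grid, where '.' is walkable and '#' is blocked.

        start and goal are (row, col) tuples; returns a list of tiles from
        start to goal, or None if the goal can't be reached.
        """
        queue = deque([start])
        came_from = {start: None}  # remembers how we reached each tile
        while queue:
            current = queue.popleft()
            if current == goal:
                # Walk the chain of predecessors back to the start.
                path = []
                while current is not None:
                    path.append(current)
                    current = came_from[current]
                return path[::-1]
            row, col = current
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                r, c = row + dr, col + dc
                if (0 <= r < len(grid) and 0 <= c < len(grid[0])
                        and grid[r][c] == '.' and (r, c) not in came_from):
                    came_from[(r, c)] = current
                    queue.append((r, c))
        return None

    # e.g. find_path(["....", ".##.", "...."], (0, 0), (2, 3))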

So just watch me…it’ll happen someday.