Monthly Archives: September 2009

Possible Applications for my Adventure Game Obsession – Part 1

If you don’t know this already, I really dig adventure games.  Seriously.  Just click these words to see how much I dig them.

And I keep running into adventure game stuff in the most unexpected places.  A few days ago, Yuri Takhteyev from the Faculty of Information spoke to the Software Engineering group about his work studying the use and popularity of the Lua language in Rio de Janeiro, Brazil.  When he brought up Lua, I couldn’t help remembering that Lua was used by the GrimE engine to script Grim Fandango.

Wouldn’t it be awesome to find a way of turning my passion for adventure games into something that is useful in the field of Computer Science?

My supervisor has advised me not to think too much about my research paper just yet, and to just peek around to get a feel for what’s going on in the various facets of Computer Science.  I take this advice to heart, and yet I can’t help noticing where my passion for adventure games might be applied…

Here are a few things I’ve come up with:

Storytelling Alice

Storytelling Alice is an attempt to find a fun, intuitive way of teaching basic programming with the Alice language to middle-school students.  It was designed and developed by Caitlin Kelleher as part of her PhD thesis at Carnegie Mellon, and it uses storytelling as a motivating context to get students to learn various programming techniques.

In Storytelling Alice, students are compelled to learn more in order to tell more of a story.  I wonder if they’d be willing to learn more to reveal more of a story?  This would be very similar to the way adventure games reward players with story after solving a puzzle.

Check out Storytelling Alice here.

Storyboarding

I’ve recently started taking Khai Truong’s CSC2514 – Human-Computer Interaction course.  One of the first papers he got us to read this week was one that he’d written on storyboarding.

Put simply, storyboards are used by interaction designers as a low-cost way of testing out designs with their potential audience.  They are similar to the storyboards used in writing/designing movies or television productions, but are instead used to communicate use cases, environment of use, physical embodiment of the system, etc.

Here is a copy of the paper, if you’re interested in reading it.

Here’s something I found interesting:

Commercial products marketed specifically for storyboard creation are available, but they are designed for experts and can be difficult for novices to use … Also, expert designers expressed that the greatest challenge for them is storytelling.  These software products are not designed to support that process and may even be detrimental to it, because they do not provide complete creative flexibility in terms of what can be developed.

Very interesting.  Adventure games are designed from the ground up to tell a story.  I wonder if the tools that adventure games are created with could lend something to these storyboard creation tools?

As my studies continue, perhaps I’ll report more potential uses for adventure game technology.

Until then, I’ll leave you with a clip from a playthrough of one of my favourite adventure games of all time, The Dig.  It might not be Dickens, but damned if it doesn’t hold my attention with an iron grip.

Just brilliant.

A Few Things Drama Can Bring to Computer Science

So, yesterday I wrote:

[W]hat can Drama bring to Computer Science?

The easy one is presentation/communication skills.  A CS student might be brilliant, but that doesn’t mean they can present or communicate.  And if an idea can’t be communicated, it’s worthless.

But what else?  Any ideas?  I’m going to think about this for a bit, and I’ll see if I can come up with any more.

I posted the question on Twitter, and on my Facebook.  I was quite surprised by the amount of feedback I got – apparently, quite a few people are interested in this topic.

Thanks to everybody who posted, or who came up to talk to me about this!  Let me summarize what I heard back:

  • Without a doubt, work in Drama hones movement/body senses.  It also trains us to use and take care of our body and voice, the way a musician would take care of a musical instrument.  Spending too much time hunkered over a keyboard can have detrimental effects on the body over time – I can personally admit to having absolutely awful shoulder tension, no doubt due to my constant typing.  I only became aware of this tension, and how to deal with it, thanks to my work in Drama.  The dichotomy between body and mind is, in my humble opinion, a Western myth, and when you stop separating them, and get them to work together, amazing things can happen.  Just ask any contact improviser.
  • Drama is also emotional work.  No, this doesn’t mean we sit in a big circle and cry, and get credit for it.  Emotions are something that we study – how to mimic them, how to summon them out of ourselves, how to describe them, and abstractly represent them.  This is where Psychology, Drama, and Human-Computer Interaction might have some overlap.  In particular, it must be remembered that theatre is a communications medium between the actor(s) on stage, and the audience.  A webpage is also a communications medium.  Perhaps the theatre can teach a website a thing or two about communication.  I wonder what Marshall McLuhan would have to say on all of this…
  • Drama folk are creative, and are used to doing impossible, unreasonable things.  If you ask them to fly, they’ll figure out a way of doing it.  It’ll probably be abstract, and involve crazy lighting effects, but they’ll do it.  Production Managers are used to getting crazy, impossible requests from Directors all the time.  In my opinion, that’s what Directors are for!  Sometimes (usually due to time constraints), the Production Manager just says no to the Director – usually, though, they just go ahead and make impossible things happen – like building a triple layered reflection box.  This thing was a beast, and used a ton of computing power for live, context sensitive visual effects. I’m proud to have been a part of that.
  • In Drama, if the project is no fun, the end result suffers.  I’m pretty sure the same goes for software.  Drama students have a way of finding the “game”, the “jeu”, and the “play” (that’s why it’s called a “play”, people!) in what they’re doing.  The best actors are the ones who are clearly having a great time on stage, and are sharing this with the audience.  I believe this is applicable to software development…
  • If you want to think about complex systems, think about the stage.  At any given moment, n actors are on stage, interacting with various bits of set or props, interacting with each other – and each has their own motivation and personal story.  It can’t be a coincidence that the I* modeling language orients itself around terms like “actors” and “goals”.  It also can’t be a coincidence that many adventure game engines refer to in-game sprites as actors…

But now I want to hit the big one.  There is one thing that I really think Drama can bring to Computer Science.  Drama students are very good at it.  From what I can tell, Computer Science students rarely get exposed to it.

That thing is collaboration skills.

I already know that a few of my fellow Drama students will laugh at that – and say, “there are plenty of people in this department without collaboration skills”.  Yes, this is true.  But they tend not to do very well, or produce anything too interesting.

For me, the best, most exciting stuff comes when I’m with a group, and we’re not sure where we’re going with a project, but we just try things. We all throw a bunch of ideas in the middle, and try to put them on their feet.  The most unexpected things can happen.

Two years ago, I took a course in Experimental Theatre.  We were broken down into groups of 3 or 4 right at the beginning of the term, and given this challenge – show us what you like to see in theatre.  Show us what you think good theatre looks like.

That was it.  A blank canvas.  No script.  No “spec”.  Just each other.  It felt hopeless at first – we’d improv things, trying to get a feel for what our group wanted to do.  Nothing would happen, it’d fall flat.  We were lost.

But slowly, something started to piece itself together.  We found some material that we wanted to play with (The Wizard of Oz), and a subject that we liked – “home”.  What it means to be home, why people leave their homes, why we miss home, why we can’t stand home, what if we can’t get home, etc.  We divided the work up into 4 sections – 1 for each of us:  Dorothy, Cowardly Lion, Scarecrow, Tin Man.

It’s really hard to describe what we did.  The characters and structure from The Wizard of Oz were just a playground for a huge meditation on what “home” meant to different people.

And, wouldn’t you know it, the Robert Dziekanski Taser Incident happened just a week or so before we were to present.  It integrated perfectly into our piece.

When we finally presented it, some people were incredulous, others nauseous, others outraged.  Some were crying.  We had a huge class debate on whether or not it was appropriate to include the film clip of the Taser Incident in our piece.

But a lot of people really got something out of it.  And I believe a bunch of people from that class went to a protest rally about the incident that took place only a few days later.  I heard a lot of really positive things.  We were so excited by it that we almost took it to the Toronto Fringe Festival.

In my opinion, that was one of the most interesting, educational, horrifying, and rewarding art pieces I’d ever been involved in.  And it all started from nothing.

When are Computer Science students grouped up, and told to make whatever they want?  When are they given total freedom to just go crazy, and come up with something beautiful?  Something unique?  When are they given the frightening prospect of a blank canvas?  Maybe I’m being naive – but where are the collaborative creativity assignments in computer science education?

Now, I can imagine someone shouting – “but what about those group assignments!  What about CSC318, or CSC301?  Those were collaborative!”.

My friend, thanks for trying, but there’s a distinct difference between group problem solving, and collaborative creation.  In my mind, for collaborative creation at its best, the ensemble starts with nothing and must create something from it.  It’s the difference between having a script to toy with, and not having a script at all.

And don’t just tell me that an independent study fits the bill.  The word “independent” sabotages the whole idea – the key word is collaborate.

Oh, and did I mention that Artful Making sounds like an excellent book? Why don’t you go to their website, and read the foreword by Google’s own Dr. Eric Schmidt.  I found it very illuminating.  I think this is going to the top of my to-read list.

Thanks to Blake Winton, Veronica Wong, Cam Gorrie, Jorge Aranda, Neil Ernst, Peter Freund, Jennifer Dowding, and Yev Falkovich for their input on this.  Yes, those little conversations made an impact!

Process Improvement of Peer Code Review and Behavior Analysis of its Participants

Process Improvement of Peer Code Review and Behavior Analysis of its Participants

by WANG Yan-qing, LI Yi-jun, Michael Collins, LIU Pei-jie
SIGCSE ’08, March 12-15, 2008

If you’ve been following, I’ve been trying to figure out why code reviews aren’t a part of the basic undergraduate computer science curriculum.  The other papers and articles I’ve read so far have had less to do with the classroom, and more to do with code reviews in industry.

This paper got a little bit closer to the classroom, and, more importantly, closer to my particular question.

To begin, the paper introduces some terminology I’m not familiar with – the software crisis.  I’m familiar with the concept though:  writing good software for large systems is not a simple problem, and as computers become a bigger and more important part of our lives, this inability to easily write good code could quickly end up biting us in the collective rear.

Code review is one of several methods that the software industry has adopted to try to “tame” the software crisis.

I like this part:

Even though code reviews are time consuming, they are much more efficient than testing [19]. A typical engineer, for example,  will find approximately 2 to 4 defects in an hour of unit testing but will find 6 to 10 defects in each hour of review code [19].

What more argument do you need?  It’s just a matter of getting rid of that “time consuming” part, right?  Right…

And this is even juicier:

PCR [peer code review] is a technique which is generally considered to be effective on promoting students’ higher cognitive skills [9], since students use their own knowledge and skill to interpret, analyze and evaluate others’ work to clarify and correct it [2].

Wonderful!  I’m in my problem space!

Reading along, it seems that this paper is introducing a new, refined structure for PCR, and will detail results of a study on using that new structure in a programming course.  Cool.

The introduction ends by saying that the new structure seemed to enhance the quality of students’ work, as well as their ability to critique one another.  Great news!

It’s not all sunshine and puppies, though – they also mention that they ran into a few problems, and that they’ll be discussing those too.

So the first thing they’ve done is try to make the terminology clearer:

Roles

  • Author:  the student who writes the code that is being reviewed
  • Reviewer:  the person who is reviewing the code
  • Reviser:  the author, after receiving a Comments Form from a Reviewer
  • Instructor:  the teacher or qualified TA who is responsible for the class

Documents

  • Manuscript Code:  the unrevised code that is first submitted by an Author
  • Comments Form:  the comments given from the Reviewer to the Author
  • Revision Code:  the code that is revised by the Reviser after the Reviewer gives the Reviser the Comments Form (whew…follow that?)
  • Reference Solution:  the “answer” to the assignment, held by the Instructor

Now that we’ve got all the players and documents laid out, let’s take a look at the process:

Process

  • Phase 1:  The Author completes the Manuscript Code
  • Phase 2:  The Author emails the Manuscript Code to the Instructor.  Simultaneously, a blank Comments Form and a copy of the Manuscript Code are sent to a Reviewer
  • Phase 3:  The Reviewer reviews the code as soon as possible, filling in the Comments Form.
  • Phase 4:  The Reviewer sends the completed Comments Form back to the Author, and also sends a carbon copy to the Instructor
  • Phase 5:  After receiving the Comments Form, the Reviser (who was originally the Author…oh boy…almost went cross-eyed, there) makes the appropriate alterations to the original Manuscript Code, referencing the Comments Form where appropriate.  The completed Revision Code is emailed to the Instructor.
  • Phase 6:  The Instructor should now have a copy of the original Manuscript Code, the completed Comments Form, and the final Revision Code.  The Instructor should be able to check that the author and reviewer did their work properly.

Wow.  What a convoluted way of saying something simple.  They even included a diagram, with lots of arrows.  Somehow, I think this could be said more simply.  Oh well.
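
Actually, let me try.  Here’s the whole thing compressed into a little Python sketch – to be clear, this is just my own reading of the six phases, and all of the names in it are mine, not the paper’s:

```python
# My own compression of the paper's six phases -- not code from the paper.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Assignment:
    author: str                 # writes the Manuscript Code
    reviewer: str               # fills in the Comments Form
    instructor: str             # gets a copy of everything
    manuscript_code: str = ""
    comments_form: List[str] = field(default_factory=list)
    revision_code: str = ""


def run_pcr(a: Assignment, code: str, review_fn, revise_fn) -> Assignment:
    # Phases 1-2: the Author finishes the Manuscript Code; it goes to the
    # Instructor, and a copy (plus a blank Comments Form) to the Reviewer.
    a.manuscript_code = code

    # Phases 3-4: the Reviewer fills in the Comments Form and sends it back
    # to the Author, with a carbon copy to the Instructor.
    a.comments_form = review_fn(a.manuscript_code)

    # Phase 5: the Author, now wearing the "Reviser" hat, produces the
    # Revision Code from the comments and emails it to the Instructor.
    a.revision_code = revise_fn(a.manuscript_code, a.comments_form)

    # Phase 6: the Instructor now holds all three documents and can check
    # that both sides did their work.
    return a
```

Laid out like that, it’s three documents moving between three roles in a straight line – plus a lot of email.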

It also sounds like a lot of emailing.  You’re balancing your course on the reliability of the email protocol?  Errr….

Well, let’s see what problems they ran into…

  1. The assumption that all participants would carefully and responsibly carry out each phase of the process was faulty.  This may have been due to “careless authors, irresponsible reviewers and busy instructors in the review process”.
  2. Some students lack the coding ability to either:
    1. Produce code that is readable and reviewable in a constructive way
    2. Review code in a constructive, or informed way
  3. The process is difficult to control due to the reliance on email (no kidding!)
    1. Some students would not submit Manuscript Code or Comment Forms on time
    2. Some students would submit multiple copies of their Manuscript Code, due to an inherent mistrust of the reliability of email
  4. There was opportunity for students to “game” the process to their advantage. In this particular study, there was very little control of who was doing what.  Though a particular Author was supposed to write the Manuscript Code, this wasn’t enforced, and there was an occasion where another student wrote the code instead.  Same with review writing, and revision writing.  Yeah, cheating is always a problem.

The paper then goes into some discussion about the observed behaviour of Authors and Reviewers.  They noted that most students did not enjoy reviewing very poorly written code, and did not give their best effort on reviews of such code.  Mere encouragement from the instructor was not enough to compel them to give their best reviews either.  The paper suggests finding some way of making Reviewers review code more carefully; perhaps through awarding bonus marks.

Behaviour of Instructors was also analyzed.  The paper mentioned that Instructors with large class sizes might try to cut down on their workload by only viewing the Comment Forms that the Reviewers had provided.  But this strategy does not give the Instructor the entire story, and is open to manipulation from students.

The paper ends with a discussion about group formations, and how best to diffuse student cheating conspiracies.

At the last moment, they suggest some “web-based [application] with a built-in blind review mechanism” be developed.

Hm.

What Can Drama Bring to Computer Science?

Yesterday, a bunch of Greg Wilson’s grad students had dinner at his place.  We got to meet his wife, his daughter, and eat some pretty amazing food.  It also gave his new grad students an opportunity to say an official “hello”, and introduce themselves to everybody else.

After introducing myself as having had an undergraduate degree in Computer Science and Drama, somebody made some remark about what an interesting combination that is. Greg replied by saying something like “That’s why I chose him”, and told a story about how one of the best programmers he ever knew was originally training to become a Rabbi, and got into Computer Science because he was working on some translations of ancient texts.

This got me thinking.  When I started focusing on both Drama and Computer Science, I remember always finding ways where Computer Science could help Drama.  I can easily rattle off a bunch of examples:

  • Better, more flexible sound cueing software (QLab is nice, but I think we can go deeper)
  • Communication tools for production teams, to help coordinate stage managers, directors, production managers, etc
  • Interfaces for movement artists to communicate with computers with their bodies in real-time, which in turn can drive things like sound/lighting cues, or other stage effects
  • Tools for doing cool, advanced projections – check out Lighttwist for example
  • Programming environments / domain specific languages for production crews who have to program lighting, sound, and video cues.  We used Isadora at the UCDP, which is like PureData with more of a GUI.  But…again…maybe we could do better.

So, while I was at the UCDP, all of these ideas rattled around in my head. I’ve now come to the realization that this has been completely one-sided.

So let’s switch it around – what can Drama bring to Computer Science?

The easy one is presentation/communication skills.  A CS student might be brilliant, but that doesn’t mean they can present or communicate.  And if an idea can’t be communicated, it’s worthless.

But what else?  Any ideas?  I’m going to think about this for a bit, and I’ll see if I can come up with any more.

UPDATE: So here’s what I found…

Smart Bear, Cisco, and the Largest Study on Code Review Ever

In 2006, Smart Bear software teamed up with the MeetingPlace development group at Cisco Systems, and over 10 months, produced the “largest-ever case study of its kind” on a “light-weight code review process”.

The results of the study can be found in the free book “Best Kept Secrets of Peer Code Review”.

They can also be found in one of the sample chapters that they’ve put on the site.  You can read the study right here, if you’re interested.

Here are my thoughts on the chapter…

First of all, my guard is up a bit. This all seems a bit like a sales pitch, since the software that Cisco ends up using is Smart Bear’s own Code Collaborator. I’m reading the first paragraph, and already I know how it ends – “everybody is happy, the software is improved dramatically, so you should buy Code Collaborator”. Something like that. I’ll be happy when I see some solid data, some numbers, some graphs…

Ok, I’m in at page 54 – they’re talking about how data was collected, and how they pared it down to get the most meaningful results.  This is good.  This sounds like science, and not a sales pitch.  Nice.

The next thing the study talks about is the rate at which lines of code (LOC) are analyzed – the LOC inspection rate.  The data they’ve collected shows no discernible correlation between the LOC inspection rate and the amount of code to inspect.  There were rare exceptions where a reviewer would seem to have such a correlation, but these reviewers tended to be novices who had not participated in many reviews before.  Analyzing the LOC inspection rates by code authors (those who are having their code reviewed) also failed to show any correlations.  In fact, there were several cases where separate reviewers took widely varied amounts of time on the same chunk of code under review.

So this leaves us with no clear answer on what factors play a part in LOC inspection rate.

The study then begins to discuss the effectiveness of the reviews, and whether or not slow reviews reveal more “defects” (where a defect is defined as any change to the code that wouldn’t have happened without the review). Because defect data from the Code Collaborator database was not considered wholly reliable (see pages 62 and 63 if you want to know why), 300 reviews were randomly plucked from the original 2500, and the discussions in each one were analyzed to gather the defect statistics.

The study then introduces the concept of “defect density”, which is a ratio of the number of defects detected per 1000 lines of code (referred to henceforth as kLOC).
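
Just to keep the two metrics straight in my head (the formulas are trivial, but the units trip me up), here’s a quick sketch – the example numbers are ones I made up, except for the 32 defects per kLOC average quoted just below:

```python
def inspection_rate(loc_reviewed: int, hours: float) -> float:
    """Lines of code inspected per hour."""
    return loc_reviewed / hours


def defect_density(defects_found: int, loc_reviewed: int) -> float:
    """Defects found per 1000 lines of code (kLOC)."""
    return defects_found / (loc_reviewed / 1000.0)


# Example (my numbers): 8 defects found in a 250-line review over 45 minutes.
print(inspection_rate(250, 0.75))   # ~333 LOC/hour
print(defect_density(8, 250))       # 32 defects/kLOC -- the study's average
```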

I’ll skip right to some results:

Our reviews had an average 32 defects per 1000 lines of code.  61% of the reviews uncovered no defects; of the others the defect density ranged evenly between 10 and 130 defects per kLOC.

I’m surprised that 61% of the reviews found no defects.  That’s remarkably high, in my opinion.  True, it’s only a sample of 300 reviews, but still, my instincts were expecting a significantly lower number.

An even more interesting result is that defects found (and therefore review effectiveness) dropped off for large amounts of code to review.  The study notes that:

Anything below 200 lines produces a relatively high rate of defects, often several times the average.

So there seems to be a sweet spot.  I wonder if this is a factor in what was causing the surprisingly high number of defect-less reviews in that sample of 300.  Perhaps many of those reviews were for large sections of code.  Or perhaps they’re for reviews that involve only a single line of code.  The study doesn’t go into this.

What the study does go into is a general guideline for limiting the time that code reviews take.  Their study noted a stark dropoff in review effectiveness after about an hour.  Totally understandable – I think an hour reviewing someone else’s code would probably be my limit before I started getting distracted.

The study then goes on to suggest that the “slower is better” approach to reviewing code is the right idea:

Reviewers slower than 400 lines per hour were above average in their ability to uncover defects. But when faster than 450 lines/hour the defect density is below average in 87% of the cases.

So already there are some guidelines: try to review something around 200 LOC, take your time, but don’t go over an hour.  This is useful information.

The study then goes into a test of a slight modification of how reviews are performed: before submitting code for review, the author should annotate the code, describing how the changes are structured, why they were coded the way they were coded, etc.  This has a dual benefit:  it gives reviewers some clues about how to look at the code (while, hopefully, maintaining the distance they need to do a good job), and it gets the author to go over their code again to weed out obvious defects.

So, some reviews were carried out in this fashion.  Here’s what they found:

First, for all reviews with at least one author preparation comment, defects density is never over 30; in fact the most common case is for there to be no defects at all! Second, reviews without author preparation comments are all over the map [in terms of defect density] whereas author-prepared reviews do not share that variability.

The study gives two possible conclusions for these results:

  1. Authors gave their code such a thorough look while annotating them, that most defects were eliminated right off the bat.  I’m…skeptical of this conclusion.
  2. Since authors were explaining or defending their changes, this sabotaged the reviewers’ ability to do their job effectively.

I find myself believing the second conclusion more, simply from experience:  if somebody is guiding me through things, suddenly I’m in the passenger seat, and I’m less inclined to disagree with a change if their explanation or defense sounds solid.

However, Smart Bear disagrees:

A survey of the reviews in question show the author is being conscientious, careful, and helpful, and not misleading the reviewer. Often the reviewer will respond or ask a question or open a conversation on another line of code, demonstrating that he was not dulled by the author’s annotations.

…we believe that requiring preparation will cause anyone to be more careful, rethink their logic, and write better code overall.

I’d like to see their data on this.  In particular, I’d like to see how often reviewers detected defects in lines of code that the author had annotated.  Unfortunately, this data is not provided in the study.

In the last few pages, the study notes that while review size has a detrimental impact on defect density (the number of defects reviewers found per kLOC), there seemed to be a fixed rate on the number of defects found per hour.  While this seems at odds with the original discovery that smaller reviews are more effective, they note:

Although the smaller reviews afforded a few especially high rates, 94% of all reviews had a defect rate under 20 defects per hour regardless of review size.

…the take-home point from Figure 22 is that defect rate is constant across all the reviews regardless of external factors.

So, assume that a reviewer has a steady defect detection rate, but that this rate drops off after about an hour.  Given a small section of code, of course the number of defects detected will be high.  And given a large chunk of code, of course the defects will be more spread out.
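
Here’s the back-of-the-envelope version of that argument, using the study’s rough figure of 15 defects per hour (the review sizes and times are ones I made up):

```python
DEFECTS_PER_HOUR = 15.0   # the study's rough figure


def expected_density(loc: int, hours: float) -> float:
    """Defects per kLOC if the reviewer finds a constant 15 defects/hour."""
    return (DEFECTS_PER_HOUR * hours) / (loc / 1000.0)


# A small review and a big one, both capped at about an hour of attention:
print(expected_density(150, 0.5))    # 50 defects/kLOC -- small review, high density
print(expected_density(1000, 1.0))   # 15 defects/kLOC -- big review, defects spread out
```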

It’s the steady defect detection rate that bothers me – you would imagine that the detection rate would depend on the quality of the code and also on the experience of the reviewer.  But I guess, according to this study,  it doesn’t.

And so, the study goes into its conclusions.  I can’t really do much better summarizing the first conclusions for you than they did, so I’ll just regurgitate:

  • LOC under review should be under 200, not to exceed 400. Anything larger overwhelms reviewers and defects are not uncovered.
  • Inspection rates less than 300 LOC/hour result in best defect detection. Rates under 500 are still good; expect to miss significant percentage of defects if faster than that.
  • Authors who prepare the review with annotations and explanations have far fewer defects than those that do not.  We presume the cause to be that authors are forced to self-review the code.
  • Total review time should be less than 60 minutes, not to exceed 90. Defect detection rates plummet after that time.
  • Expect defect rates around 15 per hour. Can be higher only with less than 175 LOC under review.
  • Left to their own devices, reviewers’ inspection rate will vary widely, even with similar authors, reviewers, files, and size of the review.

Given these factors, the single best piece of advice we can give is to review between 100 and 300 lines of code at a time and spend 30-60 minutes to review it.  Smaller changes can take less time, but always spend at least 5 minutes, even on a single line of code.
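
For my own future reference, here’s what those rules of thumb look like as a checklist – the thresholds are the study’s, but the function itself is entirely my own sketch:

```python
def review_warnings(loc: int, minutes: float) -> list:
    """Flag a proposed review against the study's rules of thumb."""
    warnings = []
    rate = loc / (minutes / 60.0)   # LOC inspected per hour

    if loc > 400:
        warnings.append("over 400 LOC: reviewers get overwhelmed")
    elif loc > 300:
        warnings.append("ideally keep it to 100-300 LOC")

    if minutes > 90:
        warnings.append("over 90 minutes: detection rate plummets")
    elif minutes > 60:
        warnings.append("try to stay under 60 minutes")
    elif minutes < 5:
        warnings.append("spend at least 5 minutes, even on a single line")

    if rate > 500:
        warnings.append("faster than 500 LOC/hour: expect to miss defects")

    return warnings


print(review_warnings(250, 45))   # [] -- inside every guideline
print(review_warnings(800, 30))   # too big, and 1600 LOC/hour is far too fast
```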

The study then goes into the differences in effectiveness between heavy-duty Fagan-esque code reviews, and the lightweight style of code review that took place at Cisco.  While some of their results from their study exactly match results from studies on heavyweight code reviews (time to spend on a review, when effectiveness drops off), there were some stark differences too.

For example, in the lightweight study, defect detection rate using Smart Bear was 7 times faster than the average rate found across four studies of traditional code review methods.  Sounds like we’re getting into that sales-pitch part…

The study then admits that there was no experimental control – reviews using heavyweight techniques weren’t carried out in parallel with the Code Collaborator study, so comparisons of their effectiveness on the MeetingPlace software have little-to-no data to work with.

In the end, their conclusion is that lightweight code review is just as effective as the traditional methods, while being remarkably faster to boot.  I’d like to see more evidence to back up the comparison on effectiveness, but faster seems more than plausible (Fagan inspections involve very lengthy meetings, so I’ve read).

Smart Bear agrees with my last point on effectiveness comparison, and notes that future study should be conducted where the same set of code is analyzed using both heavy and lightweight methods.

They finish it off with an invitation to software development shops to contact Smart Bear if they’d like to be involved in such a study.

So that’s the gist of it.