
Limits of the i>Clicker

This post grew out of an idea that Karen Reid posed to me as potential research…

i>Clickers have been around for a few years.  I’ve never had to buy or use one in any of my classes, but it seems like more and more courses are finding them useful.

So what is this i>Clicker thing?

An i>Clicker is a handheld wireless device that essentially brings the “ask the audience” lifeline from Who Wants to Be a Millionaire into the classroom.

So, each student buys his or her own personal i>Clicker and registers it for any classes that require it.  During one of those classes, the instructor can throw up a slide that quizzes the students on what was just taught.  Students key in their responses on the i>Clicker, and the results are then displayed on the screen.

From what I can tell, the idea is that the i>Clicker should encourage more class participation because:

  1. Students answer simultaneously – so instead of the instructor choosing a raised hand from the class, the entire class gets polled
  2. Results displayed to the class can be anonymous.  So, instead of remaining silent among your peers out of fear of being publicly wrong, all students can submit an answer and get feedback that helps them learn

The i>Clicker can also be used to poll students and give the instructor feedback. For example, an instructor could put up a slide that says “How was my lecture today?” and get some anonymous feedback there.

Well, not exactly anonymous.  See, the instructor has the ability to see who submitted what, and when…so if you repeatedly answer quiz questions incorrectly, the instructor can probably detect that you’re misunderstanding, that you’re guessing, or that you just don’t care.

Anyhow, that’s the basic idea behind the i>Clicker.  It’s used in a few classes here at UofT, and I know people who’ve had to purchase ($35+) and use them.

Click here to visit the i>Clicker website

The Limits of the i>Clicker

The amount of data that students can provide through the i>Clicker is pretty limited.  Here’s a photo of the device:

The i>Clicker.

Ta da.

Students have a maximum of 5 choices to pick from while being polled, which restricts instructors to multiple-choice questions.

Hm.  Can’t we do any better?

Turning Smartphones into i>Clicking Devices

Wi-Fi-enabled “smartphones” are becoming part of everyday life.  It seems like I can’t walk half a block without seeing somebody whip out their iPhone and do something really freakin’ cool with it.

So it’s not really a far-fetched idea to imagine that, some day, every student will possess one of these things.

Certainly, something like the iPhone could act as a multiple choice interface.  But is there a way of turning some of that cool touch/gesture/accelerometer stuff into useful polling feedback for students and instructors?

Some Ideas

  1. The instructor puts a graphic up on the board and asks the students, “What’s wrong with this picture?”  Students look at the picture on their smartphone and use a finger to indicate the portion of the picture that they’re interested in.  After a few seconds, the instructor displays the results – a semi-transparent overlay on the image showing all of the areas that students indicated, with areas of interest to more students emphasized.  I can see this being useful for code reading classes: the instructor splats a piece of code up on the screen and asks the students to indicate where the bug is (see the first sketch after this list).
  2. Students are asked to mock up a paper prototype for an interface that they’re designing.  The instructor asks all of the students to take a picture of their paper prototype and submit it from their smartphone.  The instructor is then able to put all of the photos up on the screen for discussion.  This could tie in nicely with idea #1.
  3. Students are polled on how many years they have been programming.  Students simply type in the number of years and submit it.  While the i>Clicker would restrict the answer to 5 predefined ranges, the smartphone submits the actual number.  Once collected, the submissions could be displayed as a histogram to give students an accurate impression of the level of experience in the classroom (see the second sketch below).
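
To make idea #1 a little more concrete, here’s a minimal sketch of what the aggregation might look like.  Everything in it is hypothetical – the grid size, the normalized touch coordinates, and all of the names are my own invention, not an API that the i>Clicker or any existing polling system provides:

    # Sketch of idea #1: bin students' touch points into a coarse grid
    # that could drive a semi-transparent heat overlay on the image.
    from collections import Counter

    GRID_W, GRID_H = 16, 12  # overlay resolution; coarser bins = smoother blobs

    def bin_touches(touches):
        """Map normalized (x, y) touches in [0, 1) to grid cells and count them."""
        counts = Counter()
        for x, y in touches:
            counts[(int(x * GRID_W), int(y * GRID_H))] += 1
        return counts

    def overlay_alphas(counts):
        """Scale counts to per-cell opacity: the more students, the more opaque."""
        peak = max(counts.values())
        return {cell: n / peak for cell, n in counts.items()}

    # Three students tap near the same buggy line of code; one taps elsewhere.
    submissions = [(0.31, 0.42), (0.33, 0.40), (0.30, 0.45), (0.80, 0.10)]
    print(overlay_alphas(bin_touches(submissions)))

The cells that more students touched come back more opaque, which gives you the “areas of interest to more students are emphasized” behaviour basically for free.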
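
Idea #3 is even simpler.  Again, just a sketch with made-up numbers, but it shows what exact answers buy you over the i>Clicker’s 5 fixed choices:

    # Sketch of idea #3: free-form numeric answers, rendered as a histogram.
    from collections import Counter

    # Hypothetical submissions to "How many years have you been programming?"
    years = [0, 0, 1, 2, 2, 2, 3, 5, 8, 10]

    histogram = Counter(years)
    for value in sorted(histogram):
        print(f"{value:>2} yrs | {'#' * histogram[value]}")

On an i>Clicker, those ten answers would get squashed into at most 5 ranges; here, the instructor sees the actual distribution.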

Any other ideas?

Exploring Peer Review in the Computer Science Classroom: Part 2 (Exciting Conclusion)

Now, where was I?

Oh yeah, I was reviewing this paper, and getting right to the good part – the experiment method, data, and results.

I hate to disappoint you, but this section starts as a bit of a downer:

…Unfortunately, the data collected was spotty, to say the least, and was not linked together well enough to support a reasonable, detailed analysis to meet our goals.

Yikes.  Oh well, let’s see what happened…

One issue we discovered was that our design was too broad to give definitive results.  We did, however, have enough data to allow us to narrow down the focus for a second study.  From the analysis of one class, we were able to identify two interesting areas for further research.  The type of review appears [sic] have a significant effect on the length and focus of the review.  We also found evidence that students reviewed some of the concepts differently than they did others.  Both of these findings should be explored in the future work.

Good.  At least there’s some groundwork for somebody to do a future study.

Reading on, I’m impressed by the scope of their experiment.  Instead of just experimenting on a single classroom, these people ran experiments across 8 classrooms, each with somewhere between 10 and 60 participants.  Ambitious.  Nice.

While their experiment may have been too broad to get the results they were looking for, it certainly has more authority than the studies that just used a single, small class.

However, maybe I spoke too soon:

The data collected for this study did not occur as smoothly as we would have wished. … As a result, we were able to collect a large amount of data but it is not as complete as we would have liked.

Hm.  Doesn’t sound that great.

Apparently, data was supposed to be collected from surveys, review rubrics, and questionnaires, but it looks like the questionnaire data kind of fell through:

The number of responses to the questionnaires was low.  Three classes had no post-questionnaire responses at all.  Of those classes that did have responses for the second questionnaire, the number was too small to lend any confidence to a statistical analysis…

Review rubric data was a little more interesting – 996 completed rubrics were collected from 299 reviewers from the 8 classes over the course of the study.  Nice.  But, again, it wasn’t all flowers and hugs:

While the amount of data was large, it is also incomplete.  Of those classes which were intended to have both training and a peer reviews [sic], three of them were not able to complete the second review assignment and, so, have nothing to be compared to.  Two of the other classes have only a moderate number of participants (15-20) which is not as high as we would have liked for our statistical analysis.  One class provided no viable information at all…

Things just seem to get worse and worse for these guys.

And, not to kick them while they’re down, but their writing seems to get worse and worse too.  I’m noticing more typos and tense errors as I go along.  Maybe the stress was getting to them…

So, what did they find?  Drum roll, please…

Final Results

Surprise!  It’s inconclusive!

It sounds like they found more questions than answers…

Is it type or order (or both) that is causing the effects the [sic] training and peer review?

It took me a little while to figure out what they were asking here.  Apparently, before engaging in peer review exercises, students would critique material provided by the instructor.  It seems that this training review step generated, on average, more verbose comments than the peer review step – though not necessarily more relevant ones; the peer review step tended to produce the more relevant comments.  The experimenters have a few theories on why that is:

  1. Social pressure could cause students to be less verbose in their critiques of one another.
  2. Increased learning after the training exercise could cause the students to be more succinct and precise in their reviews.
  3. Students were more engaged in the training exercise, and felt more inclined to be verbose (even though they were less precise).

Unfortunately, the experimenters note, a lot rests on the motivation and attitude of the students, which wasn’t really considered or measured during the design of the experiment.  So, to break it down simply: the type of review (training vs. peer) and the order (training first, then peer review) caused some numbers to change in their tables…but they don’t really know why.

They had questions on other things too…

Why are there differences in how concepts are reviewed?

  • Are there differences in conceptual difficulty?
  • Do the reviews improve student learning of these concepts?

The three CS topics covered in these courses were the OOP concepts of Abstraction, Decomposition, and Encapsulation.  The experimenters also theorized that successful reviews go through the following steps:

  1. Analysis
  2. Evaluation
  3. Explanation
  4. Revision

(Unspecified) variations in how the students used these steps, and how verbose they were at each step, caught the experimenters’ attention.  They wonder if this has something to do with the conceptual difficulty of each topic, or if the reviews were affecting the students’ understanding over time.

More questions they brought up…

Is reviewing an engaging and interesting task in computer science?

Very good question.  The experimenters noted that they had no measure of students’ interest, feelings, and engagement in the reviewing process.  They note that it’s important to look at these attitudes over time to spot improvements or problems.

Are there significant learning benefits to reviewing in the early computer science curriculum as compared to other, common homework/lab exercises?

I’ll let them explain this one:

While we have identified a number of potential benefits from reviewing, we have not shown that it is better than or as good as what we currently do.  We require some sort of baseline to compare our efforts to.  We need a control group in our experiments in order to judge effectiveness.

And then the paper pretty much ends.

Where To Go From Here

The authors do a good job of lining up some interesting questions towards the end.  I guess this is how you salvage an experiment that didn’t go as planned – find the deeper questions, and see if somebody else can do a better job.

Or maybe, if you give the authors enough time, they’ll try to do the better job themselves. I think I’ve found the next paper to review.