So a few days ago, my official grades for my M. Sc. rolled in. That same day, I went to the Bahen Centre, turned in my desk keys, got my keycard authorization revoked, and scheduled my computer for erasure.
It felt like some pretty big steps. There was a palpable sense of finality. I was out. It was over.
The University has played a big role in my development, and despite all of my moaning and complaining over the years, I’m glad I went, and I’d do it again.
But not right now.
Graduate school almost didn’t happen for me, and I have two very important people to thank for making it happen: Karen Reid and Greg Wilson.
I still fondly remember when you cornered me during that codesprint in 2009, and convinced me to try graduate school. I don’t regret it. It was the right decision. So thank you both so much for convincing me, and giving me the chance, and thank you Greg for supervising, and guiding me through.
I learned lots. I had fun. 🙂
Shakespeare wrote that brevity is the soul of wit. Well, I
When we started using ReviewBoard with MarkUs a few months back, all of a sudden, commits to the repository seemed to slow down: we would take more time cleaning up our code, and polishing it for others to see.
Our commits were usually quite large too. This is because we were all working on different sections of the code, and we wanted to commit stuff that “instantly worked” and was “instantly perfect”. So after days of silence, 1000 lines of code would suddenly go up for review…and as Jason Cohen can probably tell you, the rate at which reviewers find defects drops as the amount of code under review grows. So the reviewer would skim through 1000 lines, assume most of it was OK, and give it the Ship It.
Yeah, I know. Awful. I wonder if this is a standard newbie mistake for student groups just starting out with code review…
So, study idea:
Have two separate groups working on some assignment. Have Group 1 commit to their repository without any review process. Have Group 2 do pre-commit reviews using a tool like ReviewBoard.
Now check out the size, frequency, and readability of the repository diffs of each group. Might generate some interesting data.
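As a rough sketch of how the diff-size and frequency metrics might be pulled out of each group’s history, here’s a minimal Python parser for `git log --numstat --format=%H` output. This assumes the groups’ repositories are in Git, and the sample history in the usage note is made up purely for illustration:

```python
# Sketch: turn `git log --numstat --format=%H` text into a list of
# total changed lines (added + removed) per commit, oldest entry last.
# Assumes Git; the group repositories themselves are hypothetical.

def commit_sizes(numstat_output):
    """Parse numstat text into per-commit totals of changed lines."""
    sizes = []
    current = None
    for line in numstat_output.splitlines():
        if not line.strip():
            continue
        parts = line.split("\t")
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            # "added<TAB>removed<TAB>path"; binary files report "-" and are skipped
            current += int(parts[0]) + int(parts[1])
        elif len(parts) == 1:
            # a bare commit hash line starts a new commit
            if current is not None:
                sizes.append(current)
            current = 0
    if current is not None:
        sizes.append(current)
    return sizes
```

Feeding it a toy two-commit history like `"abc123\n1\t2\tfoo.py\n\ndef456\n10\t0\tbar.py\n3\t3\tbaz.py\n"` yields `[3, 16]`; with real output from each group, the distribution of those numbers (and the commit timestamps) would be the interesting part.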
Anyhow, in our defence, we seem to have calmed down on MarkUs. Diffs up for review are pretty small, and get posted relatively frequently. Using ReviewBoard on MarkUs has made me a believer. Testify!
(Read this if you have no idea what I’m talking about)
Why not go right for the throat?
How about I just round up all of the instructors who teach courses with group assignments, and ask them why code review tools aren’t provided or encouraged? Maybe they’ve tried, but ran into a stumbling block. Or perhaps the whole idea of using code review tools flies in the face of some important teaching method.
I won’t know until I ask. So why not just ask?
It might not be a quick, sharp, clever scientific study, but it sure might generate some interesting material for examination.
So my supervisor Greg Wilson has challenged me and fellow grad student Zuzel to try to come up with one study idea per day until our next meeting.
I’ve been researching the use of code reviews in CS undergraduate classes, so that’s what my ideas will center on.
My first idea is a knock-off of one that Jorge Aranda performed a while back:
Take a group of students, and tell them that they will all be working together on an assignment. Give them a spec for their assignment. Get a time estimate in hours as to how long they think they will need to complete the assignment.
Take a second group of students (who were not present when the first group was around), and tell them that they will be working together on an assignment. Give them the same spec that the first group had. Tell them that they will need to use a peer code review tool like ReviewBoard for every commit. Get a time estimate on how long they will need to complete the assignment.
Compare the two sets of estimates.
I predict that the second group’s estimates will be higher, and I wouldn’t find that surprising. What I’m more interested in is how much higher.
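If I wanted to eyeball that comparison, a few lines of Python would do. The estimate figures below are invented purely to show the shape of the analysis; no data has been collected:

```python
# Sketch: summarizing and comparing the two groups' time estimates.
# All numbers here are hypothetical placeholders.
from statistics import mean

def summarize(estimates):
    """Return (mean, min, max, spread) for a list of hour estimates."""
    return (mean(estimates), min(estimates), max(estimates),
            max(estimates) - min(estimates))

no_review = [10, 12, 15, 11]    # made-up Group 1 estimates (hours)
with_review = [14, 20, 12, 25]  # made-up Group 2 estimates (hours)

print("no review:   mean=%s spread=%s" % summarize(no_review)[::3])
print("with review: mean=%s spread=%s" % summarize(with_review)[::3])
```

With real responses in place of the placeholder lists, the difference in means would answer “how much higher?”, and the difference in spread would say something about how uncertain the review requirement makes people.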
I remember my first reaction to using ReviewBoard on MarkUs: This’ll slow us down. We don’t have time for this.
I’m curious if others feel the same way.
My graduate supervisor has asked me to look into the following problem:
Code reviews. They can help make our software better. But how come I didn’t learn about them, or perform them in my undergrad courses? Why aren’t they taught as part of the software engineering lifecycle right from the get-go? I learn about version control, but why not peer code review? Has it been tried in the academic setting? If so, why hasn’t it succeeded and become part of the general CS curriculum? If it hasn’t been tried, why not? What’s the hold up? What’s the problem?
I’m to dive into academic papers regarding the above, and blog about what I find out.