I just spent the long weekend in Ottawa and Québec City with my parents and my girlfriend Em.
During the long drive back to Toronto from Québec City, I had plenty of time to think about my GSoC project, and where I want to go with it once GSoC is done.
Here’s what I came up with.
Detach Reviewing Time from Statistics
I think it’s a safe assumption that my reviewing-time extension isn’t going to be the only one to generate useful statistical data.
So why not give extension developers an easy mechanism to display statistical data for their extension?
First, I’m going to extract the reviewing-time recording portion of the extension. Then RB-Stats (or whatever I end up calling it) will introduce its own set of hooks for other extensions to register with. This way, if users want some stats, there will be one place to go to get them. And if an extension developer wants to make some statistics available, a lot of the hard work will already be done for them.
And if an extension has the capability of combining its data with another extension’s data to create a new statistic, we’ll let RB-Stats manage all of that business.
The reviewing-time feature of RB-Stats will become an extension on its own, and register its data with RB-Stats. Once RB-Stats and Stopwatch are done, we should be feature equivalent with my demo.
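To make that a little more concrete, here’s a rough sketch of the shape I have in mind for the hook mechanism. Every class and function name here is hypothetical – none of this exists yet:

```python
# Hypothetical sketch of how an extension might register a statistic
# with RB-Stats. All names are placeholders, not a real API.

class StatisticHook:
    """Base hook: RB-Stats calls get_value() when rendering its stats page."""
    registry = []

    def __init__(self, stat_id, label):
        self.stat_id = stat_id
        self.label = label
        StatisticHook.registry.append(self)

    def get_value(self, user):
        raise NotImplementedError


class ReviewingTimeStat(StatisticHook):
    """The Stopwatch extension could register its data like this."""

    def __init__(self, timings):
        super().__init__("reviewing-time", "Total reviewing time")
        self.timings = timings  # e.g. {username: seconds spent reviewing}

    def get_value(self, user):
        return self.timings.get(user, 0)


def render_stats(user):
    """RB-Stats walks every registered hook to build one statistics view."""
    return {hook.label: hook.get_value(user) for hook in StatisticHook.registry}
```

The point is just that RB-Stats owns the registry and the display, and extensions only have to hand over their numbers.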
I kind of breezed past this in my demo, but I’m interested in displaying “review karma”. Review karma is the reviews/review-requests ratio.
But I’m not sure karma is the right word. It suggests that a low ratio (many review requests, few reviews) is a bad thing. I’m not so sure that’s true.
Still, I wonder what the impact of displaying review karma will be. Not just in the RB-Stats statistics view, but next to user names? Will there be an impact on review activity when we display this “reputation” value?
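For clarity, the calculation I have in mind is nothing fancier than this (how to handle a user with zero review requests is my own guess at sensible behaviour, not a settled decision):

```python
def review_karma(reviews_done, requests_posted):
    """Review karma: reviews written divided by review requests posted."""
    if requests_posted == 0:
        # A user who only reviews shouldn't divide by zero; treating
        # their review count as their karma is an assumption on my part.
        return float(reviews_done)
    return reviews_done / requests_posted
```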
This is a big one.
Most code review tools allow reviewers to register “defects”, “todos” or “problems” with the code up for review. This makes it easier for reviewees to keep track of things to fix, and things that have already been taken care of. It’s also useful in that it helps generate interesting statistics like defect density and defect detection rate (assuming Stopwatch is installed and enabled).
I’m going to tackle this extension as soon as RB-Stats, Stopwatch and Karma are done. At this point, I’m quite confident that the current extension framework can more or less handle this.
Got any more ideas for me? Or maybe an extension wish-list? Let me know.
The idea: given an arbitrary IP and port number, we want to find a way of determining whether or not there is an FTP server, an HTTP server, or a Skype node on the other side. FTP and HTTP are trivial – those protocols essentially announce themselves to the world.
Anyhow, my partner and I have learned a few interesting things about Skype – and in particular, we’ve found a reliable way to determine whether or not Skype is running behind an arbitrary IP and port. Cool.
Fact 1: Skype pretends to be an HTTP server
I’m serious, it does. Using Wireshark, we noticed that both UDP and TCP packets were being sent to one particular port. Pretty funny behavior…so, we took a closer look. And this is what we found. Pop open your Skype client, connect to the network, then use nmap to find the ports that Skype is using:
$>nmap localhost -p10000-50000
Starting Nmap 5.00 ( http://nmap.org ) at 2009-12-01 20:33 EST
Interesting ports on localhost (127.0.0.1):
Not shown: 39999 closed ports
PORT STATE SERVICE
48915/tcp open unknown
Ok, cool – there’s something at 48915, and it looks like it accepts TCP connections. Pop open Telnet, connect to it, and feed it an HTTP request:
$>telnet localhost 48915
Connected to localhost.
Escape character is '^]'.
GET / HTTP/1.1
HTTP/1.0 404 Not Found
Connection closed by foreign host.
Ok, we got an HTTP response – looks like there’s an HTTP server back there, right?
Wrong. Reconnect, and send it some garbage:
$>telnet localhost 48915
Connected to localhost.
Escape character is '^]'.
See all of those funny characters down at the bottom? That’s what I got back. In the words of Obi-Wan Kenobi…that’s no HTTP server…it’s a space station (Skype node).
So we’ve learned something here – Skype opens a port, and “spoofs” an HTTP server. We can easily check for this – just write a script that connects to a port, spews some garbage, and checks to see if we get binary garbage back.
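Here’s a minimal sketch of that check in Python. The 0.5 printable-byte threshold is an arbitrary cutoff of mine, not anything scientific:

```python
import socket

GARBAGE = b"random garbage\r\n\r\n"

def is_binary_noise(reply):
    """True if a reply looks like Skype's binary babble rather than real HTTP."""
    if not reply or reply.startswith(b"HTTP/"):
        return False
    # Real HTTP error responses are mostly printable ASCII;
    # Skype's replies are mostly not.
    printable = sum(32 <= b < 127 for b in reply)
    return printable / len(reply) < 0.5

def probe(host, port, timeout=3.0):
    """Connect, spew garbage, and classify whatever comes back."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(GARBAGE)
        return is_binary_noise(s.recv(4096))
```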
It’s so easy that someone else has already done it. Remember that nmap tool we used earlier? Somebody over in that camp wrote a script for the Nmap Scripting Engine that runs this exact analysis on a given IP/port. Don’t believe me? Read the script yourself! We stumbled upon that script while trying to figure out what Skype was doing with the spoofed HTTP server.
And sure enough:
$>nmap localhost -p48915 --script skype.nse
Starting Nmap 5.00 ( http://nmap.org ) at 2009-12-01 20:45 EST
Interesting ports on localhost (127.0.0.1):
PORT STATE SERVICE
48915/tcp open skype2
Nmap done: 1 IP address (1 host up) scanned in 0.14 seconds
Hmph. So much for cutting edge, never-been-done research. Go figure.
Fact 2: Given some UDP packets, Skype echoes back a predictable pattern
For this part, we’re pretty sure no one else has tried this.
While connected to Skype, we recorded some packets with tcpdump. We wrote a script that loaded up those packets, and could “replay” the packet payloads to an arbitrary IP and port.
So, we played some packets against an IP/port with Skype behind it. Most of the time, we got TCP packets with RST flags (which is TCP’s way of telling us to “shut yer trap”). But wayyyy down in the middle, there was a section of UDP packets that actually got a response:
192.168.0.19 was the computer we were playing the packets from, and 192.168.0.14 was the computer with Skype running on it. See those UDP packets that are getting echoed back? That’s the interesting part…instead of just shutting us down with RST’s, Skype appears to be saying something back.
So, is there a pattern in all of this? Actually yes. We isolated 4 of those UDP packets, and repeatedly fired them at the same IP/Port on the computer running Skype, and we found a pattern.
The pattern: the first two bytes that are sent in our UDP packets are echoed back to us in the first two bytes of the UDP packets that come back.
So, for example, one UDP payload we sent looked like this:
92 40 02 a1 66 65 ea 0d 8c 82 c3 0c 27 cd c5 e7
4e 78 fe a1 50 a6
And we got back:
92 40 17 c0 a8 00 13 74 a0 41 f0
See that common 92 40? Bingo. 😉
And it’s pretty consistent – if we repeat the same UDP packet, we get (almost) the same response.
92 40 67 c0 a8 00 13 11 00 10 4f
And if we repeat again…
92 40 37 c0 a8 00 13 68 08 43 3a
92, 40, and c0, a8, 00, 13. Nice! Looks like a fingerprint to me!
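That two-byte check is easy to script as well. This is just a sketch, not our actual replay tool; the sample payload in the test is one of the captures quoted above:

```python
import socket

def matches_fingerprint(payload, reply):
    """The pattern we observed: the reply echoes the payload's first two bytes."""
    return len(reply) >= 2 and reply[:2] == payload[:2]

def udp_probe(host, port, payload, timeout=3.0):
    """Fire one recorded UDP payload at host:port and test the reply."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.sendto(payload, (host, port))
        reply, _ = s.recvfrom(4096)
    except socket.timeout:
        return False  # no reply at all: probably not Skype
    finally:
        s.close()
    return matches_fingerprint(payload, reply)
```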
Except, remember, we already found a way of determining whether or not Skype was running behind a given IP/port. This last finding was just bonus. My partner and I aren’t sure if our instructor is going to let us stay with this topic, seeing as how it’s pretty much been solved by other people before. We’ve only got 2 weeks before this project is due, so…if we get another project, let’s hope it’s relatively simple. Push comes to shove, we could always try to fingerprint a different protocol…maybe BitTorrent clients.
Either way, working on this stuff has been pretty cool…and it let me try out some pretty neat tools that are usually reserved for the people with coloured hats (and no, I didn’t mean Red Hat):
nmap: port scanner that can also do service/os fingerprinting
Scapy: sculpt, gut, spoof, manipulate, and send packets – the power of C, with the simplicity of Python! We used Scapy as a library while writing our scripts. Lots of potential with this tool. Feel like poisoning an ARP cache? Scapy is for you!
I’m taking a Computer Networks course this semester, and for my final project, my partner and I are trying to create signatures for FTP, HTTP, and Skype packets.
The big idea: we want to create some signatures, and then “replay” those signatures against some arbitrary IP and port. If we get a response, we analyze the response to see if it matches what we expect from the signature. If it matches, chances are we’ve determined what kind of server is behind that IP/Port.
FTP and HTTP are the trivial ones. Skype is going to be quite a bit harder.
Anyhow, here is what I’ve found out about FTP…
FTP runs over a TCP connection, so if you’ve got Telnet, then you’ve got a basic FTP client. Traditionally, FTP servers run on port 21 – but really you could put one on whichever port you feel like.
First, I’ll connect to the FTP server with Telnet, like so:
mike@faceplant-linux:~$ telnet ftp.mozilla.org 21
Here’s what comes back:
Connected to dm-ftp01.mozilla.org.
Escape character is '^]'.
220- ftp.mozilla.org / archive.mozilla.org - files are in /pub/mozilla.org
220- Notice: This server is the only place to obtain nightly builds and needs to
220- remain available to developers and testers. High bandwidth servers that
220- contain the public release files are available at ftp://releases.mozilla.org/
220- If you need to link to a public release, please link to the release server,
220- not here. Thanks!
220- Attempts to download high traffic release files from this server will get a
220- "550 Permission denied." response.
If I type in anything and press RETURN, the server responds with: 530 Please login with USER and PASS.
Since I don’t have an account, I’ll just use the basic anonymous one and type USER anonymous:
The server responds back with:
331 Please specify the password.
I don’t have a password, so I’ll just try a blank one…
and blam, I get a ton of stuff back:
230- ftp.mozilla.org / archive.mozilla.org - files are in /pub/mozilla.org
230- Notice: This server is the only place to obtain nightly builds and needs to
230- remain available to developers and testers. High bandwidth servers that
230- contain the public release files are available at ftp://releases.mozilla.org/
230- If you need to link to a public release, please link to the release server,
230- not here. Thanks!
230- Attempts to download high traffic release files from this server will get a
230- "550 Permission denied." response.
230 Login successful.
Hey alright, I’m in! Er…where exactly am I, though? I type in PWD, and the server responds with “/”. So I’m in the root. Nice.
So what’s in the root directory, anyhow? I type in LIST. Here’s what I get back:
425 Use PORT or PASV first.
And here’s where it gets interesting. This Telnet session I’ve got here is like a control window. But if I want any actual data from the server, I’m going to need to either open up one of my ports (and do some port-forwarding on my router) to receive it (PORT), or connect to another port that the FTP server can pipe data through (with PASV).
I’d rather not go through all of the trouble of port-forwarding, so I’m going to choose the latter. I type in PASV. The server responds with:
227 Entering Passive Mode (63,245,208,138,225,55)
So what does that big string of numbers mean? The first four are the IP address I’m to connect to (63.245.208.138). The last two tell me what port to connect to. The formula to determine the port number is N1*256 + N2. N1, in this case, is 225. N2 is 55. So 225*256 + 55 is 57655.
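That arithmetic is easy enough to automate. A little parser for the 227 reply might look like this:

```python
import re

def parse_pasv(response):
    """Extract (ip, port) from a '227 Entering Passive Mode (...)' reply."""
    nums = [int(n) for n in re.search(r"\((.*?)\)", response).group(1).split(",")]
    ip = ".".join(str(n) for n in nums[:4])
    port = nums[4] * 256 + nums[5]  # N1*256 + N2
    return ip, port
```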
So I open another Telnet in a separate window, connect to 63.245.208.138 on port 57655, and get….
Yep, just a blank. I’ve made the connection, but I haven’t asked for any data, so there’s nothing for the connection to say.
However, if I type LIST again in the command window, I get
150 Here comes the directory listing.
226 Directory send OK.
pumped into my data window. Notice that the connection closed in the data window. That means that, for every bit of data I want, I either need to redo the whole PASV thing, or supply a PORT that the server can connect to. Bleh.
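Incidentally, all of this bookkeeping is exactly what real FTP clients automate for you. In Python, the whole session above collapses to a few lines of the standard library’s ftplib – a sketch only, since I haven’t wired this into our signature scripts:

```python
from ftplib import FTP

def fetch_readme(host="ftp.mozilla.org"):
    ftp = FTP(host)          # control connection on port 21
    ftp.login()              # anonymous USER/PASS by default
    ftp.cwd("pub")
    listing = []
    ftp.retrlines("LIST", listing.append)         # passive data channel
    chunks = []
    ftp.retrbinary("RETR README", chunks.append)  # fresh data channel per transfer
    ftp.quit()
    return listing, b"".join(chunks)
```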
Let’s see what else I can do. I type in “CWD pub” to change to the pub directory. Using PASV and LIST, I get the following from another data window:
Nice. Alright, now let’s see if I can download one of those files. I’m going to try to download README. Using PASV, I create a new data window, and then I type RETR README in the command window:
And, after a little wait, my data window gets:
Welcome to ftp.mozilla.org!
This is the main distribution point of software and developer tools
related to the Mozilla project. For more information, see our home
page (http://www.mozilla.org/) Go here to download Netscape Communicator:
A list of ftp.mozilla.org's mirror sites can be found at:
This site contains source code that is subject to the U.S. Export
Administration Regulations and other U.S. law, and may not be exported
or re-exported to certain countries (currently Afghanistan (Taliban
controlled areas), Cuba, Iran, Iraq, Libya, North Korea, Sudan and
Syria) or to persons or entities prohibited from receiving U.S.
exports (including Denied Parties, entities on the Bureau of Export
Administration Entity List, and Specially Designated Nationals).
If you plan to mirror our site read our crypto FAQ. Send mail to
firstname.lastname@example.org to be added to our mirrors list.
We do not guarantee that any source code or executable code
available from the mozilla.org domain is Year 2000 compliant.
Connection closed by foreign host.
Awesome! I think I have enough information to come up with some kind of signature.
What, you think I figured all this stuff out alone? No way – I had some help:
Recently, I came to the realization that I’ve been writing computer programs in one form or another since I was about 6 or 7 years old.
Along the way, I’ve had plenty of people to influence the way I think about code, and how I write it. Sure, there have been plenty of textbooks along the way too, but I want to give some thanks to the people who have directly affected my abilities to do what I do.
And what better way of doing that than by listing them?
A Chronological List of People Who Have Influenced My Coding
My parents, for bringing home our first family computer. It was an 8088XT IBM Clone – no hard drive, 640K of RAM, dual 5 1/4 floppies…it was awesome. This is the computer I started coding on – but I couldn’t have started without…
My Uncle Mark and my Aunt Soo. Both have degrees in Computer Science from the University of Waterloo (that’s where they met). My recollection is pretty vague, but I’m pretty sure that a lot of the programming texts in my house (a big blue QuickBasic manual comes to mind) surely didn’t come from my parents – must have been those two. With the book in one hand, and the 8088 in the other, I cranked out stupid little programs, little text adventure games, quizzes, etc.
The online QB community from the late 1990’s to the early 2000’s. When my family got online, I soon found myself hanging out at NeoZones, in the #quickbasic IRC channel on EFNet… actually, a lot of crazy stuff was being done with QuickBasic back then – I remember when DirectQB came out, and somebody was able to code a raytracer…in BASIC. It was awesome. I’d say these were my foundation years, when I learned all of my programming fundamentals.
My friends Nick Braun, Joel Beck, and Doug McQuiggan – these three guys and I used to come up with crazy ideas for games, and I’d try to program them. I’d come home from school, and pound out code for a computer game for a few hours in the basement. More often than not, these projects would simply be abandoned, but still, a lot was learned here.
After high school, I went into Electrical and Computer Engineering at the University of Toronto. I didn’t do too well at the Electrical bits, but I could handle myself at the Computer bits. I learned OOP, Java, and basic design patterns from Prof. James McLean.
I also learned a great deal from Prof. McLean’s course text – Introduction to Computer Science Using Java by Prof. John Carter. I know I said I wasn’t going to mention textbooks, but I also got taught Discrete Mathematics from Prof. Carter, so I thought I’d toss him in too.
My second (and last) semester in ECE had me taking Programming Fundamentals with Prof. Tarak Abdelrahman. I learned basic C++ from Prof. Abdelrahman, and how to deal with large systems of code.
After my move to the Arts & Science Faculty, I took my first Computer Science course with Dr. Jim Clarke. I learned about Unit Testing, and more design patterns. I also eventually learned some basic Python from him, but I think it was in another course.
I took CSC258 with Prof. Eric Hehner, and learned about the structure of computer processors. Physically, this was as low-level as I’d ever gotten to computers. I was familiar with writing Assembly from my QB days, but Prof. Hehner’s Opcode exercises were really quite challenging – in a pleasant way. Also, check out his concept of Quote Notation…
After that year, I spent the first of three summers working for the District School Board of Niagara. Ken Pidgen was my manager, Mila Shostak was my supervisor. Ken gave me incredible freedom to work, and soon I was developing web applications, as opposed to just fixing up department websites (as I originally thought I would be doing). Mila gave me guidance, and showed me how to use CSS to style a website. She also got me started using PHP and MySQL to create basic web applications.
While working at the Board, I had the pleasure of sitting across from Jong Lee. Jong and I would bounce ideas off of one another when we’d get stuck on a programming problem. He was very experienced, and I learned lots of practical programming techniques from him.
Michael Langlois and Ken Redekop acted as my clients at the Board, and always gave me interesting jobs and challenges to perform. Everyone at the Board was always very positive with me, and I’ll always be grateful that they took a newbie undergrad under their wing! I was given a ridiculous amount of freedom at the Board, and was allowed to experiment with various technologies to get the job done. Through my three summers there, I learned bits about Rails, CakePHP, MVC, network security, how to deploy an application remotely, how to run a local server, how to develop locally and post to remote, ORM, Flash, web security…so many things. The list is huge.
Karen Reid and Greg Wilson have been the latest influences on me. The MarkUs Project was the first project I’ve ever worked on with a team. It was my first time seriously using version control, my first time using a project management portal (Dr. Project), my first time learning Ruby, and my first time working on an open source project. I’ve also learned plenty about time management, people, the business of software, and how to get things done. Again, I’ve been given lots of freedom to learn, experiment, and hone my craft.
Anyhow, these are the people who come to mind. I might add to this list if I remember anyone else.
But in the mean time, for the people listed above: thank you.
In their own words, Freshbooks is an “online invoicing, time tracking and expense service”. According to them, they have 800,000+ users, and they’re hiring.
Today, they were pleased to announce a “top secret reveal”. I, personally, was intrigued – learning something top secret is awesome: it makes me feel like part of a special group of trusted people.
So, imagine my disappointment when the presenter tells me flat out that this “top secret” information was announced on their blog more than two weeks ago. Not two minutes into their presentation, and I’ve already been lied to. Bad start.
I’m going to bypass talking about the laptop problems, the poor presentation pacing, or the repeated calls for “who ordered the chicken fingers?” from the kitchen. I want to talk about this not-so-top-secret thing that Freshbooks was revealing.
Basically, they were revealing that as well as being able to bill clients outside of the Freshbooks network with their service, they are going to allow Freshbooks users to bill other Freshbooks users…
Then they showed a 1 minute video, where a bunch of blue nodes were displayed, with some connections here and there. As the video progressed, more nodes were added, more networks were formed….ahhh, I see what you did there. You showed me that by allowing networks to form on Freshbooks, networks…will form.
If you Freshbooks guys are reading this, I’m really not trying to be a dick here – I’m sure your service is awesome, and who knows: I might use it someday. But your presentation tonight didn’t really tell me anything compelling, and the fact that it was initially wrapped in this awesome package of “top secret reveal” didn’t help your case.
It’s a bit like unwrapping a big present labelled “awesome secret gift!”, only to find a tiny note in bad handwriting inside, telling you something you already know. Plus, you find out that everybody else already knew what was inside the box anyways. Maybe I’m being picky. Who knows.
My main beef was that I didn’t get a demo. I came to DemoCamp to see demos.
I want to see things like this:
Both videos were on the Freshbooks blog post. One video rapidly gives me lots of information that entices me to investigate their service. The other awkwardly tells me almost nothing.
Bottom line: interesting service. Failure to deliver top secret reveal. Weak demo.
This wasn’t really a presentation – more of a time filler while technical glitches were ironed out on presentation laptops. Basically, it was an announcement that a Ruby job fair was coming up in Toronto.
What was interesting was the notion that the job fair would follow the model of classic “science fair” poster board presentations. There would be booths, with poster boards up, and people to talk to. No laptops. No iPhones. Just people talking about Ruby, what they like to do with it, and why they’re passionate about it.
WhereCloud presents Reportage, a Twitter client for the iPhone.
A good presentation from Dufort. He kept it light, and brisk, and showed off the application instead of talking about it. That’s key.
According to Dufort, Reportage is “a radical way of browsing Twitter”. He mentioned how most Twitter clients fall into the trap of porting the user experience of Twitter to their clients.
What is interesting is that Reportage introduces a new metaphor on top of Twitter: channels, or radio stations. At the bottom of the client is a “tuner”, which displays the pictures of who you’re following on Twitter horizontally. There is a red needle in the center of the tuner that shows you who you currently have selected. Simply flick your finger, a la iPhone style, and you can change channels to whomever you want to get Tweets from. Cool, novel metaphor, if not exactly radical.
Instead of dealing with the massive stream of updates of who you’re following, Reportage simply shows the display pictures on the main screen, in the order that they were last updated.
Beyond the interface, what was most compelling were two features:
the ability to add Twitter users to a “favourites” list, that are distinct from the main stream.
the ability to temporarily mute users, to avoid embarrassing temporary un-follows.
These are two excellent features that the Twitter web-interface sorely lacks. Sure, there might be other clients who implement the same features, but Reportage seems to do it very intuitively and gracefully.
Bottom line: I liked the app, I liked the presentation. If I had an iPhone, I might buy it. Why the hell not – it’s apparently going to have an intro price of $2.99 when it goes up on the App Store in a couple of days.
Flash Based 3D FPS
Presenter: Greg Thomson
This was a tech demo for a 3d multiplayer first person shooter, which looked a bit like the Interplay classic, Descent.
Written entirely in Flash.
My mind was boggling when I saw this go – Flash, Flash, was cranking out 45 frames per second as it rendered the textured polygons in the demo map. It was fast, it was smooth, it was impressive.
Unfortunately, then it got a little boring for me. Once the initial shock/novelty of seeing Flash do something like this wore off, there really wasn’t much more to the presentation. I got to watch Thomson navigate around the screen for a while, and fire some lasers/missiles, but that was it. There wasn’t anyone there to play against. The map was kind of bare.
The demo picked up again when Thomson zoomed out the camera so that it was directly over the player, and we could see in real-time how the program was intelligently choosing what to render, in order to save as many cycles as possible.
There was also talk about the multiplayer client, and how they had to write their own in order to deal with 20 updates per second. That was cool, but it would have been nice to see it in action.
Granted, the presenter told us that the technology had recently been sold to someone else, and that he was demoing it on their behalf. Also, he wasn’t there for investment, wasn’t hiring, or looking for contributors/users. He was doing a tech demo, and that’s what he delivered.
Bottom line: AMAZING product – never thought I’d see Flash do that. It would have been nice to see some multiplayer action, but you can’t always get what you want.
WineAlign is a “community based service for reviewing, sharing, and discovering wine”. It promises to help users find “the right wine for the right price” based on user and professional reviews.
An interesting concept.
Growing up in Grimsby, I was surrounded by vineyards and wineries everywhere. Wine tours are arguably one of the primary tourist attractions of the Niagara region (along with that waterfall thing). In a wine tour, people drive from winery to winery, taste testing various wines, trying to find one that suits their palate/mood/occasion.
So right off the bat, I’m a bit skeptical of WineAlign – the people I know who are into wine go by taste. How can that be conveyed through a web application?
It seems that WineAlign is hoping you can discover the perfect wine based on user submitted reviews, along with professional reviews, and make up your own mind based on that information, plus on its local availability and price. OK, that’s fair. I really don’t expect a web app to waft the aromatic bouquet of a 1984 Merlot through the screen.
What was most interesting about the presentation wasn’t actually the service (I don’t really drink wine, or alcohol for that matter). What interested me the most was his experience that “1 blog entry was greater than $10,000 of public relations”. Traditional PR didn’t work for them – they had to rely on advertising their service through the ether of the social web to get where they are, and it really paid off.
Bottom line: cool service, nice design. I probably won’t be a customer anytime soon, but that’s because wine isn’t really my bag. Relatively decent presentation despite technical glitches (being unable to see half of the screen was kind of a bummer), though the use of the phrase “critical mass” became so frequent that I started to forget what the term actually meant.
This ignite presentation was essentially a rushed retelling of this story from Adam Goucher’s blog. I’ve got to hand it to him – it’s not easy to tell a story in front of an audience when you’re constrained by the 15-second auto-advancing slides that define an ignite presentation. So, big kudos.
Anyhow, if you read his original story, you’ll get the gist of his presentation.
The bottom line is that he tried to deliver a good message: there are many ways of doing things, and there is not necessarily one right way. Or, in his words, “everyone is good in their own way!”.
According to Joey DeVilla, Goucher’s presentation “pulled down the pants of [his] mind”.
Dinner was served. Again, strangely, the DemoCamp audience was served highly eclectic pizzas: feta cheese, mushrooms, fried onions on dough, with a massive slice of ham on top. Anyhow, no complaints – I dug the pizza.
Through various microphone noises, the audience was signaled to sit back down again for the next round of demos and ignite presentations.
This effort was sabotaged almost immediately with network problems. So the audience was held in rapt attention as MC’s Jay Goldman, David Crow, and Joey DeVilla attempted to kill time. Some relatively pornographic jokes were rapidly fired off by DeVilla, while Crow and Goldman bravely grimaced and glanced towards their lawyer.
Toronto WebTV Meeting
While network problems were being solved, there were some announcements put out, including a “Toronto WebTV Meeting” for people who are interested in internet broadcasting, or web video. The meeting is at 7PM-10PM tomorrow (May 26, 2009) at 692 Yonge St at a restaurant called the Arrabiata.
Another announcement was about “ExtremeU”, a 12 week startup school that was looking for people to enroll. We were told to visit http://www.extremevp.com for more information, but that website doesn’t seem to contain much more than a logo and an email address. How disappointing.
DeVilla took advantage of the ensuing lapse of announcements to propose an idea for an iPhone app: Sausage Party. The application would attempt to gauge the current male/female ratios of the clubs nearest to you. Believe it or not, in time, that idea might sell. As of yet, I don’t think enough GPS smartphones are out there to make such an application that accurate, but in a few years…who knows.
In a nutshell: TicketTrunk wants to be the TicketMaster of the little guy. It wants to make ticketing super easy, so that your grandmother could set up an event, and sell tickets, without too much of a hassle. TicketTrunk also wants to stop charging ticket buyers for service fees, and instead place flat fees per sold ticket on the ticket seller.
Ok, cool idea. I know some theatre people who might be interested in becoming users.
The only problem with the presentation: it wasn’t a demo. It wasn’t an ignite. It was a talk. 5 minutes crawled by for me as Dhalla described how his application worked. A single slide showing the TicketTrunk home page was all we got.
Anyhow, TicketTrunk might be something I recommend to my theatre friends. Their $1 flat fee to ticket sellers per ticket sold might be a bit of a problem for non-profits trying to keep ticket prices down, while recouping costs…
Bottom line: a service worth investigating, if anything just to see what Dhalla was talking about. Non-existent demo. Weak presentation.
An open-source Twitter client for Windows, built on top of Windows Presentation Foundation. They’re looking for contributors and feedback.
Unfortunately, this demo’s thunder was stolen almost completely by Reportage. In comparison, digiTweet’s interface looked a bit cluttered. There was an interesting UI concept of adding people you’re following to different categories, and then having those categories be colour coded when viewing your stream of Tweets. Not exactly groundbreaking, but it distinguished it from Reportage.
Another distinguishing feature was that digiTweet is an open source project. Kulasingam was very open and inviting to everyone to come and contribute, and give feedback. That’s always good to see. Kudos.
Bottom line: product needs some polish (though granted, it’s only a month old!), but it’s got some interesting ideas. Pretty good presentation.
Rypple, to put it in their own words, lets users get “quick, specific and private feedback from trusted advisers and co-workers.” It’s an anonymous feedback system, with some pretty sophisticated metrics. It’s used by average home users, as well as large companies such as Cisco Systems, Rogers, and General Electric. Rypple was also recently featured in The Economist magazine.
So, Toronto startup-wise, Rypple is doing pretty well for itself.
Presentation-wise, these guys know what they’re doing. They jump right to it, and show off their app like pros. They’ve clearly done this a bunch of times (and they’ve probably gotten tons of Rypple feedback on their presentations!).
I don’t know what else to say about Rypple. It was a solid presentation for a solid service. What was most surprising to me was finding out that Rypple is developed using GWT.
Grigorik gave a good talk on how content published on the web has a half-life of about 50 minutes. He said that this is driving publishers insane, because the social web produces “more content in a day than a major publisher produces in a year”.
He also said there is data to show that social networking is a more popular Internet pastime than pornography. You can imagine the gasps, snickers, and muttered jokes.
StumbleUpon aside, time is a critical element for social web content. What’s on Digg right now won’t necessarily be there in an hour. Probably less.
Grigorik stressed that since time is such an important factor in getting your content out there in the social web, it is necessary to have real-time metrics to give you feedback on how your content is doing. He said that the old model, of looking at metrics for past posts, is not good enough – in order to boost the popularity of content, you must engage with your audience.
So, for example, if I finish this blog post, it might get mentioned here or there on Twitter, other blogs, etc. My WordPress analytics might not tell me much about that. But that information, what other people are saying about my article, is important. A service is needed to help find where other people are talking about you, so that you can engage with them, and keep your content relevant.
Interesting presentation from this guy. It was rapid fire, and I couldn’t always tell when he was joking or not. He was self-deprecating the entire time, which was sort of endearing, but it clouded his overall message.
Which was this: in his experience, good things happen when you stop trying to get press. Fire your public relations team. Just go through the social media ether! Twitter is the key! One day, he put up a Twitter post about his company, and 6 hours later he was on TV. Go figure.
Other tips included “find a style of conversation that works for you”, and “talk to the community, and let the press listen”.
BumpTop is a new take on the desktop metaphor of modern operating systems. Basically, it makes your desktop more like a real desktop. Items can be stacked. They have weight. They can be thrown around. You can navigate around in your desktop, and look closely at things. It’s actually really cool.
What’s also really cool, is that this guy showed this thing off at DemoCamp a while back. Push came to shove, and eventually, he did a TED talk about it. Wow. Talk about snowball effect.
While it’s a cool idea, I don’t think I’ll be installing it anytime soon. I like my desktop just the way it is for now. Still, I always like seeing new, wild ideas.
Also, this guy didn’t know he was demoing until a few moments before he went up. Remarkably, the demo/presentation went really smoothly.
Bottom line: neat idea, neat product, but not something I’ll rush out and install right away. Great presentation. This guy is clearly going places.
All in all, an entertaining night. Good people, good food, and some pretty interesting presentations.
Feel free to post comments, complaints, corrections, support, corroborations, etc.