Wednesday, October 28, 2009

Abstract Analyzer

Here's a project I've been plugging away at:



(A screencast demo of the Abstract Analyzer was embedded here in the original post.)


Gem: http://gemcutter.org/gems/abstract_analyzer
Source: http://github.com/markmcspadden/abstract-analyzer
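
If you want to kick the tires, installing from Gemcutter should be a one-liner (assuming you've already added Gemcutter as a gem source):

gem install abstract_analyzer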

Saturday, October 24, 2009

Rolling with Raindrop

Today Mozilla announced a message aggregation project they've been working on called Raindrop. Now, I find myself slower and slower to fall for the hype on these types of things, but after downloading the source and getting it up and running on my machine, I have to say it looks pretty cool.

The install took a couple of hours, mostly due to my non-existent Python chops and having to download and install Mercurial. But after a while I was up and running, pulling my Twitter feeds and emails into the same location. Way cool.
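
For anyone following along, grabbing the source is a single Mercurial command once hg is installed (the repository URL below is from memory, so check the Raindrop wiki if it has moved):

hg clone http://hg.mozilla.org/labs/raindrop/
cd raindrop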

Spelunking

But what kind of nerd would I be if I just got it running and stopped at that? ;)

In addition to directions on how to set up Twitter and Gmail aggregation, the default install comes with a single RSS feed baked in. But who can make do with just one RSS feed, right? So I set out to add another.

It proved pretty difficult just to find where the initial feed was set (TextMate's Ack in Project failed me), so I hopped into the Raindrop chat room, where Mark Hammond from the Raindrop team pointed me to the correct directory. (Mark also said he didn't know if anyone had even tried this yet, but I wasn't going to let that little detail stop me.)

With his help I soon had two RSS feeds dumping into my Raindrop. Cool. Sure, they were being dumped into a single box with the wrong heading, but it was a start.

The major thing that bugged me was that the entries were streaming in without links. Boooo. Feeds need links. So I started hacking at the JS and HTML implementation of those messages, and before long I had an external link for each headline.
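
That first pass was a template tweak along these lines; this is reconstructed from memory, and ${link} is my assumption for the field carrying each entry's URL:

-      <span class="subject">${subject}</span>
+      <span class="subject"><a href="${link}" target="_blank">${subject}</a></span>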

Missing the Point

I felt pretty accomplished, but then I sat back and realized I had missed the boat on the whole point of Raindrop. It's not just about aggregating data and then pushing you off to the outside world; from what I understand, it's about being able to interact with all types of content from a single location. The external links had to go.

A quick look at the Twitter implementation revealed what I needed: a link to the CouchDB doc that actually holds each story. The Raindrop UI is set up to handle the display of these, so after digging in and finding out where to get the document id, I was in business. I could view entries from multiple feeds, open them within the Raindrop application, and even archive them. (I have no idea where they go when they're archived; I just know they leave the front page.)

So all that work for this little diff:

diff -r e07f7793ad1b client/lib/rdw/story/templates/GenericGroupMessage.html
--- a/client/lib/rdw/story/templates/GenericGroupMessage.html  Fri Oct 23 15:10:01 2009 +1100
+++ b/client/lib/rdw/story/templates/GenericGroupMessage.html  Sat Oct 24 02:23:34 2009 -0500
@@ -4,7 +4,7 @@
   </div>
   <div class="message">
     <div class="content">
-      <span class="subject">${subject}</span>
+      <span class="subject">${subject} <a href="#${expandLink}" class="expand" title="${i18n.expand}">${i18n.expand}</a></span>
     </div>
   </div>
 </div>


It doesn't seem like much, but it represents several hours of education and paradigm examination, and I'm proud of it.

Now, off to bed...

Friday, October 16, 2009

Predictive and Reflective Development Metrics

We spent some time this week working on Development Metrics. Without getting into all the details, I did want to share one illumination that came from those discussions.

Two Types of Metrics

What I realized when looking at the various metrics and methodologies out there is that they fall into two main buckets I'm currently calling "Predictive" and "Reflective." (If there are better industry-standard words for these, feel free to let me know.)

Predictive metrics are the measurements we put in place to gauge what we think is going to happen. Almost all code testing (unit testing, interaction testing, etc.) provides us with predictive metrics.

Reflective metrics measure what actually happened. All production monitoring (errors, performance, etc.) falls into this bucket.
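
To make the two buckets concrete, here's a minimal Ruby sketch (the Cart class and checkout method are hypothetical stand-ins): the unit test encodes a prediction we verify before deploying, while the logged error is a reflective measurement of what production actually did.

# Predictive: a unit test records what we *think* will happen.
require "test/unit"
require "logger"

class Cart
  def initialize(subtotal, tax_rate)
    @subtotal, @tax_rate = subtotal, tax_rate
  end

  def total
    (@subtotal * (1 + @tax_rate)).round
  end
end

class CartTest < Test::Unit::TestCase
  def test_total_includes_tax
    assert_equal 108, Cart.new(100, 0.08).total  # our prediction, checked before deploy
  end
end

# Reflective: production logging records what *actually* happened.
LOGGER = Logger.new($stdout)  # stand-in for a real monitoring/alerting pipeline

def checkout(cart)
  cart.total  # stand-in for real checkout work
rescue => e
  LOGGER.error("checkout failed: #{e.message}")  # feeds the error-rate metric
  raise
end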

A Few Reasons to Care

Ok. That's great. Why does it matter?

What I noticed is that this distinction is very important when communicating with less technical people, especially if those people are decision makers. I'm sure you've heard something to the effect of: "Why did the website break if we have all these tests?" A predictive/reflective vocabulary can really help that conversation stay high level instead of diving into the depths of the shortfalls of testing.

In addition, I think it can help with decisions about where to utilize resources. When you look at resources in this light, it makes you realize how much effort you need to be spending on reflective metrics. Not that predictive metrics don't mean anything; in fact, I think every reflective metric you care about needs a predictive process to guard it. But you may find your priorities lie in meeting your predictive metrics, which may only be telling half the story. (And a wishful-thinking version of the story at that.)

Even if the two above aren't a ton of help to you, this third realization kind of blew my mind. When looking at feature development and deployment, it seemed pretty clear that this is a predictive process. You think you know what a user or client is asking for. You think you know how a feature will be utilized. These are predictions.

On the other hand, user feedback and requests are reflective measurements. They let you know how things are actually being used: what's working and what's not.

Here's the kicker. How much time do you spend in feature meetings trying to hash out the right functionality? And how much time do you spend talking to users, getting feedback, and following up on feedback? Hmmmm......

Would love to hear how you metricize...