Saturday, December 5, 2009

On to the Next Adventure

Monday morning, bright and semi-early, I start my next professional adventure. I'll be joining Sabre Holdings. Sabre is a travel technology company with as much past (it developed the first electronic reservation system) as present (Travelocity) and future (check out TripCase), and I'm excited about the opportunities there.

I'll be joining the Travel Studios group, which is tasked with experimenting with emerging technologies and products. Specifically, I'll be working on Cubeless, an enterprise social network. I'm tellin' ya...these social network thingies...they're gonna catch on. ;)

I'm really excited about joining this organization, group, and product team. Thanks to everyone who's been supportive during the transition, and I look forward to the things to come as a Sabre-ite...Sabreon...Sabreian...Sabrenian... ;)

Thursday, November 19, 2009

Change

There's no easy way to talk about this without just saying it...Friday, December 4 will be my last day at The Garland Group. I've decided to start on a different adventure that I'll talk about more as it gets closer.

Before I move on, I do want to expend a few 1s and 0s on just how much I have enjoyed working at The Garland Group. Honestly, if I were to really get into every inch of what I've enjoyed, you'd probably think this "Garland Group" place doesn't really exist...it's just a fairy tale I've made up to make other work environments look silly.

You probably wouldn't believe that you can work next to someone that's in a different state or complete projects without ever formalizing work hours. You'd never imagine that encouraging people to go to the grocery store in the middle of a Tuesday could lead to results above expectations. You'd think there is no way a small company headquartered in the Dallas suburbs could change the way an industry thinks and talks about Compliance and that Security could involve more Collaboration than combinations.

Yes. I suppose it does sound a little far-fetched, but it's been my reality. It's a tough place to say goodbye to and a hard group to part ways with. I have very much enjoyed my time at The Garland Group, and though I am very excited about what is on the horizon, I'd be kidding myself if I said I won't miss it.

Wednesday, October 28, 2009

Abstract Analyzer

Here's a project I've been plugging away at:



(Big version here, or use the tiny full screen button in the bottom right of the movie)


Gem: http://gemcutter.org/gems/abstract_analyzer
Source: http://github.com/markmcspadden/abstract-analyzer

Saturday, October 24, 2009

Rolling with Raindrop

Today Mozilla announced a message aggregation project they've been working on called Raindrop. Now, I find myself slower and slower to fall into the hype on these types of things, but after downloading the source and getting it up and running on my machine, I have to say it looks pretty cool.

The install took a couple of hours, mostly due to my non-existent Python chops and having to download and install Mercurial. But after a while I was up and running, pulling my Twitter feeds and email into the same location. Way cool.

Spelunking

But what kind of nerd would I be if I just got it running and stopped at that? ;)

In addition to directions on how to set up Twitter and Gmail aggregation, the default install comes with a single RSS feed baked in. But who can make do with just one RSS feed, right? So I set out to add another.

It proved pretty difficult just to find where the initial feed was set (TextMate's Ack in Project failed me), so I hopped into the Raindrop chat room, where Mark Hammond from the Raindrop team pointed me to the correct directory. (Mark also said he didn't know if anyone had even tried this yet, but I wasn't going to let that little detail stop me.)

With his help I soon had 2 RSS feeds dumping into my Raindrop. Cool. Sure they were being dumped into a single box that had the wrong heading, but it was a start.

The major thing that bugged me was that they were streaming in without links. Boooo. Feeds need links. So I started hacking at the JS and HTML implementation of those messages, and before long I had an external link for each headline.

Missing the Point

I felt pretty accomplished, but then I sat back and realized that I had missed the boat on the whole point of Raindrop. It's not just about aggregating data and then pushing you off to the outside world; from what I understand, it's about being able to interact with all types of content from a single location. The external links had to go.

A quick look at the Twitter implementation revealed what I needed: a link to the CouchDB doc that actually holds each story. The Raindrop UI is set up to handle the display of these, so after digging in and finding out where to get the document id, I was in business. I could view entries from multiple feeds, open them within the Raindrop application, and even archive them. (I have no idea where they go when they are archived; I just know they leave the front page.)

So all that work for this little diff:

diff -r e07f7793ad1b client/lib/rdw/story/templates/GenericGroupMessage.html
--- a/client/lib/rdw/story/templates/GenericGroupMessage.html  Fri Oct 23 15:10:01 2009 +1100
+++ b/client/lib/rdw/story/templates/GenericGroupMessage.html  Sat Oct 24 02:23:34 2009 -0500
@@ -4,7 +4,7 @@
   </div>
   <div class="message">
     <div class="content">
-      <span class="subject">${subject}</span>
+      <span class="subject">${subject} <a href="#${expandLink}" class="expand" title="${i18n.expand}">${i18n.expand}</a></span>
     </div>
   </div>
 </div>


It doesn't seem like much, but it represents several hours of education and paradigm examination and I'm proud of it.

Now, off to bed...

Friday, October 16, 2009

Predictive and Reflective Development Metrics

We spent some time this week working on Development Metrics. Without getting into all the details, I did want to share one illumination that came from those discussions.

Two Types of Metrics

What I realized when looking at the various metrics and methodologies out there is that they fall into two main buckets I'm currently calling "Predictive" and "Reflective." (If there are better industry-standard words for these, feel free to let me know.)

Predictive metrics are the measurements we put in place to gauge what we think is going to happen. Almost all code testing (unit testing, interaction testing, etc.) provides us with predictive metrics.

Reflective metrics measure what actually happened. All production monitoring (errors, performance, etc.) falls into this bucket.
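
To make the two buckets concrete, here's a toy Ruby contrast. It's purely my own illustration (the checkout method, tax math, and log path are all made up), not anything from our actual metrics discussions:


require 'test/unit'

def total_with_tax(subtotal, rate)
  (subtotal * (1 + rate)).round
end

# Predictive: a unit test records what we *think* will happen before the code ships.
class CheckoutTest < Test::Unit::TestCase
  def test_total_includes_tax
    assert_equal 108, total_with_tax(100, 0.08)
  end
end

# Reflective: scanning the production log measures what *actually* happened.
if File.exist?("production.log")
  errors = File.readlines("production.log").grep(/ERROR/).size
  puts "Errors seen in production: #{errors}"
end
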

A Few Reasons to Care

Ok. That's great. Why does it matter?

What I noticed is that this distinction is very important when communicating with less technical people, especially if those people are decision makers. I'm sure you've heard something to the effect of: "Why did the website break if we have all these tests?" A predictive/reflective vocabulary can really help this conversation stay high level instead of diving into the depths of the shortfalls of testing.

In addition, I think it can help decision making about where to utilize resources. When you look at resources in this light, it makes you realize how much effort you need to be spending on reflective metrics. Not that predictive metrics don't mean anything; in fact, I think every reflective metric you care about needs a predictive process to guard it. But you may find your priorities lie in meeting your predictive metrics, which may only be telling half the story. (And a wishful-thinking version of the story at that.)

Even if the two above aren't a ton of help to you, this third realization kind of blew my mind. When looking at "feature development and deployment," it seemed pretty clear that this is a predictive process. You think you know what a user or client is asking for. You think you know how a feature will be utilized. These are predictions.

On the other hand, user feedback and requests are reflective measurements. They let you know how things are actually being used: what's working and what's not.

Here's the kicker. How much time do you spend in feature meetings trying to hash out the right functionality? And how much time do you spend talking to users, getting feedback, and following up on feedback? Hmmmm......

Would love to hear how you metricize...

Monday, September 21, 2009

Working without the Internet

Last week I spent 3 days working from College Station, TX. I spent most of my time detached from the interwebs and found it very, very productive.

The perfect balance seemed to be having limited access that was easy to turn on and off. While I didn't have wifi most of the time, I had set up my phone as a tethered modem. Now, this isn't exactly in line with my carrier's User Agreement, and that fact kept me from having it on the whole time.

The point is, I turned on the internet only when I needed it, which turned out to be less than 5 minutes every hour. Five minutes. That's crazy to me considering how many times per MINUTE I get dings and chirps and whistles, all from internet-driven activity.

Of course I wasn't doing many video conferences, etc., but for getting actual work done, it seemed to work out really well. I may start trying this more often....we'll see.

Sunday, August 30, 2009

Lone Star Ruby Conference

I just got home from the Lone Star Ruby Conference. Of course I'm suffering from all the post-conference side effects...bags under my eyes and a brain that feels like it's going to explode.

I just wanted to quickly get a few things out there:

LSRC is such a well-run conference

Of course, Lone Star Ruby Conference is a great conference but what strikes me is how well it is organized and run. Everyone associated with putting on the conference should be very proud of the work they do.

It's always about the people.

Everyone always says it, but that's because it's true. It was great getting to spend time with (in order of appearance) Jason, Wynn, Tim, Glenn, Dave, Mike, Marlon, Jim, Geof, Mike, Adam, James, Dana, Dan, Elena, Jeremy, Evan, Tim and Jeremy. (Sorry if I missed anyone!) Each person brings a whole set of smarts and experiences to our community, and I'm always impressed just to be privy to their conversations.

But it's also about the code.

Being fully immersed in Ruby land for 3 days is like a breath of fresh air. The training day before the conference was well worth it, and each talk on Friday/Saturday, whether it was technical in nature or not, helped make me a better programmer.

I know it will be a busy week catching up from being gone, but it was well worth it!

Wednesday, August 26, 2009

yaml.rb

I spent some time last week working on a problem that had me digging into Ruby's Standard Library implementation of YAML. I thought I'd take a few minutes to share some thoughts.

Deep Serialization

To me, this was the reason I was using YAML over JSON or XML in the first place. I was in Rails and was eager loading an ActiveRecord object. I'd expect the #to_xml or #to_json methods to create a deeply nested structured document including all the associations I was eager loading. But that's not the case.

However, I was pleasantly surprised to find that #to_yaml did in fact do this. (It's not exactly apples to apples; #to_yaml is part of the Ruby Standard Library, while #to_json and #to_xml are implemented in Rails.)
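
Here's a minimal, plain-Ruby sketch of what I mean. There's no Rails or ActiveRecord involved; Family and Kid are just stand-ins for the eager loaded object graph:


require 'yaml'

class Kid
  attr_accessor :name
  def initialize(name); @name = name; end
end

class Family
  attr_accessor :name, :kids
  def initialize(name, kids); @name, @kids = name, kids; end
end

family = Family.new("McSpadden", [Kid.new("Beef"), Kid.new("OshGosh")])

# to_yaml walks the whole object graph, nested objects and all,
# emitting tagged !ruby/object entries much like the test.yml below
puts family.to_yaml
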

This was the home run feature for me so yaml and I started to get close.

Smart Class Transformations

Let's say our test.yml file looks like this:

--- !ruby/object:Family
kids:
- !ruby/object:Kid
  name: Beef
- !ruby/object:Kid
  name: OshGosh
name: McSpadden

If you fire up irb and do:

require 'yaml'
YAML.load(File.read("test.yml"))

You'll get something like:

#<YAML::Object:0x580408 @ivars={"name"=>"McSpadden", "kids"=>[#<YAML::Object:0x580660 @ivars={"name"=>"OshGosh"}, @class="Kid">]}, @class="Family">

So we've got some kind of YAML object with @ivars that looks to hold all the good stuff. Wouldn't it be nice if it just gave us Family and Kid objects! Yes, it would be, but right now, since we're just irb-ing, YAML has no idea what a Family or Kid object looks like. Let's try this:


class Family; attr_accessor :name; end
class Kid; attr_accessor :name; end

YAML.load(File.read("test.yml"))
#=> #<Family:0x576624 @name="McSpadden", @kids=[#<Kid:0x5768a4 @name="OshGosh">]>

That's what I'm talking about!

yaml-lab

I've found that when experimenting with new libraries, it helps me to write a quick spec suite to test assumptions and understanding. In this vein, I've created a yaml-lab project on GitHub with some of my experimentation. Feel free to clone it, fork it, whatever it. Here's the link: yaml-lab on github.
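
To give a flavor of the kind of assumption-testing I mean, here's a made-up example in Test::Unit (it's the spirit of yaml-lab, not code lifted from the repo):


require 'yaml'
require 'test/unit'

class Kid; attr_accessor :name; end

class YamlAssumptionsTest < Test::Unit::TestCase
  # Assumption: a tagged mapping loads as the real class once that class is defined
  def test_tagged_mapping_loads_as_kid
    kid = YAML.load("--- !ruby/object:Kid\nname: Beef\n")
    assert_kind_of Kid, kid
    assert_equal "Beef", kid.name
  end

  # Assumption: dump followed by load preserves instance variables
  def test_round_trip_preserves_ivars
    kid = Kid.new
    kid.name = "OshGosh"
    assert_equal "OshGosh", YAML.load(kid.to_yaml).name
  end
end
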

That's all for now.

Sunday, August 16, 2009

An Hour with Chef

Today I sat down and decided I would block out an hour to try out Chef, an ultra-hip, super-buzzed "systems integration framework." I really don't have an immediate need for Chef (well, at least not that I know of yet!), but I love the idea of managing servers "by writing code, not by running commands," and it's all open source Ruby goodness under the sheets, so I thought I'd give it a whirl.

I install the gem and head over to the "Configuring Chef" page. Right away I'm informed that working with Mac OS X probably is NOT going to go well out of the box. I do a few Google searches and see there are a couple of Mac OS X cookbooks out there, but for this exercise I decide to stay as close to official Opscode-supported code as I can, so I dive in.

I get my ~/solo.rb and ~/chef.json set up per the instructions, and now it's time to run chef-solo.
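
(For the curious, ~/solo.rb is only a couple of lines pointing chef-solo at the paths the bootstrap tarball unpacks into. This is a sketch from memory, so treat the exact values as approximate; ~/chef.json is just the JSON attributes copied from that same page, so I won't retype it here.)


# ~/solo.rb -- minimal chef-solo config (approximate)
file_cache_path "/tmp/chef-solo"
cookbook_path   "/tmp/chef-solo/cookbooks"
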


mark-mcspaddens-computer:~ mark$ sudo chef-solo -c ~/solo.rb -j ~/chef.json -r http://s3.amazonaws.com/chef-solo/bootstrap-latest.tar.gz
Password:
[Sun, 16 Aug 2009 16:05:47 -0500] INFO: Starting Chef Solo Run
/tmp/chef-solo/cookbooks/ruby/recipes/default.rb:45:in `from_file': undefined method `each' for nil:NilClass (NoMethodError)
from /usr/local/lib/ruby/gems/1.8/gems/chef-0.7.8/lib/chef/cookbook.rb:139:in `load_recipe'
from /usr/local/lib/ruby/gems/1.8/gems/chef-0.7.8/lib/chef/recipe.rb:79:in `include_recipe'
from /usr/local/lib/ruby/gems/1.8/gems/chef-0.7.8/lib/chef/recipe.rb:64:in `each'
from /usr/local/lib/ruby/gems/1.8/gems/chef-0.7.8/lib/chef/recipe.rb:64:in `include_recipe'
from /tmp/chef-solo/cookbooks/passenger_apache2/recipes/default.rb:26:in `from_file'
from /usr/local/lib/ruby/gems/1.8/gems/chef-0.7.8/lib/chef/cookbook.rb:139:in `load_recipe'
from /usr/local/lib/ruby/gems/1.8/gems/chef-0.7.8/lib/chef/recipe.rb:79:in `include_recipe'
from /usr/local/lib/ruby/gems/1.8/gems/chef-0.7.8/lib/chef/recipe.rb:64:in `each'
... 16 levels...
from /usr/local/lib/ruby/gems/1.8/gems/chef-0.7.8/lib/chef/application.rb:57:in `run'
from /usr/local/lib/ruby/gems/1.8/gems/chef-0.7.8/bin/chef-solo:26
from /usr/local/bin/chef-solo:19:in `load'
from /usr/local/bin/chef-solo:19


Awesome. Well, I was warned. Since this is just an educational exercise for me, I decide to dig in and see what I can find. So I pull up /tmp/chef-solo in TextMate and look at /tmp/chef-solo/cookbooks/ruby/recipes/default.rb. Ok. I see what's going on here. We're trying to install some Ruby packages and my OS isn't listed in the case statement. Well, I'm a Ruby developer and know I already have all the Ruby libs I'll need, so let's just add an else case with an empty array.


# cookbooks/ruby/recipes/default.rb

# Line 24: Start of case
# Line 45: Add else case
extra_packages = case node[:platform]
when "ubuntu","debian"
  %w{
    ruby1.8
    ruby1.8-dev
    rdoc1.8
    ri1.8
    libopenssl-ruby
  }
when "centos","redhat","fedora"
  %w{
    ruby-libs
    ruby-devel
    ruby-docs
    ruby-ri
    ruby-irb
    ruby-rdoc
    ruby-mode
  }
else []
end



I'm pretty confident that will fix our error without leaving us missing any libs we'll need, so let's try this again. Only this time we won't tell chef-solo to use the remote cookbook, so by default it will use our local one instead. (Which is a big bonus: if I jack things up too bad, we just go back to the remote version.)


mark-mcspaddens-computer:~ mark$ sudo chef-solo -c ~/solo.rb -j ~/chef.json
Password:
[Sun, 16 Aug 2009 16:32:48 -0500] INFO: Starting Chef Solo Run
[Sun, 16 Aug 2009 16:32:54 -0500] INFO: Installing gem_package[stompserver] version 0.9.9
[Sun, 16 Aug 2009 16:33:03 -0500] INFO: Creating directory[/stompserver] at /stompserver
[Sun, 16 Aug 2009 16:33:03 -0500] INFO: Setting mode to 755 for directory[/stompserver]
[Sun, 16 Aug 2009 16:33:03 -0500] INFO: Creating directory[/stompserver/log] at /stompserver/log
[Sun, 16 Aug 2009 16:33:03 -0500] INFO: Setting mode to 755 for directory[/stompserver/log]
[Sun, 16 Aug 2009 16:33:03 -0500] INFO: Creating directory[/stompserver/log/main] at /stompserver/log/main
[Sun, 16 Aug 2009 16:33:03 -0500] INFO: Setting mode to 755 for directory[/stompserver/log/main]
[Sun, 16 Aug 2009 16:33:03 -0500] INFO: Creating template[/stompserver/run] at /stompserver/run
[Sun, 16 Aug 2009 16:33:03 -0500] INFO: Setting mode to 755 for template[/stompserver/run]
[Sun, 16 Aug 2009 16:33:03 -0500] INFO: Creating template[/stompserver/log/run] at /stompserver/log/run
[Sun, 16 Aug 2009 16:33:03 -0500] INFO: Setting mode to 755 for template[/stompserver/log/run]
[Sun, 16 Aug 2009 16:33:03 -0500] INFO: Creating a symbolic link from -> /etc/init.d/stompserver for link[/etc/init.d/stompserver]
[Sun, 16 Aug 2009 16:33:03 -0500] INFO: Creating a symbolic link from /stompserver -> /stompserver for link[/stompserver]
[Sun, 16 Aug 2009 16:33:03 -0500] ERROR: service[stompserver] (/tmp/chef-solo/cookbooks/runit/definitions/runit_service.rb line 65) had an error:
No such file or directory - /etc/init.d/stompserver status



Ok...something's going down with stompserver. A quick 'gem list' shows I have the gem. Let's see if the stompserver command works.


mark-mcspaddens-computer:~ mark$ stompserver -h
Usage: stompserver [options]
-C, --config=CONFIGFILE Configuration File (default: stompserver.conf)
-p, --port=PORT Change the port (default: 61613)
-b, --host=ADDR Change the host (default: localhost)
-q, --queuetype=QUEUETYPE Queue type (memory|dbm|activerecord|file) (default: memory)
-w, --working_dir=DIR Change the working directory (default: current directory)
-s, --storage=DIR Change the storage directory (default: .stompserver, relative to working_dir)
-d, --debug Turn on debug messages
-a, --auth Require client authorization
-c, --checkpoint=SECONDS Time between checkpointing the queues in seconds (default: 0)
-h, --help Show this message
mark-mcspaddens-computer:~ mark$ which stompserver
/usr/local/bin/stompserver


Ok, I have stompserver, but I'm not getting a link into /etc/init.d/stompserver. So I hack around with trying to pass this directory into the runit_service command in the stompserver default recipe. After a few unsuccessful tries, I resort to my old caveman ways and run a command:


sudo ln -s /usr/local/bin/stompserver /etc/init.d/stompserver


Time to run chef-solo again...it spins for a while...and spins for a long while...so I decide to investigate. I know the next command we're trying to hit is "/etc/init.d/stompserver status", so I run that in another Terminal and get:


mark-mcspaddens-computer:~ mark$ /etc/init.d/stompserver status
status
MemoryQueue initialized
TopicManager initialized
QueueManager initialized
Stomp protocol handler starting on 127.0.0.1 port 61613
/usr/local/lib/ruby/gems/1.8/gems/eventmachine-0.12.0/lib/eventmachine.rb:500:in `start_tcp_server': no acceptor (RuntimeError)
from /usr/local/lib/ruby/gems/1.8/gems/eventmachine-0.12.0/lib/eventmachine.rb:500:in `start_server'
from /usr/local/lib/ruby/gems/1.8/gems/stompserver-0.9.9/bin/stompserver:34
from /usr/local/lib/ruby/gems/1.8/gems/eventmachine-0.12.0/lib/eventmachine.rb:224:in `call'
from /usr/local/lib/ruby/gems/1.8/gems/eventmachine-0.12.0/lib/eventmachine.rb:224:in `run_machine'
from /usr/local/lib/ruby/gems/1.8/gems/eventmachine-0.12.0/lib/eventmachine.rb:224:in `run'
from /usr/local/lib/ruby/gems/1.8/gems/stompserver-0.9.9/bin/stompserver:17
from /etc/init.d/stompserver:19:in `load'
from /etc/init.d/stompserver:19


[NOTE: After the steps that follow, I realize I probably got this error because chef-solo was still running while I tried it. But it's still always fun to see things step by step, so I'll include my next steps even though they're probably not necessary.]

Super. I'm waaaayyyy past the "not running commands" ideal at this point, so I do "sudo gem install eventmachine" and it upgrades me from 0.12.0 to 0.12.8. I kill the original chef-solo command (which is still just hanging), and next I see if these acts have appeased stompserver.


mark-mcspaddens-computer:~ mark$ /etc/init.d/stompserver status
status
MemoryQueue initialized
TopicManager initialized
QueueManager initialized
Stomp protocol handler starting on 127.0.0.1 port 61613


This sits there for a while, but no errors, and since we're trying to run some kind of server here, I take no news as good news. I kill off the stompserver and decide it's time to try chef-solo again. And it hangs. But I've got an idea.

In one terminal I do "sudo /etc/init.d/stompserver status" and after that is rolling I open a different Terminal and run chef-solo. Looks like we're getting somewhere.


[Sun, 16 Aug 2009 17:13:33 -0500] INFO: Starting Chef Solo Run
[Sun, 16 Aug 2009 17:13:34 -0500] INFO: Creating a symbolic link from -> /etc/init.d/stompserver for link[/etc/init.d/stompserver]
[Sun, 16 Aug 2009 17:13:34 -0500] INFO: Creating a symbolic link from /stompserver -> /stompserver for link[/stompserver]
[Sun, 16 Aug 2009 17:13:39 -0500] INFO: Installing package[apache2] version 2.2.10


So I've got stompserver behind me, but this Apache install is taking a bit of time...and then I get:


[Sun, 16 Aug 2009 17:18:08 -0500] ERROR: service[apache2] (/tmp/chef-solo/cookbooks/apache2/recipes/default.rb line 30) had an error:
wrong number of arguments (0 for 1)


Ok. Let's open up the apache2 cookbook and see what's up. I open /tmp/chef-solo/cookbooks/apache2/recipes/default.rb and start trying a few ways of including "mac_os_x" and my specific system setup in a few of the case statements, but after about 5 minutes I've still got the same error.

----

And my hour is up*. Kind of a bummer...but not really. To me, this is really kind of cool. Sure I would have loved to get my chef server configured, but getting to crack open a few recipes and even the chef source itself is a really cool exercise.

Next time I go Chefing, I'll probably start with one of the Mac OS X cookbooks out there or fire up my Ubuntu laptop that normally sits idle in the office. Honestly, I would love to see an Opscode endorsed/supported/linked-to Mac OS X cookbook; it would have given me the confidence to start with one of those today. (But then again, I may have missed out on some good code spelunking.)

Overall I enjoyed my hour with Chef and look forward to trying this all again at a future date. I'm very surprised at how straightforward the framework seems to be and how easy it was to jump in and get dirty with it.

All this talk about Chef, and cookbooks, and recipes has me hungry. Time for a snack! :)


* A younger, less disciplined me would have kept hacking until I got it working, but that really wasn't the point of the exercise. Also, the timestamps will show I went a little longer than an hour: my brother called and we chatted for a bit. Family > Code...most of the time.

Wednesday, July 1, 2009

Kindle 2 PDF Conversion and Special Characters in the Filename

A quick note about an issue I just ran into while trying to convert PDFs into the special Kindle .azw format using the username@free.kindle.com email address.

When either the "(" or ")" or even the "-" characters were in my file name, I got an error. Removed them, and the pdf converted just fine.

Just an FYI.

Tuesday, June 30, 2009

Did you know we were gone?

Last week, Shevawn and I headed up to beautiful Lake Tahoe for a 4 day escape-the-heat-athon/anniversary-celebration. (Photos for those interested.)

One of the decisions we made before we left was to NOT talk about our trip on Facebook or Twitter until we were back home. The deal is that it's not that hard to find out where we live, and letting everyone know we'd be out of state for several days didn't sound like a great idea for the security of our house and possessions.

So we didn't talk about it online, and you know what? It was kind of hard. We belong to these communities because we like the people in them (generally speaking) and we've grown accustomed to sharing what we are doing (and getting feedback) in real time. Sure, we shared things when we got back, but by then, those things were yesterday's news, old and stale in our now real-time information flow.

The question I have is this: Do you twitter/facebook leading up to and while you're on vacation? Why or Why not?

Monday, June 29, 2009

Well looks like the migration hasn't happened yet

Maybe we'll take care of that sometime soon.

If you're looking for some of the old content from markmcspadden.net, check out this Google search.

I do hope to get it all migrated over at some point, but I'm not going to let that hold up future posts.

Tuesday, January 13, 2009

Migrating

I am working on migrating the old markmcspadden.net to this blogger site. Knowing how things go, it will probably be at least a month until it's all here.