Bacon Honey-Mustard BBQ Chili

Posted on: Wednesday, Dec 16, 2009

In addition to programming all the time, I’ve also been known to hack my way through the kitchen whenever I get the time.  My latest creation was tested out at a chili cookoff last month, where it took 2nd place for Best New and Different Chili.  I whipped up a second batch last night for a potluck at work, and after handing out samples around the office, I got a lot of requests for the recipe.  Please, do try this at home, kids.

Marc’s Bacon Honey-Mustard BBQ Chili


  • 1lb Bacon strips
  • 2lbs Ground beef
  • 1 Onion diced (I used yellow, but can be varied for taste)
  • 1/2 jar Honey Mustard (5 oz?)
  • 3 tbsp Honey
  • 15oz can of Kidney beans
  • 32oz Tomato sauce (I used Classico Tomato & Basil)
  • 1 pack Carroll Shelby’s Chili Mix (includes cayenne pepper in separate packet, so you can set your own spice level)
  • 10 cloves of Garlic (yeah, you read that right)
  • 10 drops Liquid Smoke
  • 1 lime


  • Crock pot
  • Garlic press
  • Wok or other large frying pan


In a wok or frying pan, cook the bacon on medium-high heat, 4-5 strips at a time.  Get them to the point where they’re just turning brown, then remove and set aside.  Do not discard the bacon grease that accumulates in the pan; leave it there.  Chop the cooked bacon into bite-sized pieces, approx 1 inch on a side, and dump them in the Crock pot.  Once all the bacon is cooked, throw the ground beef in the wok and brown it in the bacon grease.  Empty the entire contents of the pan (fat, juices, meat and all) into the Crock pot.  Dice the onion and brown that in the wok, then empty it into the pot.

Add honey mustard, honey, kidney beans (including juice from the can), tomato sauce and liquid smoke to the pot.  The chili mix contains several packets of spices.  I used half of the cayenne pepper packet (to keep it mild) and all of the other packets, so toss those in.  Press the garlic and add to pot.  Cut the lime in half and squeeze into the mix.

You should now have filled the Crock pot with all the delicious ingredients, so mix the entire concoction together.  You can either cook it on High for 4 hours or on Low for 8 hours.  I did high the first time and low the second, and they both turned out great, so it’s really up to you.  Low lets the flavors seep in more, so I prefer it when I have the time.  Now grab a cocktail and wait till it’s done.

Once the cooking is done, stir it really well and set the Crock pot on Warm.  Grab a ladle and serve!

Did you like it?

If you enjoyed this recipe, leave a comment and let me know!  What did you like best?  What did you do differently?  If there’s interest, then I’ll post some of my other recipes as I make them.  Enjoy!

Merb Camp 2008

Posted on: Wednesday, Nov 5, 2008

A few weeks ago, I got to attend Merb Camp on the UCSD campus.  For those of you who haven’t heard of it yet, Merb is a new web framework written in Ruby, which many consider to be the spiritual successor to Rails.  Essentially, where DHH (creator of Rails) professes that “the Rails way” is the way to go, Merb takes a much more Linux-style approach, figuring that individual components of the framework should be swappable for others with as much ease as possible.  Equally important, Merb is thread-safe, allowing it to scale much more easily than Rails.  I haven’t yet gotten to cook up any releasable projects with it, but I’m very excited to get going on it.

Anyway, Merb Camp had ~150 attendees at the CalIT2 building.  We were also joined by hundreds more who were watching the live webcast from all over the world and participating in the discussion via IRC.  During the Q&A section after each talk, I volunteered to read out the questions that came in via IRC, so that our international guests got a chance to be a part of the fray.  Once we got to the Merb Team Panel discussion on Day 2, I was asked to moderate the panel.  The video has since been posted in the Merb Camp Video section of the site.  Here is the direct link to the panel video where you can see me in impromptu action.

All in all, it was a very fun conference, where I got to meet all kinds of developers from all over the world.  I can’t wait for the next one!

Final Impressions of COGS 121

Posted on: Thursday, Mar 13, 2008

After 10 weeks, I’ve now been able to watch all of the final presentations from the different groups. Considering that my group was the only “experienced programmer” team, I really didn’t know what to expect from the others. All in all, I’ve been incredibly impressed by the various projects they’ve managed to produce. Most importantly, it’s clear that the quality of the project ideas has nothing to do with programming ability. Implementation is another thing, but even groups of self-proclaimed “newbies” with only (prior to this class) basic HTML exposure came out with some very cool projects.

Something I hadn’t expected, but makes sense in retrospect, is the correlation between prior experience and willingness to learn new things. The class represented a huge cross section of prior-experience from student to student. Overall, those who knew the least starting out were the ones most willing to embrace new ideas and technologies. The eagerness with which the “beginners” jumped into their ideas was truly inspiring.

Soapbox: Pros and Cons

I spent a decent amount of time in and outside of class evangelizing Rails since it was so appropriate for a lot of the project ideas out there. This had different effects on different people. One group actually ran with it and built their entire project with it. A few people were probably bored to tears by my rambling. The majority of the class now knows me as the “Ruby on Rails guy”. I’m glad that I was able to expose more people to the framework, but I don’t think I was effective in conveying the core concept that I work by:

If you’re going to tackle a problem, use the best tools available.

Rails is a great tool for a wide variety of web apps. It is not the be-all end-all solution to life’s endless problems. I think I came across more as “Rails is the answer to everything” instead of what I was shooting for: “Use new tools, not just what you already know.” A large part of this problem stems from our own group using Rails for the backend; something I was actually trying to avoid.

The aim of the class is for each person to learn new technologies and develop something with them. Our project, due to the nature of its complexity, was going to require a lot of coding. Once it became clear that we were going to need our own backend DB, I wanted to avoid using PHP. Not because it wouldn’t work just fine. There’s nothing wrong with it, especially for a project this small. The problem, as I saw it, was that 4 out of 5 people in our group had a medium to high level of experience with it. That doesn’t allow for a lot of “new” learning.

My motivation for using Rails in our group was to introduce already-skilled programmers to something new, even if it meant limiting my own exposure to “new” things. For my own learning, I made sure to take time outside of our group meetings to teach myself other pieces that I haven’t seen or used before. The most visually interesting result of that is the drag-n-drop ordering of Categories. Working out the SQL queries needed to find stores that are open at a given time definitely stretched my boundaries. Obviously not as major as learning a whole new language, but it was a great way to push myself beyond what I already knew coming in.

If You Could Go Back…

Were I to do it all over again, I would have spent more time trying to preach the core philosophy of “use the best tool” instead of keeping the focus on Rails. For example, I only realized last week that we are, it appears, the only group using any kind of version control. Giving a talk on basic SVN usage, I think, would have been valuable to the class as a whole.

I had a lot of fun working on Open Past Midnight for this class. After all, I’m a programmer. But the neatest part of this class was getting to see what other people came up with, especially when they had to work with limited experience. That constraint can be incredibly discouraging, but they pulled through beautifully.

Open Past… what day?

Posted on: Monday, Mar 3, 2008

It’s been too long since the last update, especially since we’ve overcome a lot of hurdles in building this system. I’ll talk about each in its own right.

Hours of Operation

The primary quandary we faced was in how to represent a store’s hours of operation in the database. The obvious implementation is to have separate columns for each day’s open and close times. This gives us 14 columns added to the ‘store’ table, named ‘monday_open’, ‘monday_close’, ‘tuesday_open’… and so on. Since we have a different column for each day, it would make sense to have the column be of type ‘time’, right? No!

Especially because we’re dealing with locations that are open late, we need to handle cases like “On Monday, we’re open from 10am – 2am”. If the columns only handle time, then we’ll have a store with monday_open = 10:00 and monday_close = 2:00. That means we need some serious condition checking to see if it’s open. Worse yet, how do we query for all stores that are open at a specific time? Not very efficient.

So how did we deal with this? We want the columns to be able to know not just the time of day, but also the day of the week. For the example above, we want the columns to read more like monday_open = Mon 10:00 and monday_close = Tue 2:00. This would make queries a whole lot easier, as we can resort to a simple open < right_now < close test, without having to create extra logic. But SQL databases don’t have a type to represent time + day of week. The options are time (hours, minutes, seconds only) or full-blown datetime (year, month, day, hour, etc). Since we’ve concluded that time is not enough on its own, we’re forced into using a full datetime field.

But wait! That means monday_close will look more like March 4, 2008 2:00 -0800. How are you going to deal with the comparisons when it’s now June? The condition will always fail, thinking the store closed months ago! And that’s where our solution comes in. The solution is, to some degree, in the question. Our comparisons will involve the entire datetime, but we only care about day-of-week and time. Therefore, we just need to pick arbitrary values for the rest of the datetime fields and have the code enforce their usage. We ended up using the week of Jan 1, 2007 for our project, primarily because Jan 1 is a Monday, making it very easy to translate a number to day-of-week. Throw in a few helper functions to make the date translation seamless to the user, and we’re set!
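To make the anchor-week trick concrete, here’s a rough sketch using GNU date (the helper function is hypothetical, just for illustration; our actual helpers live in the Rails code):

```shell
# Map (day-of-week, time) onto the anchor week of Jan 1, 2007, which starts on a Monday.
# anchor_datetime DOW TIME, where DOW is 0 for Monday through 6 for Sunday.
anchor_datetime() {
  local dow=$1 time=$2
  date -d "2007-01-01 ${time} +${dow} days" "+%Y-%m-%d %H:%M"
}

# "Open Monday 10am to 2am": the close time crosses midnight, so it anchors to Tuesday.
anchor_datetime 0 "10:00"   # monday_open
anchor_datetime 1 "02:00"   # monday_close
```

Every stored value lands in the same arbitrary week, so the simple open < right_now < close comparison works once “right now” is translated into that week too.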

Website Hosting

As I said in my previous post about web hosting companies, DreamHost is a great cheap shared-hosting solution, but its Rails support leaves a bit to be desired. I was pleased to see that they’ve cleaned up their documentation, and getting a Rails site running requires a lot fewer hacks now. But the speed issue is still a huge one. Big enough for me to still say that for a commercial Rails app, I would not use them. For a class project, they’re perfect.

Introducing the Team to Rails

We’ve transitioned the project to Rails completely, which has made, as usual, a lot of grunt work disappear into thin air. It’s working well for what we need from it, albeit a very simple site. The real issue with Rails is that it’s still so new that very few people know about it, much less have used it before. Whenever you introduce a group of people to a new technology, you’re going to have mixed results, and our team is no exception. Some have taken to it excitedly, some with reserve, and some are just not very interested. All this is fine, especially since there’s plenty of work to do outside of Rails. It’s interesting to have such a cross-section of reactions all working together.

Interfacing with Google

Since our project employs Google Maps to display store locations, we need to interface with their API. More importantly, we need to load latitude and longitude coordinates of the stores into our own database. Since we all quickly agreed that we didn’t want to do that by hand, we worked out an alternate solution.

The interface to add a new store to the database employs Google Local Search to find stores that are already in Google’s system, and therefore already have all of the information we need (except hours). From the search results we get back from Google, we can click a button that then pushes the data up to our own server. Then we go in and set the hours of operation and assign a category. It took a lot of tweaking to get it working right, but after a few hours of Rodolphe working on the client-side Javascript and me working on the server-side receiving end, we got it ironed out.

Open Past Midnight – Prologue

Posted on: Sunday, Jan 27, 2008

So our COGS group has decided to do a Google Maps-based site that allows the visitor to find nearby restaurants (maybe stores, etc) that are open at different times of the day.  We’ve set up some framework stuff (SVN, domain, etc) and need to start getting things together.

By Thursday (1/31/08) we will have mapped out the data for the following locations.  This means we need to go to those areas and grab all of the data about the various stores (name, phone, address, hours) and submit it to Google’s database (or verify that they already have it.)

  • Chris: Costa Verde shopping center
  • Evan: Ralph’s shopping center
  • Rodolphe: Whole Foods shopping center
  • Marc: Einstein’s Bros shopping center
  • Vivien: Vons shopping center

Our in-progress feature list:

Must-Have Feature Set

  • Default time should approximate the user’s local time
  • An interface to allow the user to change the time displayed
  • Search for locations based on name and/or time

Would-Be-Nice Feature Set

  • Mobile-access compatible (iPhone, etc)
  • Heatmap of neighborhoods to display density of locations open at the given time

Star Micronics TSP600 receipt printer driver released

Posted on: Thursday, Jan 17, 2008


Two very kind users have repackaged the driver for Ubuntu 9.10 Karmic Koala. The old package is still available below, but the new one is here: starcupsdrv_2.3.0-1ubuntu1_i386.deb

Old driver
Finally! By popular demand, I’ve managed to post starcupsdrv_2.3.0-0ubuntu1_i386.deb just for you! Since WordPress is not nice about uploading .deb files, I’ve put it on a separate sub-domain with a direct download link. No special scripts or sign-in required.

If you download and use the deb file, I’d appreciate if you could leave a quick comment on this post to say hi. If the deb is really that popular, I’ll work on expanding support for it.

Once you’ve downloaded the deb, you can install it with the following command:

sudo dpkg -i starcupsdrv_2.3.0-1ubuntu1_i386.deb

Personal Goal #837: Brick Breaking

Posted on: Tuesday, Sep 18, 2007

One of my philosophies in life is to set personal goals that seem almost impossible to reach. Then reach them. Yes, it sounds ridiculously cliche, but it moves me forward. One such goal of mine was to break 8 bricks. While I didn’t quite nail it last November, I got close.

This year, I plan to try my hand (literally) at 9 instead.

Setting up your own Ubuntu repository on Dreamhost

Posted on: Monday, Sep 17, 2007

Following up on my post about installing a receipt printer on Ubuntu, I mentioned that I would post my experiences setting up a Ubuntu repository. Most of the documentation out there only covers setting up a repo on your own local machine, for your own local use only. On top of that, the docs vary in how much they tell you about the specifics of a Ubuntu vs Debian setup. Since I’m likely not the only one trying to set up this kind of thing, I’m posting a step-by-step guide on how to do it yourself.

The Goal

We want to set up a publicly accessible Ubuntu repository so that we can distribute our own .deb packages to other users. Since we’re on a budget, we’re going to host this repo on our DreamHost account. Above all else, it needs to be easy to maintain, since we want this to make life easier, not harder.

The Plan

We’re going to create a local repository on our development machine using dput and mini-dinstall. Then we’ll sync it to the remote web server using rsync and ssh.

The Prerequisites

As always, this guide assumes that you are familiar with Linux and are comfortable doing things on the command line. If not, turn back now, before it is too late! This is not a guide for beginners.

We’re also going to assume that you’ve got a hosting package with DreamHost, and a domain name to which you can add sub-domains.

Lastly, we’ll assume that your development machine is running some form of Ubuntu Linux. I used Feisty (7.04) when writing this article. Adjust the steps as needed for your own setup.

The Process

1. Set up a domain

To keep your website clean, it’s a good idea to set up a sub-domain of your primary site on which to host your repo. Using my site as our example, we set up as our domain. On DreamHost, you want to set this up as a Fully Hosted Domain. If you know what you’re doing, you can customize the options on the setup screen, but the defaults are all fine.

2. Configure your SSH login to the new domain

To make life easier down the road, we’ll need a password-less SSH login to the site. I won’t go into the details of how to do this, as it has been documented far better in countless places. The DreamHost wiki, for example. I’ll assume from here on out that you can simply type “ssh” and get dropped into a bash shell.

Again, if you don’t understand that last sentence, stop now, before you hurt yourself.
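For the impatient, a minimal sketch of the key setup (the key filename and the remote user/host are placeholders; see the DreamHost wiki for the full treatment):

```shell
# Generate a dedicated, passphrase-less key pair; the filename here is just an example.
keyfile="$HOME/.ssh/id_rsa_repo"
mkdir -p "$HOME/.ssh"
rm -f "$keyfile" "$keyfile.pub"
ssh-keygen -q -t rsa -N "" -f "$keyfile"

# Then install the public half on the server (interactive, so commented out here):
# ssh-copy-id -i "$keyfile.pub" user@yourdomain.com
```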

3. Create a folder for the public repository

To keep things clean, I like to segment things a bit. This is an optional step, but one that I suggest doing unless you have a really good reason not to. Log into your server real quick and in the webroot, make a folder called ‘archive’. In the case of our example, this would go something like this:

mkdir archive

4. Set up your repository… locally

I know, I know, we don’t care about having a local repo, we want to share it with the world! Sit tight, junior. This is part of the process. On shared hosting, you likely won’t have access to the tools needed to easily maintain a repository, and for good reason. So we’re going to set it up (and yes, keep it there) on our development machine first.

Rather than rewrite it, I’ll point you to the excellent guide that I followed called LocalAptGetRepository, over at the Ubuntu Community site. Follow that all the way down to (but not including) the section called “Changing your systems apt-get sources.list”.

All done? Great! Let’s move forward.
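As a reference point, my two relevant config files ended up looking roughly like this (the archive path, the distribution name, and the “local” target name are all assumptions from my own setup; adjust them to match whatever you configured in the guide):

```
# ~/.mini-dinstall.conf
[DEFAULT]
archivedir = ~/archive
generate_release = 1

# ~/.dput.cf
[local]
method = local
incoming = ~/archive/mini-dinstall/incoming
run_dinstall = 0
post_upload_command = mini-dinstall --batch
```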

5. What you should have so far

At this point, you should have a folder somewhere on your dev machine that looks something like this (the exact layout depends on your mini-dinstall settings):

archive/
  feisty/
    Packages
    Packages.gz
    Sources.gz
    Release
  mini-dinstall/
    incoming/

And in those folders should basically be everything you need for your repo, hopefully with one or more packages already imported. If not, I suggest getting to that point before you move on.

6. Copying your local repo to the remote webhost

We can simply type in the rsync command to get all our files up there, but we want this to be easy later on, remember? So let’s make a simple bash script that can do it all for us, so we don’t need to constantly retype the whole thing. I recommend calling it something simple like “sync-repo”. Stick the following into it, making sure to adjust for your setup (note that the whole rsync command goes on a single line):

rsync -rlptvzh --stats --progress --exclude=mini-dinstall -e "ssh -l user" /home/marc/archive/
Now save the script, chmod +x it, and run it! If all went well, you should be able to point your browser at the equivalent of and see the same folder tree as the one on your machine. You’ll notice that we excluded the mini-dinstall folder, because that should not be made public.

The Beautiful Finish

The final step is to add your brand new repository to your /etc/apt/sources.list. Open up the list in your favorite editor:

sudo vim /etc/apt/sources.list

Then add in the lines for your repo:

deb feisty/
deb-src feisty/

Update your cache and you’re done!

sudo apt-get update

The Conclusion

If you’ve followed along, you should now have your very own Ubuntu repository. To add new packages to it, just use dput on your local machine, then run your sync-repo script to update the online version. It’s that easy!

So for one of my clients, I’m setting up a Ubuntu-based Point-of-Sale (POS) system. Part of the headache of getting this going (on top of actually building an interface) is getting the receipt printer to work. One of my biggest pet peeves is manufacturers who provide “Linux drivers” but only pre-compiled binaries for crap like Red Hat. Thankfully, I got to choose the printer we ordered for the store, and Star Micronics provides GPL’ed source code to their drivers. Hooray! So we ordered a TSP600, black, USB interface, with the auto-cutter.

The next problem was getting their code to compile and install cleanly on a Ubuntu Feisty system. My goal in this was not only to be able to compile the library from source, but to package it into a .deb file for easy installation on the store computer. After a whole day of researching and testing, I finally got it to work.  It took some tweaking of the default makefile and other build files, but the result is a working starcupsdrv_2.3.0-0ubuntu1_i386.deb, which is the CUPS driver for these Star printer models:

  • TUP900 Presenter
  • TUP900 Cutter
  • TSP1000
  • TSP800
  • TSP700
  • TSP600 Cutter
  • TSP600 Tear Bar
  • TSP100 Cutter
  • TSP100 Tear Bar
  • SP500 Cutter
  • SP500 Tear Bar

As I’m not interested in being a full maintainer for the package, I am not planning on submitting it to the Ubuntu or Debian projects. If, however, you’re reading this and would like a copy so that you can set up your own printer quickly, just toss me an email. I’d be happy to send you the .deb to make your life easier.  Just install it, then plug your printer in, and it will be automatically recognized by your favorite CUPS administration utility.

The Packaging Experience

I had never packaged my own .deb files before, so I read a LOT of docs on the subject. First and foremost, let me reiterate what most guides start by saying: Packaging .deb files is a complicated process. Expect to spend a lot of time learning how and trying different things. That said, it is an excellent way to handle software distribution to client computers, especially when coupled with your own software repository. I’ll post my experience setting up a repo later on.

For those that want to learn how to package an application into .deb files, the following are the guides that I recommend reading front-to-back, no matter how long they may be:

  • The Debian packaging guide
  • The Ubuntu Packaging Guide
  • The Pbuilder How To
The Debian one is really just so you understand what you’re doing. The Ubuntu Packaging Guide is the real setup of what you want to follow (I focused on the debhelper method). Pbuilder is a tool you simply cannot do without, and the How To is really an explanation of how to set it up. It may seem daunting, but in the end it’s pretty simple.

Now I have a basic understanding of how to create a package. It’s hardly something that every programmer needs to get familiar with, but if you work with Debian-based distributions much, I highly recommend trying it out.

I started working on a new client site in Rails recently and reached the point where I was ready to launch a starter-version over at OCS. In the process, I wanted to upgrade a few of my practices and utilities, most notably Capistrano. I have been using the 1.4 series for a while, but this was a brand new site, so what better opportunity would there be to start using 2.0?

Since Capistrano 2.0 is relatively new to the scene, the documentation (as with most Rails-related projects) was lagging behind. The team changed a lot about how 2.0 works, making most of my old deployment recipes obsolete. The problem is, the info about how to write new ones is still very scattered and hard to navigate. OCS was suffering from much the same problem, with no detailed guide on getting a new Rails app ready and launched on their servers. Since I knew I’d need to repeat the process in the future, I decided to document the process as I went. Halfway through, I realized that I was likely not the only one who could benefit from such a guide.

The result of my efforts was posted later that day on the official OCS Support Wiki, as an article entitled “Step-by-step setup using Capistrano 2.0 and Mongrel” under their Ruby on Rails section. It even got a mention and some kind words on the OCS Blog.