
SnapCI: everything I ever wanted

Some time ago, I worked at ThoughtWorks.  It was my job to ensure that the CI server was giving accurate feedback to the team, and that we had a remote hope of deploying applications into production.  ThoughtWorks had (and has) some incredible people: I probably should have stuck around longer.

At the time, I was frustrated with the CI tools that we had.  The three different ports of CruiseControl that we tended to use (probably why Jenkins ate CruiseControl’s lunch) all suffered from similar issues:

  • Configuration often wasn’t easy, especially when it spanned multiple projects or jobs.  You could commit a build-breaking configuration change that would fail the next build, through no fault of the developer who made that commit.
  • From about 2005 it was clear that this Build Pipeline thing had legs, but it took a long time for the tools to catch up with the concept.

The industry still has a long way to go to help us fulfil the promises of Agile and Continuous Delivery; but I’m happy to report that my problems are solved, and the tool that solved them is Snap CI, from my old colleagues at ThoughtWorks.

We’re using it on a few Git-based projects at work, and it’s very reliable and simple.  The biggest project has four stages:

  • Fast Feedback
  • AMI
  • Staging
  • Production


It shows one real build pipeline for all of those stages.  Any time I make a configuration change to a pipeline or any of its stages, the entire pipeline triggers.

Git helps a lot: if I need to know what the current HEAD is, the build scripts can just ask, and each stage has the same access to git.  GitHub integration means that we can control access to Snap via Teams in GitHub.
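Asking git for the current commit from a build script really is a one-liner.  Here’s a minimal sketch; it creates a throwaway repo so it runs anywhere, but in a real CI stage the working directory is already a clone and you’d only need the `rev-parse` lines:

```shell
#!/bin/sh
set -e
# Stand-in for a CI workspace: a throwaway repo with one commit.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "demo commit"

# What a build script would actually run in any pipeline stage:
COMMIT=$(git rev-parse HEAD)           # full 40-character SHA
SHORT=$(git rev-parse --short HEAD)    # abbreviated SHA, handy for artifact names
echo "Building commit ${COMMIT} (short: ${SHORT})"
```

Because every stage has the same clone, every stage gets the same answer, so the SHA makes a reliable label for AMIs and other build artifacts.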

We can store passwords and credentials securely in the pipeline.

They’re also rolling out Docker support.

The best part of it? I’m not slaving over a hot CI server; I’m working on projects that are useful to my employers instead.  There’s never been a better time to be a developer.

DevOps Conference Calendar

Neal Mueller asked me to share his DevOps Conferences site.  Nice to have a list of relevant conferences.  I’d love to be able to subscribe to them, so I could be reminded to look into proposals before the deadline.  Mind you, that would mean leaving the house.

Containers is the new AWS in CI

When Atlassian came out with AWS integration, it was a great step forward.

Jenkins announced support for Kubernetes a few days ago, and I think many vendors will be accelerating plans to support Docker (and then making it easy to develop apps that can cluster on Kubernetes).

I don’t use Jenkins any more (mainly because I’m exceptionally lazy), but it’s good that they’re escalating this arms race.

DevOps vs. SCM

There’s a team of people in your company.  They’re responsible for:

  • Storing built versions of your code in a repository
  • Ensuring that you can reproduce each one of those builds
  • Tracking changes in the projects
  • Baselining and merging code branches

Is that the DevOps team?  No.  It’s a Software Configuration Management team, and they’ve been around for as long as there have been developers.

I believe that these teams will all rebrand as DevOps teams (let’s ignore the fact that you can’t engender collaboration by making a team responsible for it); but I also think there’ll be fewer of them.  Here’s why:

  • Tools are getting better.  There are some nasty version control systems in the dustbin of history.  Now most people can use Git, and there’s no need for a tools expert to help people branch.
  • Teams are getting smaller.  There’s never been a better time to code: we can do so much with a dynamic language and a credit card.  Do we need teams of people to help us integrate gargantuan codebases any more?
  • Operating systems suck less: tell us about your issues writing scripts for HP-UX and Solaris. Pepperidge Farm Remembers.  We’ve got quite a homogeneous environment now: you need your apps to build on OS X and Linux, or Windows.
  • Virtualisation and containers let us test the world: where we used to have test environments with people who were paid to keep them in sync (that was a cushy number), we can now reproduce the entire thing in Docker on your cousin Johnny’s old MacBook.

We’re still doing SCM and CM.  Interestingly, we now seem to use the term Configuration Management as a synonym for System Configuration Management (e.g. Puppet, Chef, Ansible) instead of the generic discipline.

We’re still doing CM in the small, without thinking of it:

  • Made a production branch? CM.
  • Identified what changes are in a release? CM.
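Both of those everyday CM moves are plain git.  A sketch with hypothetical tag and branch names (it builds a throwaway repo with two tagged releases so it runs anywhere):

```shell
#!/bin/sh
set -e
# Throwaway repo with two tagged releases, standing in for a real project.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=cm@example.com -c user.name=cm \
    commit -q --allow-empty -m "first release"
git tag v1.0.0
git -c user.email=cm@example.com -c user.name=cm \
    commit -q --allow-empty -m "fix login bug"
git tag v1.1.0

# Made a production branch? CM.
git branch production-1.1 v1.1.0

# Identified what changes are in a release? CM.
git log --oneline v1.0.0..v1.1.0
```

No tools expert required; the baseline (the tag), the branch, and the change report all fall out of the version control system itself.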

Just remember your history.

APIs are eating your consulting business

Last year we engaged a firm to do a non-core (and frankly, annoying) IT project.  We reasoned that their experience in their domain would mean that they’d have a better chance of success.

It turned out that their business was built around software from a third-party supplier, and a thick veneer of bullshit.  Of course, there were issues.  What floored me was that these consultants were helpless, like a 10-year-old flying an aircraft.  They didn’t have any expertise, or a plan B.

We ended up escalating to their supplier, and in the end I fixed the issue with some bash scripts.  This wasn’t a good outcome; I didn’t want to be the hero.  I would have been happier to be told that under the bullshit and the organisational silos was a root cause that had an ETA for a fix.  A week after I declared hollow victory, the root cause was fixed.  Bah.

Fast-forward a few months, and we need to do some similar projects.  Naturally, I won’t be going to these consultants who are selling billable time.  I’ve been offered someone (from another organisation) who can do “some of the boring work”, but that misses the point: the rise and rise of APIs (and dynamic languages) means that there’s a decreasing need to throw people at our problems.

If I’m going to accept any help, it’s got to be someone who can look at the problem domain, choose a tool and then write appropriate scripts (if needed) to reliably solve my problem.  If your business is built on humans racking up billable hours to do tasks that can be automated, all I can say is this:


Upcoming conferences that I won’t be at, April 2015

I wanted to share the details of a few conferences.  I won’t be there, because I enjoy a life that’s mostly free of jetlag and commuting.

  • The Jenkins peeps are doing a world tour of the US East Coast, Europe, Israel, and the US West Coast.  That’s over summer if you’re in the Northern Hemisphere.  They’ll be doing a CD Summit at each of those conferences.
  • CITCON Europe is in Helsinki, in September.
  • CITCON North America is in Ann Arbor, Michigan, this October.
  • We hope to get CITCON Australia New Zealand organised for 2016.  We need lots more conferences in New Zealand; we have great coffee, a delightful semi-tropical climate and Raspberry Lamingtons.
  • At Neo, we’re doing GraphConnect Europe (in London) in May.

The 80/20 rule in Cloud Development

Guest Post by Brian Whipple, Marketing & Communications Manager at Cycligent

There is a commonly known rule in business called the 80/20 Rule. Introduced as an economics rule to explore distribution of wealth, the 80/20 Rule has become a common business management principle, defined by this common “rule of thumb”:

80% of effects come from 20% of causes and 80% of results come from 20% of effort.

I often use the 80/20 rule to shed light on my own work habits, or to analyze cause and effect on results that I may have seen.

Recently, I read an older blog post on the 80/20 rule and how it applies to software development. The author described how, in software development, 80% of performance improvements are found by optimizing 20% of the code. Of course, this is not an exact science, but more an estimate to help a developer evaluate where time is spent.

Experience has taught me that building a highly scalable, highly resilient, and highly available web application is extremely hard. Learning the ins and outs of AWS is no small feat for even the most experienced of developers, let alone someone new to coding for the cloud.

As a result, we came up with an 80/20 rule of our own when discussing cloud development:

Only 20% of the features offered by AWS are utilized. The other 80% are extremely complex and are usually not implemented or maintained.

Because building enterprise web apps is not always easy, the developers at our company, Cycligent, have found ways to optimize work for the best results. Here are a couple of ways that we have put the 80/20 rule into action:

We found that heavy client-side applications were often the best fit for what we were building. When combined with Node.js and MongoDB on the backend, they brought a lot of benefits. The line between a front-end developer and a back-end developer became much more blurred, making our developers more versatile. Impedance-mismatch issues stemming from the client and server speaking different languages went away, and productivity rose because there was less context switching between frontend and backend code. In short, a small amount of time spent up front choosing good tools paid off later.

We found that despite the promises of the cloud, it’s not always easy. Spending some time up front to abstract away as many of the complexities as possible made our developers much more productive and happier, and it lowered our defect rate. To give a specific example: when moving to the cloud and dealing with scaling our application and distributing load across many servers, developers often got tripped up by when and how to communicate properly over a message bus. To alleviate that, we spent some time making the message routing happen automatically behind the scenes. Developers only had to focus on the actual backend logic, instead of worrying about when and where messages had to be exchanged, or that there was a message bus at all.

While it is a struggle to build and maintain highly available, distributed web applications, in our opinion the benefits of utilizing AWS for cloud development far outweigh the drawbacks. We have implemented a few tools and processes that produce results in our company, and there are probably a lot more out there. We believe that in many scenarios, finding tools and processes that line up with your company’s development process can deliver that 80% of positive results.

Shameless Self Promotion

I am the Marketing & Communications Manager with Cycligent. I contacted Julian Simpson because I respect the community of developers that has grown around influential blogs such as this one, and I want to contribute to it. I also wanted to share our new cloud platform, which simplifies coding for the cloud. Visit our site to learn more, and to sign up for a 30-day free trial.

Toxic Repo

If you can’t dispose of toxic waste (say, by burning it or launching it into space using surplus ICBMs), then you probably need to contain it: stop innocents from stumbling across it, or stop the malicious from using it for malicious projects.

The same issues apply to your source tree.  If you have Amazon Web Services credentials checked into a project on GitHub, that’s a toxic repo.  You’ll want to contain it, to protect people from intentionally or unintentionally damaging the resources those credentials can access.

One of the problems of having your own toxic waste dump is that it’s very easy to add more waste to the pile.  So the repo with a private key checked in might easily gain an AWS credential, and a couple of months later, a raw database password.

Another is that sometimes, you might give the wrong people access.

What can you do about it?

  • Amazon’s IAM is incredibly useful for containing the damage inside Amazon itself.  Use things like that.
  • Be prepared to burn credentials if they are compromised.
  • Rotating any toxic credentials stored in a repo also helps.
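Before you can rotate anything, you have to find it.  A rough first pass is to grep the repo’s full history for the classic giveaway: AKIA-prefixed, 20-character AWS access key IDs.  This sketch runs the pattern against AWS’s published example key ID standing in for real `git log` output; it’s a crude filter, not a substitute for a dedicated secret scanner:

```shell
#!/bin/sh
# AWS access key IDs are "AKIA" followed by 16 uppercase letters or digits.
pattern='AKIA[0-9A-Z]{16}'

# Demo input using AWS's documented example key ID, standing in for
# real history output. In an actual repo you would run:
#   git log -p | grep -E 'AKIA[0-9A-Z]{16}'
echo 'aws_access_key_id = AKIAIOSFODNN7EXAMPLE' | grep -E "$pattern"
```

Remember that deleting the file in a later commit doesn’t help: the key is still in history, which is exactly why rotation, not removal, is the fix.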

Cleaning up some toxic waste yesterday was pretty good.  That’s one less dirty secret.

Happy 2014

2013 was busy.  It’s hard to work remotely with people who are literally on the other side of the planet.  “Remote” helps explain why: there’s no overlap in working hours, apart from what overlap I make myself.

To make things busier, we ended up buying a new Build Doctor HQ and moving from the country to the suburbs.  Moving from a cabin back to the spare room has its comforts.  Like plumbing.  There’s a lot of work to do on the HQ, but it’s nice to have a new long-term project.

This blog is a long-term project, too.  The last couple of years have seen it slide as I worked on other things and moved country.  It’s no longer sponsored, and pursuing sponsorship doesn’t work when I haven’t been posting.  I almost ported to Ghost, but decided to take the simplest option: moving it to a hosted platform and letting the content speak for itself.

A benefit of the move is saying goodbye to the www in the blog URL, which fixes a mistake made in 2007.  That, and never having to do another plugin update.

Now I just need to find something to write about.

Happy 2014.  Have a happy and productive year, wherever you are.


News, August 12

  • ReSharper 8 is out, making Visual Studio usable [link]
  • Also YouTrack 5; I’ve never had the pleasure of that particular issue tracker [link]
  • I’d love to go to FutureStack, New Relic’s user conference [link]
  • Heroku have announced a labs preview of their pipeline support.  At Neo we have several apps deployed on Heroku, so I road-tested it this morning.  It does what it says on the tin, and shows which commit went where.  That’s a challenge for some of the add-on providers who offer a similar service.  [link]