Monthly Archives: July 2009

Cruise Interview with ThoughtWorks Studios

TW Studios peeps Chris Read and Andy Yates graciously agreed to have a chat with me about Cruise last Friday. We talked about some of the features in Cruise and why they matter, the difference between Cruise and your average development-focussed CI server, and shot the breeze about the maturity of our industry.

It went pretty well, considering just how shattered I was. I managed to edit out the bit where I set off the timer that they use for interviews. Part II on Tuesday, August 4.



Cruise != CruiseControl

Newsflash: there are three versions of CruiseControl: the original Java CruiseControl, CruiseControl.NET, and CruiseControl.rb.

If someone initiates a discussion about CruiseControl, it’s always smart to clarify exactly which one they mean.

To make things worse, all of the above can be referred to as ‘Cruise’. Cruise is ThoughtWorks Studios’ CI server, and it’s nothing to do with any of the versions of CruiseControl. I had a snoop around when they released version 1. There was one component (the AntBuilder) that they legally used from the open source CruiseControl (the Java one). That’s now gone. It’s a totally different tool.

There had been plans to take the open source project private (it’s been discussed for years), but those were shelved in favour of a new tool with different goals. Say what you like about Cruise; just don’t go thinking that it’s any of the open source projects with a new license and some bling.



Huge Discounts on Cruise from ThoughtWorks

ThoughtWorks Studios released version 1.3 of Cruise, their Continuous Integration product, last week. You can get a huge discount on it via this blog (keep reading for more details).

What’s new?

  • They’ve added the ability to run multiple agents on your enormous old-school build server
  • They’ve shortened the feedback loop with custom email notifications based on your criteria
  • You can now see the difference between two pipelines (they now create a new pipeline instance for each trigger)
  • Wildcard artifact uploads
  • Git submodules and branches support
  • Support for SUSE Linux
  • Ability to run cleanup tasks on agents when a stage is cancelled
  • Warnings on low disk space
  • Cruise Professional lets you group pipelines together and control who has view and operate access to these groups, as well as allowing multiple source control systems of different types within a single pipeline

I’ll be doing a more formal review and perhaps a less formal interview with some of the ThoughtWorks Studios guys in London. In the meantime, how about a discount? When I visited ThoughtWorks recently I asked them what they would do for readers of the blog. I was expecting to get an offer of some swag to give away (every ThoughtWorks office has a stash of swag). I even asked for some copies of the ThoughtWorks Anthology to give away. But they came back with a 35% discount for Cruise Professional.

If you want to get a huge discount on Cruise, you need to send an email to studios@thoughtworks.com with the promotional code TWSBD in the subject line. Do it now, because this promotion is limited to two weeks. You can download an evaluation copy of Cruise here. If you’ve contacted them with the promotional code, you’ll be eligible to buy a copy of Cruise Professional at 35% off list price – even after the two-week period is over.

I’m very happy to negotiate a deal for readers of this blog. Not every product will be a good fit for your organisation, and I’d rather shut this blog down than encourage people to buy software that wasn’t helping them – we see too much bad software every day.

Having said that, some of the features of Cruise are genuinely helpful. What I like are the artifact repository and the deployment pipeline. We all know that deployment to a test environment should be at the touch of a button. Cruise allows you to do this by tracking every build through its stages, all the way to production. If you buy Cruise Professional you can assign rights to the stages so that your PM can’t kick off a production deployment. Useful.

You can also test builds on multiple platforms, and dynamically distribute work across the build grid. In general I think the approach is pretty mature, and ThoughtWorks Studios are really gunning for the “last mile” of development – where investment in automation is rare, and can have a great return.

Notes and conflicts of interest:

  • This offer is valid for two weeks (expires Wednesday July 29, 2009)
  • I used to work for ThoughtWorks
  • I was hired by some of the Studios people
  • I hired some other Studios peeps
  • They put me on their referral programme to get you this price. I will eventually get a small commission payment if you place an order.
  • More motivating would have been the spending of Roy’s air miles on a trip to Peet’s Coffee for my better half and me. Oh well. Jez has promised me a Monmouth in August.



New Continuous Integration system

I got an email a few weeks ago from Mr Hericus, inviting me to find out more about Zed Builds and Bugs Manager.

In his words:

We’re relatively new to the market (about 1 year out at this point), and our focus is total team integration. It’s not enough to simply integrate your software on a continuous basis. You have to work in your bugs, and your team communication as well. That is the focus of Zed Builds and Bugs Manager. We provide Continuous Integration, Bug and Task tracking, Discussion Forums, and Wiki collaboration all in the same web-based GUI and sharing the same back-end database. This means that each of these systems works with the same data and can leverage that to provide a truly integrated environment.

Sounds interesting. They are going for the integrated tools market. Launching yet another Continuous Integration server isn’t going to earn anybody brownie points. It’ll be interesting to see where they end up in the market. They’ll be competing against some major players, but they might be able to convert some of the teams that still don’t do CI (yes, they still exist in droves).

Furthermore:

Zed Builds and Bugs works with any size team, and scales from small projects to multiple large projects easily. It is also available to small teams for no charge, so that even if you are a team of one and just starting out or working on a private project that will someday be a world-wide best seller you can still have access to a great tool to help you organize your development and gain efficiencies in your process.

I already owe another firm a review, but I’ll be having a proper look and reporting back when I can. Until then, welcome to the market, team Zed!


Story: Fan or Die

David Goh contributed this beauty:

Back in 1994 or so at my first job, we were building some train control system software. We had a series of mainframes that we were using to test the software on, which all lived in a small room with no extra air conditioning. The only point of transmission between the mainframes and the development PC network was a single ancient 286 with its case off.
After compiling your software, you submitted it into a network directory, where the 286 would find it and start copying it to the mainframes. During summer, or indeed on any slightly warm day, you then ran from your desk, down the corridor, into the overheated “machine room”, picked up the handy piece of cardboard, and waved it vigorously at the CPU until your transmission was complete. Failure to arrive at the 286 and start fanning within seconds of submitting the copy job meant your transmission would fail as the poor 286 would overheat and die.
Oddly enough, younger and healthier developers tended to have more success at getting their builds sent to the mainframes and tested. 🙂


Story: Wolf++ fixes the Deployment Process

The last of the top three stories for the Giveaway was this one from Wolf++. Enjoy!

My last gig I was hired on to be the build guy. On my first day I sat shotgun to their deployment process. The manual process was as follows:

1. Log on to the ‘build box’
2. Get latest
3. Open Visual Studio and compile the application
4. FTP the resulting app to a staging area on our production webserver
5. Put the website offline
6. Run any new SQL files against the production database (hopefully you guessed the execution order correctly)
7. Copy the app into place
8. Put the website back online
9. Hope nothing had broken.

I had quite a task ahead of me. It turned out not only was I the build guy, I also had to manage the development and QA servers, help the QA staff with even the smallest technical hurdles (including demonstrating many times how to use FTP, etc.), help developers with source control, teach developers that ‘it works on my box’ is not an excuse, and so on. We had one big shared database with a number of different applications that had no deployment schedule other than ‘yesterday’ or ‘as soon as it’s ready’. Quick fixes went straight to production!

The first thing I tackled was how to divine which SQL scripts were needed with which application. I created a file in each project in which developers would list the source control locations of the SQL DDL files that needed to be executed, in a simple first-to-last ordering. Along with this I created a tool to read and package the files, plus a script to execute the package against each environment. This became a part of the automated build process. Later I even created an editor with syntax highlighting, source control integration, and validation to easily construct this file.

Since all the applications shared a single database (a flaw they were going to fix someday), I finally convinced them to deploy all the applications together. So any changes to the database would be tested in all the apps at the same time. Builds would first be tested on the QA server and only fully vetted builds went into production.

I brought in FinalBuilder, which I used to construct build scripts and implement continuous builds. Quick build-break feedback is a must.

Prior to this job I had never had to work on live websites. I can say for certain now that working on live websites ratchets up the pressure. The desire to improve the build and deployment process correlates directly with the desire not to break services that a large global customer base uses.


What is a Maven POM?

Why am I asking about Maven POMs? I do know, honest. But I often read blog posts or articles about Maven and wonder about the language being used. There are some very domain-specific terms in use here, so I decided to offer some definitions for people starting out with Maven.

Today, we start with POMs. I asked Maurizio Pillitu of SourceSense to define the term for me.

The Project Object Model is a description of your project; it is exactly a model, as stated in its name: the breakdown of all the elements of your application (dependencies, repositories, SCM connectors and so on). In a POM you never implement the behavior, you just configure it, using a set of predefined conventions (like src/main/java for sources, src/site for documentation, and so on). The POM acts as a template to finally generate an *Effective POM*, which expresses all the executions of all the plugins that must take place during the build.

So a POM tells you about every major entity in the project: everything Maven needs to know about in order to build it. I’d summarise a POM as a kind of manifest or packing slip.

Cue man in blue coat: “Yeah, it’s all here. 5 source trees, two version control systems and a bunch of Selenium tests. We ought to have that built by Wednesday.”
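
To make that concrete, here’s a rough sketch of a minimal POM. The group, artifact and dependency details below are invented for illustration; the point is that the file only describes the project, it never scripts the build:

  <project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <!-- who this project is: hypothetical coordinates -->
    <groupId>com.example</groupId>
    <artifactId>packing-slip-demo</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>
    <!-- what it needs: Maven resolves these from a repository, you never script the download -->
    <dependencies>
      <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.4</version>
        <scope>test</scope>
      </dependency>
    </dependencies>
    <!-- note what's missing: no compile steps, no source paths; src/main/java is assumed by convention -->
  </project>

Run mvn help:effective-pom against something like this and you’ll see the Effective POM that Maurizio mentions, with all of the inherited defaults spelled out.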

Maurizio continues:

The POM is one of the biggest evolutions that Maven contributed to the build tooling universe; in the past we used to implement the behavior of the build; now we describe the prerequisites, assuming that we know what the behavior is, tweaking it a little bit with some configuration.

Which is a really interesting point. If you think Ant or one of its ports is a declarative tool, wait until you see a Maven POM file.
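
To see the difference, compare the POM above with a hand-rolled Ant compile target, where the behaviour is spelled out step by step (the target names, directories and classpath reference below are made up for the example):

  <target name="compile" depends="init">
    <!-- each step is stated explicitly: make the output directory, then compile into it -->
    <mkdir dir="build/classes"/>
    <javac srcdir="src" destdir="build/classes" classpathref="compile.classpath"/>
  </target>

The Maven equivalent is just mvn compile: the behaviour lives in the compiler plugin, and the POM only feeds it configuration.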

Shouts due to Maurizio and also to John Smart, who did a great Maven talk on Monday night. The M2Eclipse tool will save you from the horror of XML. It’s a full visual editor for POM files (and more).

Image by Tavallai



Story: Daniel’s Continuous Integration System

Daniel Spiewak gave us this great story for the Atlassian Giveaway. He’s earned a t-shirt. Have a good weekend, wherever you are.

It was a dark and stormy night. No, actually it was a pleasant summer day, but daylight lacks a certain dramatic flair which is so necessary for a good story, especially a story about build systems.

I was working as the semi-lead developer for a mid-sized project run out of London, UK. My job was primarily to work on the Java clone of the Cocoa client application. Through a very clever and dangerous bit of hackery, the Cocoa and Java clients shared a single, Java-based backend which communicated with the server via XML-RPC. Because of the project’s architecture, there were a number of inter-dependent sub-projects. As I was working on a clone of the Cocoa client, it was often necessary for me to build a new copy of the client after each new change. However, this was, in and of itself, a non-trivial undertaking. Once you added the building of the other sub-projects, both individually and as dependencies, my days started to look more and more like the “dark and stormy” variety.

Now, each project (with the exception of the Cocoa frontend) had an Ant build script which I had carefully crafted. These build scripts would invoke each other as need be, meaning that I could build a copy of my Java clone by simply invoking its build and allowing Ant to handle the rest. This solved a lot of my dependency headaches, but building every single project was still a tedious undertaking. Thus, I built another layer of abstraction above Ant, consisting primarily of Ruby scripts hacked together on top of Ant. The idea was that I could invoke my master build script, passing a list of project descriptors, and this build script would determine the optimal way to build each project and its dependencies. I was even able to rig this script with cron so that it automatically built a new version of each sub-project as necessary.

Unfortunately, this build script worked a little too well. My boss got wind of it and decided that it should be put onto the main development servers as a sort of continuous integration solution. This sounded like a good idea at the time, but it ultimately led to far more trouble than it was worth. I got sucked into the position of permanent build system maintainer; and, given the hacky nature of the system’s architecture, it ended up being quite the position. As more sub-projects were added and more flexibility was needed, I actually had to rewrite the entire system from scratch more than once. Looking back, I’m actually astonished by the sheer number of hours I spent cursing those scripts into behaving.

I was probably on my third or fourth rewrite before I realized the idiocy of what I was doing. I had literally created a full continuous integration build tool from scratch (complete with web admin system and XML-RPC API) without ever considering the alternatives. It only took me a few minutes of research to realize that Hudson, Cruise Control, Bamboo, and really any CI system would solve exactly the same problems without any need for hacky scripts or unnecessary effort. It took my boss a little while longer to come around, but eventually he too saw the wisdom of relying on a pre-existing solution rather than rolling our own convoluted hack job.

The really amazing part of this story is how I didn’t even see what I was doing until very late in the process. It started out as just an innocent collection of scripts to aid my own development process. Each step I took toward hacky maintenance hell was so gradual, so subtle in form, that I completely failed to see where I was headed until I was already there. And while my build system didn’t actually require a defunct Pentium-series processor to run, it certainly qualifies as a bizarre, polyglot, home-grown build system which should never have been allowed to fester.


Maven 2.2.0 is out

Lots of fixes and some new features. This is the week of new developer toys, isn’t it? It’s not over yet, either. My spies tell me something else is on the way.

