Here are Wednesday’s links from my Twitter stream.
- 06:24: Visualizing Build Processes « The Electric Cloud Blog http://blog.electric-cloud.com/2010/06/29/visualizing-build-processes/
CruiseControl was a fantastic brand for ThoughtWorks, so you can understand why they based the name of their new, commercially licensed Continuous Integration server on the old one. That didn’t work out so well. The response to the confusion between the two products has been interesting, however: ThoughtWorks have renamed and reskinned the product.
I reached out to some of the ThoughtWorks Studios people for comment. Here’s Chad Wathington:
We built Cruise with the idea of first class support for build and deployment pipelines. We’ve renamed Cruise to Go because we really want to take that idea to the next level, to emphasize continuous deployment, to emphasize “going live” with software, not just continuous integration. Beyond the name change, we’ve added support for environment management and we’ve revamped the user experience to fit our deployment pipeline metaphor more strongly. We think people are going to like it.
It’s interesting that the company that is so strongly associated with developers is focussing on deployment. Encouraging, too.
Here’s Jez Humble:
Cruise 2.0 is our biggest and most exciting release to date. Not only have we added more functionality than any other release, we’ve also completely re-implemented the UI (in JRuby on Rails) to make it much easier to manage your build and testing infrastructure for large organizations. The big features in Cruise 2.0 are:
- First-class support for environments, so you can deploy any version of any application to any environment and manage multiple services that share the same environment (e.g. integration testing with a SOA). You can also use this functionality to partition your build grid.
- Templates so you can define reusable workflows for pipelines – useful if you have multiple projects that use the same workflow, or for managing branches.
- The most powerful test analysis on the market. Split your tests into suites and run them on the build grid, and Cruise will not only automatically tell you which tests failed, but (if some tests have been failing for a while) also which check-in broke each test, and who was responsible. There’s no need for specialized test runners – just tell Cruise where to find the reports.
- The ability to trigger a CI build with any revision from version control. If a pipeline has multiple VCSs, you can even mix and match versions from them – another facility that no other tool offers.
Cruise now also kills the process tree on agents when you cancel a stage, lets you run pipelines on a timer (Cron syntax), and lets you specify environment variables when you trigger a pipeline.
Sounds like an improvement. This product has come a long way since they were working on the first incarnation of their commercial CI server. They had to pack a lot of features into 2008’s release of Cruise, because the game changed around 2006. As Jez comments, there are now some unique features in their product. I’ve got a backlog of reviews to do as long as my arm, but I’ll try and get something done.
Update: Corrected an ambiguity in the last paragraph. It sounded pretty harsh.
(image via CJ Sorg)
It’s funny; you can almost correlate my going-freelance announcement with the fall in posts on this blog. I can’t complain, though; I’m fully booked, on work that I’m good at, with clients who are doing things that seem to matter.
To stop the blog atrophying totally, I’m getting some help. The (soon-to-be) Mrs Build Doctor is helping run the admin side of the business, making sure we’re legally compliant and getting the bills paid. Her help is invaluable.
We’re also very lucky to have Kushal Pisavadia join us for the summer. Kushal is starting a degree in UX and Maths this September. He’ll be helping out with a few of my many unfinished side projects, as well as assisting with my onsite work. Kushal has already made a significant contribution to a project (more on that soon), for which I am very grateful.
EJ Ciramella has also been invaluable with his series of Maven posts, writing relevant and thoughtful pieces at precisely the time that I wasn’t able to. Thank you, EJ! I just hope I manage to share some thoughts from the day job, now that the day job is almost entirely relevant to the blog.
Silos are for Farmers, my QCon London talk, is online.
The conference season this year managed to dovetail nicely with the birth of The Build Doctor Limited. This was nice in that I was doing a conference talk at the same time that I started my own company; evil in that I didn’t rehearse as much as I might have. It turned out okay despite my nerves at the start. It was a gamble asking the audience to participate, but that seemed to pay off: there was a kind of dialogue, which I find more engaging.
(A guest post kindly written by EJ Ciramella)
This is the first big shift in thinking. Typical (obviously, not ALL) Ant projects work by syncing some massive amount of code from source control, CD-ing into some top level directory and then telling Ant to build just the module you plan on testing. What’s so bad about this, you may ask? Well, for one, you’re likely syncing large amounts of code that you’ll never run locally. You’re probably building up (and unit testing, right?) packages that don’t change or the rate of change is very low as well. Ideally, these packages and libraries would be built for the user already. Now look at this through, say, a webdev’s eyes. If your webdev group is responsible for things like CSS, HTML or JSP changes, why should they be concerned about building up your oodles-of-utils package? Or, if a unit test starts failing on them (you’re unit testing, right?), why should they have to dive in and figure out what’s missing or broken. In a perfect world, any tier of development could be substituted for another (how great would it be if everyone knew everything?). In in the real world, the one with interruptions and families and deadlines, that’s unrealistic (especially in larger companies).
So decommissioning a monolith a few modules at a time is the best thing to do, once you’ve decided to go the Maven route. There are two ways of doing this work. You can take the atomic, all-or-nothing approach, going directly from a monolith to a more modular code base in one fell swoop. If you can get sign-off on this, then it’s a wonderful thing, but I’ve had to restrain both myself and others from biting off too much. What I like to do is pull a few things out at once, maybe three to four modules. Of those four, let three be low-cycling libraries and one a high-cycling library. That way, people learn the new location of the parts that are combined to make your deployable unit. Think of it as an evolutionary process rather than a revolutionary one.
Having smaller bite-sized chunks is also a better way to get to know Maven. If you introduce people to a massive monolith with customizations all over the place and a dozen attached assemblies, they are going to poke at it with a stick and hate it quickly. Clearly seeing how a web application goes together, and how the resulting artifact is created, is much more digestible, and you’ll get fewer complaints about your Maven implementation.
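To make that concrete, here’s a minimal sketch of the parent POM for such a split. The module names and coordinates are hypothetical, just to show the shape:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>monolith-parent</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>pom</packaging>
  <modules>
    <!-- three low-cycling libraries pulled out of the monolith first -->
    <module>oodles-of-utils</module>
    <module>persistence</module>
    <module>mail</module>
    <!-- one high-cycling module, so people learn the new layout quickly -->
    <module>webapp</module>
  </modules>
</project>
```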
Another fear that emerges as people start considering modularization: with multiple deployable units, how does anyone know what is compatible with other internal code when the process isn’t always building the same thing all the time? That can be answered a few ways, but the simplest answer is that once a library is released (or otherwise frozen) for a deployable unit, that deployable unit need not upgrade its version of the library. If shared functionality in the library changes, then you will have to retest, but that begs the question: is your application code in the right module? Shouldn’t a shared module stay somewhat generic, with each deployable unit extending or implementing those features instead of baking that logic in at such a low level? I’ve found that over time, if you make the library a separate module from the larger deployable-unit builds, the code starts migrating in the correct direction rather than wherever it’s easiest to add it (no more massive search-and-replaces in the code base via an IDE).
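As a sketch of what that freeze looks like in a POM (coordinates hypothetical): once the shared library is released, the deployable unit pins that version and only moves off it deliberately:

```xml
<dependency>
  <groupId>com.example</groupId>
  <artifactId>oodles-of-utils</artifactId>
  <!-- a released, immutable version: no snapshot churn from the shared library -->
  <version>1.2</version>
</dependency>
```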
Once everything is pulled out, there may be confusion on the developers’ part about which modules should be built in which order. This, to me, is an educational thing. At any point, a developer can run “mvn dependency:tree” (sample output below) and see:
- What dependencies make up their project
- Where those dependencies were resolved from
- What order they need to be built in
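For illustration, with hypothetical artifacts, a run might look like this; a module’s position in the tree tells you what gets resolved (and so what must be built) before it:

```
$ mvn dependency:tree
[INFO] com.example:webapp:war:1.0-SNAPSHOT
[INFO] +- com.example:oodles-of-utils:jar:1.2:compile
[INFO] |  \- commons-lang:commons-lang:jar:2.4:compile
[INFO] \- junit:junit:jar:4.8.1:test
```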
When moving from a world where people operate from a very high-level directory and build everything, to a world where every module is lightweight and each move is a tactical one, people often don’t know how to get that app server up or that daemon running locally. With every application as its own standalone build, people just need to sync what they want to run and rely on a repository manager for the rest (app server bits, database bits, etc.).
A repository manager is part of the Maven 2 process, end of story. Trying to use a corporate file share, or keeping everyone working in offline mode, is just not the Maven way. Using a repository manager also helps to minimize the configuration people have to manage locally in their settings.xml, as well as helping to enforce the Maven way of life (banning redeploys, pushing releases to one repository and snapshots to another, not deleting artifacts, etc.). With one of the big three (Nexus, Archiva, Artifactory), you simply have a grouped repository that everyone points at. That “grouped” repository is a representation of all the other repositories your company will use. This way, you can have something like this:
```xml
<settings>
  <mirrors>
    <mirror>
      <!-- route every repository request through the repository manager -->
      <id>nexus-test</id>
      <mirrorOf>*</mirrorOf>
      <url>http://server/nexus/url</url>
    </mirror>
  </mirrors>
  <profiles>
    <profile>
      <id>nexus-test</id>
      <!-- placeholder entries: the mirror above intercepts the actual URLs -->
      <repositories>
        <repository>
          <id>central</id>
          <url>http://central</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>true</enabled></snapshots>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>central</id>
          <url>http://central</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>true</enabled></snapshots>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>nexus-test</activeProfile>
  </activeProfiles>
</settings>
```
And that’s it. This one setting covers every remote repository we use, from Codehaus to Repo1. If you’re really ambitious (although Sonatype doesn’t recommend it), you can even tidy up the URL so that if you switch repository managers, devs don’t need to touch their settings.xml file. While this configuration can be rolled into the MAVEN_HOME/conf/settings.xml file, I personally like to keep my configuration Maven-version-independent by putting it in the settings.xml in my user home directory.
Everyone has one little dark corner of their build world. Usually it was some quick hack to make things work through Ant: possibly a custom task, maybe some shell-out, or something crazier. These little dark corners should have light shed on them; in fact, flood them with light. Instead of letting this be a choking point, start by looking at the common repositories for plug-ins that do what you’re looking for. There are very few problems someone else hasn’t already solved, and even if you searched a while back, a solution may exist now that didn’t then. In the past, I’ve done exhaustive searches and found no plug-in that suited my needs, only to find a few months later that some plug-in had changed to do exactly what I was looking for, or that someone had written one and contributed it back to google/codehaus/repo1. If that route fails for you, just build a Maven 2 plug-in and deploy it to your local repository. You can even have a transition period where Maven calls Ant to do just this little bit, then move the Ant tasks inside of Maven 2, then finally migrate to a Maven 2 plug-in. Don’t use the argument that “you should be writing code, not a Maven 2 plug-in”. Do you want your system to be robust, and clear about successes and failures? Then write the plug-in. You can start quickly by typing “mvn archetype:generate” and selecting “maven-archetype-mojo” (option 12 as of this writing).
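That transition period is what the maven-antrun-plugin is for. Here’s a minimal sketch, assuming a legacy build.xml (with a hypothetical target called legacy-package) still sits next to the POM:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <!-- keep running the old Ant logic during the package phase, for now -->
      <phase>package</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <tasks>
          <ant antfile="build.xml" target="legacy-package"/>
        </tasks>
      </configuration>
    </execution>
  </executions>
</plugin>
```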
The original CruiseControl’s Maven integration was very poor (I’d say the open-source version is still pretty bad): it doesn’t understand the different life-cycles or the output from each. Hudson understands the life-cycles, and will inherently do things depending on what it sees in the build output. So far in my travels and exploration of various CI servers and tools, Hudson is head and shoulders above the rest with regard to Maven integration. Have site output you’d like to share? Hudson can publish that quickly, with a link off of your project’s page. Have artifacts you’d like made available to another downstream job (or later process)? Hudson picks up on those artifacts and tucks them away (maybe to your liking, maybe not). All other products need these various things called out: you have to tell them “look here for this tar.gz file” rather than them knowing from what Maven has logged.
Here’s another big disconnect: you can’t just fling a new version of Maven down like you could with Ant. With Ant, you could generally read the release notes, install, add your custom tasks and build. With Maven 2, for the most part, you’re protected from a lot of things, but you also need to watch for plug-in versions, core changes to dependency resolution, and so on. Some changes to Maven (2.0.5 to 2.0.6, for example) required users to review their dependencies. I personally sleep better installing locally and building, then diffing against the artifacts generated by the build server.
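A sketch of that sanity check, with hypothetical file names: build locally with the candidate Maven version, then compare your artifact’s contents against the one the build server produced:

```
$ mvn clean package
$ jar tf target/webapp-1.0-SNAPSHOT.war | sort > local-contents.txt
# ci-contents.txt is a listing previously captured from the build server’s artifact
$ diff ci-contents.txt local-contents.txt
```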
Well, that question is best answered by you, dear reader. If you can modularize your codebase, you’ll see the biggest improvement, both in development time (better throughput) and in stability: no more broken unit tests that, when fixed, reveal more broken unit tests, ultimately convincing all the developers to turn them off. If you can’t (or don’t see any benefit), I’d submit that your development team isn’t mature enough to realize the many benefits of a highly modular codebase. In the end, you have to choose what gets product out the door.
(image via ePublicist)
I felt I had to attend the next talk in the main lecture room, simply to find out what its title meant. Patrick Debois and Julian Simpson’s presentation was entitled Hudson hit my Puppet with a Cucumber …
I think that’s some of our best work. (via UKUUG June newsletter)