Monthly Archives: December 2010

Patterns without Developers: Spa conference workshop

I did a workshop at the fantastic Spa conference this year. The purpose was to try to gather patterns in the software development process that didn’t come out of the GoF book or PoEAA.

This is the output from those sessions, so late that I will at least do time in purgatory, if not someplace else. The session went OK. The slides didn’t match the handouts and I managed to invite some heckling, but I was quite pleased to see everyone get down to work and start making patterns. The paper forms I got back didn’t capture as much detail as my shepherd and I had anticipated, so I’m giving some of them names and embellishing where appropriate. My comments follow each one.

Name: Embedded Test Team

Definition: Separate teams mean handovers, which always impede progress. Instead, embed testers within the development team.

Big teams encourage silos and specialisation. Specialists are important, but in this pattern you put them right where they are needed.


Name: Blue-green deployments

Definition: On deployment, deploy to the slave server of a master/slave pair, then switch traffic routing.

Intent: Keep production live at all times. No downtime on release.

The name of this pattern comes from Continuous Delivery [ Get the eBook *]. It really depends on patterns like Encapsulate Table With View, or a NoSQL database, to ensure that you can deploy two application versions at once without one of them throwing database-related errors. The most successful use of this was on a project where the client insisted that we use stored procedures to access the database. While it was a big productivity hit for the developers, there were only seconds of downtime when we released a new version. [more patterns in Refactoring Databases (eBook) *]
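
For what it’s worth, here’s a minimal sketch of the switch-over, assuming a blue pool serving traffic, an idle green pool, and an nginx-style proxy that includes its upstream list from a symlinked file. The hosts, paths and health endpoint are all made up:

#!/bin/bash
set -e
# Release to the idle ("green") pool while "blue" carries on serving traffic.
rsync -a build/ green.internal:/srv/app/                 # hypothetical idle-pool host
curl -f -s http://green.internal/health > /dev/null      # smoke test the new version
# Flip the proxy to the green pool with an atomic rename, then reload it.
ln -sfn upstream-green.conf /etc/proxy/upstream.conf.new
mv -T /etc/proxy/upstream.conf.new /etc/proxy/upstream.conf
/etc/init.d/nginx reload

Rolling back is the same flip in the other direction, which is most of the appeal.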


Name: Cookie Cutter Servers

Definition: Deploy servers as images or automatically built machines, rather than manual or evolved installations.

Intent: Keep consistency between different servers both in production and in test.

If you look at deployment-related technical jobs, many of them would have you trying to enforce consistency across different environments, all of which are maintained by hand. This is nuts. Automation is your friend.
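
As a sketch of what “automatically built” means in practice, assuming Debian-style packages and a build server to pull releases from (all names invented):

#!/bin/bash
# One script builds every web node from a fresh base image; nothing is ever
# patched by hand, so test and production boxes stay identical.
set -e
apt-get update -q
apt-get install -y -q openjdk-6-jre nginx                # package list is an example
useradd --system appuser 2>/dev/null || true
install -d -o appuser /srv/app
scp build.internal:/releases/app-current.tar.gz /tmp/    # hypothetical build server
tar -xzf /tmp/app-current.tar.gz -C /srv/app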


Name: Simplicator (Freeman / Pryce)

Definition: Define your own API that can be implemented by a stub for testing or by an adapter.

Intent: Decouple service consumers from providers to make testing more deterministic.

This is published in Growing Object-Oriented Software, Guided By Tests [eBook *].


Name: A/B Deployment

Definition: Continuous deployment to a limited subset.

Intent: Gauge whether the new version is an improvement on the old before committing all users.

Applicability: Requires the ability to operate multiple versions in parallel (interface versioning!).

Timothy Fitz covered this, but I’m stealing the name from Split Testing until I find a better candidate.
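
The routing decision is the interesting bit: you need a stable split so the same user always sees the same version. A sketch, with invented backend URLs:

#!/bin/bash
# Send roughly 10% of users to the new version, everyone else to the old one.
USER_ID="$1"
BUCKET=$(( $(printf '%s' "$USER_ID" | cksum | cut -d' ' -f1) % 100 ))
if [ "$BUCKET" -lt 10 ]; then
    BACKEND="http://app-v2.internal"     # the guinea pigs
else
    BACKEND="http://app-v1.internal"     # everyone else
fi
echo "$USER_ID -> $BACKEND"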


Name: Virtualisation

Definition: Combine VMs to simulate systems in the operational environment

Intent: Create a scale model of the production environment for functional testing.

Motivation: An operational test environment is needed for end-to-end testing with all external systems that the software has to interface to.

Applicability: Operational procedures, testing of response to various events.

I know quite a few people who work as developers at banks. It takes six months to get a server approved and delivered to the point that you can use it, but a mere matter of weeks to get a virtual machine. That’s reason enough on its own, let alone the ability to manage change and develop against realistic hardware. This becomes even more compelling if you run Linux on server and desktop and can deliver virtualised nodes without the added hoops of licensing. You might get a VM in days if you carry on like that.


Name: Time Slicing

Definition: Bring down test environments when their testing time-slot is past.

Intent: Reuse test hardware for maximum utilisation of the investment

Applicability: Testing requirements are sporadic and can be resource-levelled over time.

Usage: Relies on virtualisation.

The group that made this and the previous pattern were on fire. Another benefit of virtualisation is stopping the horrid project delays because someone has booked out a test environment for months. You’d think that a major constraint like that would be part of the project planning process, but I find that most organisations prefer to have project managers duke it out, fight-club style.
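
With virtualisation in place, the time-slicing itself can be as dumb as a pair of cron entries on the host; the domain name and the booking slot here are made up:

# crontab on the virtualisation host
0 18 * * 1-5  virsh start perf-test-env        # hand the environment over at 18:00
0 23 * * 1-5  virsh shutdown perf-test-env     # reclaim the hardware at 23:00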


Name: Stubs

Definition: Ensure a system is tested against stubs and other systems in the environment before it is tested in a full pseudo-production environment.

Stubbing out external services can rock. It’s all about feedback. If your code fails talking to the stubbed services, it’s not ready for prime time. Why wait until you manage to get your app deployed (a battle in its own right in big environments)?
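
A stub doesn’t need to be clever to be useful. This sketch fakes an external payment service with canned JSON over Python’s built-in HTTP server (era-appropriate Python 2; the paths and port are invented):

#!/bin/bash
# Serve canned responses for the payment service the app normally talks to.
mkdir -p stub/payments
echo '{"status":"AUTHORISED"}' > stub/payments/authorise
(cd stub && python -m SimpleHTTPServer 8081) &
# Point the app at http://localhost:8081/payments/authorise and test away.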


Name: Atomic deployment

Definition: Ensure that your deployment is transactional: either everything gets deployed, or nothing.

Intent: Avoid partial deployments: either everything that needs to go out goes out, or nothing does.

Harder than you’d think to get going (can you really roll back that database change? what’s the risk of doing that?) but worth the effort. Artifacts of a failed deployment can cause things to break in ways that are difficult to diagnose.
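
For the application files, the classic low-tech trick is to make the switch a single atomic rename, sketched below with invented paths; the database, as noted, is the hard part and isn’t covered here:

#!/bin/bash
set -e
RELEASE=/srv/app/releases/$(date +%Y%m%d%H%M%S)
cp -r build/ "$RELEASE"                         # if this fails, "current" is untouched
ln -sfn "$RELEASE" /srv/app/current.new
mv -T /srv/app/current.new /srv/app/current     # rename is atomic: all or nothing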


Name: Configuration Repository

Definition: Eliminate embedded service identifiers, addresses, sizings.

Intent: Abstract parameters out to a configuration service so that:
– It’s easier to manage
– The identical software can be deployed to every environment without adaptation.

This pattern has been talked about for a very long time but not often done. Chris and Tom had a go with Escape, and you could probably do a decent job with CouchDB. Not having to deploy configuration to each node: priceless.
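
A sketch of the “identical software everywhere” half of the intent: the only thing a node knows is which environment it is in, and everything else comes from the configuration service at start-up (the CouchDB-style URL and paths are invented):

#!/bin/bash
set -e
ENVIRONMENT=${DEPLOY_ENV:-production}               # the one value that differs per node
curl -f -s "http://config.internal:5984/app-config/$ENVIRONMENT" \
    -o /etc/myapp/config.json                       # service addresses, sizings, etc.
exec /srv/app/current/start.sh                      # hypothetical start script reads the file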


Name: Synchronised release schedules

Definition: not given

I’m not sure what this was in aid of. If you have dependencies between different applications then you should be managing releases at the programme level. Sadly, many people don’t do that.


Name: I know what you did last deployment

Definition: not given

Intent:
– Get into a known state
– Automate what you do = consistency
– Version control is the key! Both the deployed app and the deployment script.

Pattern:
1. Mirror production
2. Automate deploy AND rollback
3. Automate tests against the deployment

The group that did this struggled with trying to fit many patterns into one, I think. The key takeaways for me are the use of automation and version control, which shouldn’t surprise anyone – I think they were trying to address the disparity between the care we put into business code and the way we treat deployment. Only recently have we seen anything like a Joel Test for this kind of thing.

* If you buy the ebooks via these links (and frankly, in the year of the iPad, who’s buying dead tree media?), then commissions go to my Kaffeine and Taylor Street Baristas habit. That’s right, you’re financing my drug dependency.


In the brain of Patrick Debois: London, January 27

Patrick Debois kicked off the first DevOpsDays conference (and in doing so, came up with the name). He’s coming over from Belgium in January to talk about DevOps at SkillsMatter. Patrick is a master at making entertaining presentations, and a nice guy to boot. I highly recommend registering for the talk now, regardless of your job title. There will be drinks in Clerkenwell after.


Links for 2010-12-14

Here are Tuesday’s links from My Twitter Stream.

Go (the ThoughtWorks one): Interview with Jez Humble

In response to a recent article, I spoke to Jez Humble about Go:

Me: Who are you, and what do you do for Go?
Jez: I wear a couple of hats: I am product manager for Go, ThoughtWorks Studios’ CI and release management platform, and I also talk a lot about how to deliver software. I recently co-authored a book called Continuous Delivery with Dave Farley that has been very well received.

Me: What happened to Cruise? Why the name change?
Jez: Well, when people heard “Cruise” they almost always thought “CruiseControl”. But Go is a different beast – it’s built from the ground up to be a platform for continuous integration and release management that provides visibility on the production readiness of the system to everyone involved in delivery – not just developers, but testers, operations people, managers. With the release of Go 2 it also scales up to even the biggest organizations, unlike Cruise, which was essentially a workgroup level product. For example, Go 2.1, which we just released, allows you to provide CI as a service – you can control who can access which boxes at the server level, but projects can be granted administration rights over their own CI configurations, even if they can’t trigger or even view other projects’ builds.

Me: Reading the Go website, it’s clear that you’re going after deployment in a big way. For example, you need to read up to see that it does Continuous Integration. The Build Pipeline has been renamed to Deployment Pipeline. Is this a deliberate strategy to deprecate Continuous Integration and promote the newer meme of Continuous Delivery, and are you leaving the market for development-only tools alone?

Jez: Well, I should say that we always called it the deployment pipeline, both in the book and in Go. Of course “build pipeline” was always a commonly-used variant within ThoughtWorks. But yes, you’re right that we are trying to focus holistically on the problem of delivering software, not just on the development space. So continuous integration is still important, but – as the DevOps movement correctly asserts – unless you work to change the rest of the organization, and in particular the relationships between development, testing, and operations, you’re not actually going to see the results that you want to. In most places, the main delivery risk isn’t in getting the software dev complete, it’s in the “last mile” from dev complete to release.

I wouldn’t say we’re leaving the development space alone: we have Go Community Edition, which is free to download and use, and offers some really powerful tools that none of the other free products have. For example there is built-in test reporting that lets you throw a bunch of automated tests at the build grid and tells you which ones are currently broken, which check-in broke each one, and who was responsible for that check-in – i.e. what’s broken and who broke it. We’re going to continue adding stuff to the Community Edition to keep it the best possible tool for developers, based on our own experiences dog-fooding it – we upgrade our own Go system with every good build.

Me: Another vendor made the point that their competing product was able to accommodate many kinds of project, while Cruise is more prescriptive. Is he correct? And which of accommodating or prescribing would be a feature, in any case?

Jez: In every tool there’s a trade-off. So while “prescriptive” vs “flexible” sounds like an obvious win-lose, there is a cost to optimising for flexibility and not making any prescriptions. Go has a unique model which is optimized for modeling your organization’s value stream from check-in to release, so you can visualize, trace, and control the flow of builds through that value stream. We excel at that, and we don’t really care what your value stream looks like – in that respect, Go is extremely flexible, and you can have manual steps where you haven’t automated stuff yet. But it gives you a unique and incredibly powerful view into your delivery process that really facilitates collaboration between dev, testers and ops (see this video from the people at MoneySupermarket), and makes it much easier to find out where the bottlenecks are in your process.

The classic process-chaining model that the other tools follow is very flexible, but it’s hopeless at getting any idea of the bigger picture of what’s going on in your organization, or seeing where a particular build went, or tracing back from a particular deployment back along the process it came through and who touched it. As a result of its model, Go is somewhat opinionated about how you do things. For example it expects you to promote binaries rather than source code. But I believe that’s a good thing – the tool should make doing the right thing easy. In practice you can model any sensible process with Go, and once people grok the model, they invariably love it.

Me: What’s your strategy for 2011? Will you be attempting further integrations with Twist and Mingle, or adding any new tools?

Jez: We don’t plan to launch any new tools – our hands are pretty full with the three we have. But yes, we will be working on tighter integration between them. Go and Mingle are both becoming OpenSocial enabled. Go 2.1, which just came out, has a pipeline widget that can be hooked into Mingle 3.3, which is just about to come out, and in Go 2.2 you’ll be able to see the Mingle card activity – fixed bugs, completed features – between two deployments.

Go is also now far enough ahead of the curve that in 2011 we’ll be pushing out some oft-requested features that aren’t differentiators, such as clicky admin for UI, support for more version control systems via a plug-in API, and various other bits of integration with the wider tool ecosystem. We also have a bunch of other, more innovative stuff planned that I am going to keep secret for now – we intend to retain plenty of clear blue water between ourselves and the competition.

Me: Now that ThoughtWorks uses Go in the field, do you get many feature requests from ThoughtWorks developers?

Jez: ThoughtWorks developers are our harshest critics – far more so than our external customers. Once you’ve developed a skin as thick as mine this is an invaluable asset, because you know if you can make them happy, you’re going to really delight your other customers. We’ve made a sustained effort to engage with ThoughtWorks projects from the beginning of Go’s development, and it’s really paid us dividends. We are running on huge, distributed projects with over 100 developers and enormous build grids, and a lot of our requirements – functional and non-functional – come from them, because they’re at the bleeding edge of what’s possible with Agile and Lean methodologies.

Me: Finally, can you name one feature of Go that you believe to be unique?
Jez: We have two!

First, Go is still the only product that fully supports deployment pipelines. These let you model your delivery process from check-in to release, and provide visibility and traceability into the movement of inventory, in the form of builds, through your build, deploy, test and release process. They also give team members control over the process, allowing them to self-serve deployments into the environments of their choice.

Second, our test analysis stuff. Throw a bunch of tests at the build grid, and Go will collect the results, analyze them, and tell you exactly which tests are currently broken across the whole suite, which check-in broke which tests, and who was responsible for the check-ins. In 2.1, you also get the stack trace and the check-in comments at the click of a button. This is a game changer with large suites of automated tests, and nobody else does it. It supports any tool that outputs JUnit-format test result files, including JUnit, NUnit, Ruby’s test/unit, and Twist. I’ve attached a screenshot.

[Screenshot: ThoughtWorks Go]

Sounds like they’ve come a long way from CruiseControl. I’m talking to another vendor soon.


Eating Dogfood: Team City

One of the things I first noticed about Team City is how thoughtful the features were: you can see that a build has already failed and stop it, so as not to waste bandwidth. It seemed that they were actively using the product as they developed it.

I spoke to Yegor Yarko in the wake of the comments made by Electric Cloud, and he had this interesting comment to make (entirely his opinion and not that of JetBrains, who he works for):

JetBrains is a geeks company and we create products that we use ourselves. This makes us feel the features (as well as product’s strong and weak points). And we strive to improve the product measuring this by our own and our user’s experience. We also try to be open to our users: public issue tracker, developers in feedback loop, early access builds, etc.

We do feel that developers need to get smarter information from the tool and we bring it to them. We do feel that the CI process should be integrated into daily developer’s work: thus the IDE integration. We need the thing to be easily (re)configurable, so here is an administration UI. We need quick access to every single piece of build information from UI: so here it is. We manage our 70+ farm of build agent machines: so there are many administration-related features inside. We do branch, we do… and so on. Frankly, we have so many more ideas that the question is not _what_ to implement, but what to implement in the first place.

I’m not surprised that they use their own product, but what is interesting is the way they scratch their own itches. As a self-professed geek’s company, they are their own focus group.


JavaScript BDD, with Jasmine, without a browser

I’ve been test-driving the domain of my build radiator, XFD, with the lovely Jasmine BDD framework for JavaScript. Jasmine is lovely. Browsers aren’t. Spawning a new browser to run your tests has issues for me:

  • Spawning a browser takes time and ruins my flow,
  • I’m trying to drive out logic – having a browser present will lead my design towards unnatural couplings, and
  • It makes Continuous Integration that much harder.

So I started investigating what I could do with Rhino and Envjs to make testing with Jasmine more awesome. Ingvald Skaug had been there before. It took me some time to really understand how the pieces fit together, so I thought I’d expand on it.

Step 1: Check that Jasmine is working

I would have saved so much time if I’d started with this bit. What you need to do is download the core of Jasmine and stick it in your project. I started with the Jasmine RubyGem that spawns a browser and does the plumbing, but for this it’s back to basics. In my project it’s checked in at lib/jasmine-1.0.1. You need an HTML file to reference all the scripts and kick off the tests. Here’s an example derived from the Jasmine docs:


<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
  "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
  <title>Jasmine Test Runner</title>
  <link rel="stylesheet" type="text/css" href="lib/jasmine-1.0.1/jasmine.css">
  <script type="text/javascript" src="lib/jasmine-1.0.1/jasmine.js"></script>
  <script type="text/javascript" src="lib/jasmine-1.0.1/jasmine-html.js"></script>
 
  <!-- include source files here... -->
  <script type="text/javascript" src="src/Player.js"></script>
  <script type="text/javascript" src="src/Song.js"></script>
  
  <!-- include spec files here... -->
  <script type="text/javascript" src="spec/SpecHelper.js"></script>
  <script type="text/javascript" src="spec/PlayerSpec.js"></script>

</head>
<body>
  
<script type="text/javascript">
  jasmine.getEnv().addReporter(new jasmine.TrivialReporter());
  jasmine.getEnv().execute();
</script>

</body>
</html>

In my real project I generate this file at build time from an ERB template, to make sure I get all the source files and tests. However you do it, make sure it works in a browser first. Yes. Really.
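
If ERB isn’t your thing, a few lines of shell at build time do the same job; this sketch just spits out the blocks of script tags from whatever lives under src/ and spec/ (paths assumed to match the layout above):

#!/bin/bash
# Emit <script> includes for every source and spec file; splice the output
# into your SpecRunner template however your build prefers.
for f in src/*.js spec/*.js; do
  echo "  <script type=\"text/javascript\" src=\"$f\"></script>"
done > script-includes.html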

Step 2: Get the bits that you need

In my lib directory I have:

  • js.jar – which is the Rhino implementation of JavaScript. I already used this to run JSLint as part of my build.
  • env.rhino.1.2.js – which is Envjs – a DOM implementation written in JavaScript.
  • jasmine.console_reporter.js, jasmine.junit_reporter.js and envjs.bootstrap.js – all from Larry Myers’ excellent Jasmine Reporters project. Jasmine Reporters is really what glues everything together.

Step 3: Wire up Jasmine Reporters

You can have many Jasmine reporters wired up in the SpecRunner.html. In this example I’m leaving two in – the TrivialReporter that gives HTML/CSS reports, and the ConsoleReporter, which we’ll use later. Here’s the edit to the SpecRunner file now:


<script type="text/javascript">
  jasmine.getEnv().addReporter(new jasmine.ConsoleReporter());
  jasmine.getEnv().addReporter(new jasmine.TrivialReporter());
  jasmine.getEnv().execute();
</script>

Step 4: Put it all together

Here’s where it all happens. In my example I use a shell script but in real life the Rakefile that generates the SpecRunner file also fires up the JVM and checks STDOUT for error messages.

#!/bin/bash
java -jar lib/js.jar -opt -1 lib/envjs.bootstrap.js SpecRunner.html
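
The “check STDOUT” part that my Rakefile does can be sketched in shell too, assuming failing specs show up in the console reporter’s output as lines containing “Failed.”, mirroring the “Passed.” lines you can see further down:

#!/bin/bash
# Run the specs headless and fail the build if any of them failed.
java -jar lib/js.jar -opt -1 lib/envjs.bootstrap.js SpecRunner.html | tee jasmine.out
if grep -q "Failed." jasmine.out; then
  echo "Jasmine specs failed" >&2
  exit 1
fi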

envjs.bootstrap.js is worth examining, too:

load('lib/env.rhino.1.2.js');

Envjs.scriptTypes['text/javascript'] = true;

var specFile;

for (i = 0; i < arguments.length; i++) {
    specFile = arguments[i];
    
    console.log("Loading: " + specFile);
    
    window.location = specFile
}

This file takes the list of HTML files that you give it and tells the fake browser inside the JVM to load each one. Jasmine then fires up and runs your tests:

jsimpson@curie:~/Documents/workspace/jasmine-rhino-envjs$ ./jasmine 
[  Envjs/1.6 (Rhino; U; Linux i386 2.6.32-26-generic; en-US; rv:1.7.0.rc2) Resig/20070309 PilotFish/1.2.13  ]
Loading: SpecRunner.html
Runner Started.
Player : should be able to play a Song ... 
>> Jasmine Running Player should be able to play a Song...
Passed.
when song has been paused : should indicate that the song is currently paused ... 
>> Jasmine Running when song has been paused should indicate that the song is currently paused...
Passed.
when song has been paused : should be possible to resume ... 
>> Jasmine Running when song has been paused should be possible to resume...
Passed.
when song has been paused: 4 of 4 passed.
Player : tells the current song if the user has made it a favorite ... 
>> Jasmine Running Player tells the current song if the user has made it a favorite...
Passed.
#resume : should throw an exception if song is already playing ... 
>> Jasmine Running #resume should throw an exception if song is already playing...
Passed.
#resume: 1 of 1 passed.
Player: 8 of 8 passed.
Runner Finished.

There’s also a JUnit-compatible XML reporter, courtesy of Larry. This lets you make the Continuous Integration server report test results as usual.

Summary
I’m very impressed. All of my tests that used to run in the browser now run headless, with some fiddling of paths. I’m using the Jasmine jQuery plugin, which probably saved my bacon on the test that is too tightly coupled to views. I’ve collected the example on GitHub.

Props to Ingvald, Larry, and the Jasmine, Rhino and Envjs teams. You guys rock.


Links for 2010-12-06

Here are Monday’s links from My Twitter Stream.

Vendor news, 3/12/2010

Go West: There’s a new release of Go. Features include authentication on pipelines, and enhanced templating and reporting.

We built this team city on rock and roll: A new Team City release, with overdue Maven improvements, Bundler support, and a safer upgrade procedure (that one used to hurt).

Like thunder and lightning: Electric Cloud have been busy. There’s support for Android and Visual Studio in Electric Accelerator, and they’ve released a lighter (and cheaper) version of Electric Commander. I spoke to EC’s Usman Muzaffar about that:

Me: Why does Android support matter?
Usman: All the mobile device providers (most of whom are Electric Cloud customers) are doing Android development of some sort. Here’s a new platform, with a new codebase, extensibility points, and some (relatively) new tools (Git, Gerrit), all wrapped into some really old tools (Linux, GCC, Make), and it has the classic problem: how do I set up a production/release infrastructure that’s fast, easy, flexible and scalable? Electric Cloud solutions attack this problem head-on by making sure that dev teams doing Android can spin up the software factory to efficiently build/test/release product without asking them to jettison either the new cool tools or the old reliable ones. We also provide integration to support shared pools of test devices which are accessible to the build system.

Me: Is the market for Android devs big enough to worry about?
Usman: Yes! As noted above: all of our biggest customers have multiple projects of varying size underway. Android has clearly established itself as a critical platform that major handset, telecom and networking shops are watching carefully to ensure compatibility with their part of the stack.

Me: What’s the difference between the full Electric Commander edition and the workgroup edition?
Usman: Size of the deployment: Workgroup has all the same power, it just limits the number of concurrent users and hosts that can be enabled.

[ye gods, it’s the Windows NT license come back!]

Me: Can you integrate this with an in-flight project?
Usman: Very easy to get started; whatever commands/scripts/tools are currently used to launch the build/test/release process, simply import them into ElectricCommander. The “day zero” integration is easy and has immediate benefits: the build is centralized, metrics are collected, reports can be generated and driven, the whole team has access. All this just by pasting a command line into a web form: no need to re-write or change any part of your existing infrastructure.

[ I must try this out … ]

Me: How does this compare with TeamCity and Cruise?
Usman: TeamCity and Cruise are capable workgroup-level CI servers. They’re very prescriptive in how they work, laying out steps to follow via a wizard. We have that wizard, too, for customers wishing to use it. But one click away is the full power of the ElectricCommander interface, which lets you define completely arbitrary procedures, nest and parallelize them, build complex workflows out of simple parts, integrate with a wide variety of ALM tools, and completely manage the entire build/test/deploy process, including the part of that that is CI.

Update: JetBrains weren’t drawn on that. Yegor from JetBrains had a personal comment to make (more here). They don’t really seem to view the UI as a key feature. I have a suspicion that they sell to an entirely different market.

UI over XML configuration is a strong thing. Not that much for a super-expert user, but usually a big issue for part-time server maintainer. It takes only several users’ feedback cases to understand how much user’s time is saved by quick access to administration via UI (and users value that!). We do not consider administration UI a “key” TeamCity feature, just “must have” one 🙂
But if one loves to hack XML – here you are: all the TeamCity configuration is stored on disk for administrator to edit. The changes are applied immediately, without server restart.

Response from ThoughtWorks to come.


Links for 2010-12-01

Here are Wednesday’s links from My Twitter Stream.