Monthly Archives: May 2008

NAnt vs Ant: locations (NAnt Rant)

(Image taken from Nesster's Photostream)

I think it was 2002 when Dan North took me under his wing and showed me the location attribute of Ant. That was then. Now, I’m doing a lot of .NET build engineering. And I’m dying for this feature. Here’s an Ant build to demonstrate:

<project default="properties">
  <target name="properties">
    <mkdir dir="build" />
    <property name="value" value="build" />
    <property name="location" location="build" />
    <echo>
      here's the one with a value ${value}
      here's the one with a location ${location}
    </echo>
    <touch file="${value}/value.txt" />
    <touch file="${location}/location.txt" />
  </target>
</project>

Both the properties that are set represent a directory. Each has a relative path. What happens when you run it?

Buildfile: /Users/jsimpson/Documents/workspace/playpen/code/props-build.xml

properties:
[mkdir] Created dir: /Users/jsimpson/Documents/workspace/playpen/code/build
[echo]
[echo] here’s the one with a value build
[echo] here’s the one with a location /Users/jsimpson/Documents/workspace/playpen/code/build
[echo]
[touch] Creating /Users/jsimpson/Documents/workspace/playpen/code/build/value.txt
[touch] Creating /Users/jsimpson/Documents/workspace/playpen/code/build/location.txt

BUILD SUCCESSFUL
Total time: 1 second


Amazing. The property set with the location attribute does the right thing and works out its fully qualified path. It also deals with any platform-specific path separator issues and presents you with the appropriate path. You may not think that this matters; the touch command worked for the property that used a value attribute, right? Yes, but only because that task does the work for you. If you have a task or external command that doesn't do the right thing, it'll break.

NAnt doesn't have this. Here's the NAnt version, minus the location property that it can't express:

<project default="properties">
  <target name="properties">
    <mkdir dir="build" />
    <property name="value" value="build" />
    <echo>
      here's the one with a value ${value}
    </echo>
    <touch file="${value}/value.txt" />
  </target>
</project>

Giving us:

Buildfile: file:///Users/jsimpson/Documents/workspace/playpen/code/props.build
Target framework: Mono 2.0 Profile
Base Directory: /Users/jsimpson/Documents/workspace/playpen/code.
Target(s) specified: properties

Build sequence for target `properties’ is properties
Complete build sequence is properties

properties:

[echo]
[echo] here’s the one with a value build
[echo]
[touch] Touching file '/Users/jsimpson/Documents/workspace/playpen/code/build/value.txt' with '05/29/2008 21:04:25'.

BUILD SUCCEEDED

It gives the same result, but only because some tasks will work out that the path is relative and compensate. Other things won't.
The astute reader might guess that I let myself get bitten by this again today. Maybe one day I'll remember this, but right now I'd cheerfully give up NAnt's functions and switch back to Ant in a heartbeat. Please, someone tell me that I'm wrong and that there's a patch somewhere.
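In the meantime, the closest workaround I know of is NAnt's expression functions. Here's a rough sketch, assuming path::get-full-path behaves as documented (the property names are mine); it works, but it's hardly the one-attribute elegance of Ant:

<project name="locations" default="properties">
  <target name="properties">
    <mkdir dir="build" />
    <!-- expand the relative path into a fully qualified one by hand -->
    <property name="location" value="${path::get-full-path('build')}" />
    <echo message="here's the one with a location ${location}" />
    <touch file="${location}/location.txt" />
  </target>
</project>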


Checking out Hudson (can I give up CruiseControl?)


Downloading Hudson via SSH session to my build server

I don't work for ThoughtWorks anymore. I resigned from the Buildix team. Well, I hadn't checked into the Buildix svn repository since February 2007 anyhow. Now I'm free to evaluate the alternatives to CruiseControl. I might stick with CruiseControl, I might not. Anyhow, tonight I had a serious look at Hudson for the first time.

Installation on my Linux server was a doddle:

sudo su -
useradd -m -d /var/spool/hudson hudson
su - hudson
wget https://hudson.dev.java.net/files/documents/2402/98285/hudson.war
nohup java -jar hudson.war --httpPort=9090 --ajp13Port=8010 > hudson.log 2>&1 &

I stuck it in /var/spool because things that generate logs should go inside /var on a Unix system. I had to override the HTTP and AJP13 ports (AJP13 is a backend Apache connector protocol) because I'm already running Cruise on the defaults that Hudson uses. And it's running, if unconfigured.

Distributing it as a webapp is a stroke of genius, though I'll still need to configure a servlet container for Hudson to run in. That might just be my peculiar issue: I don't run services in my house if they can run on my VPS, so ideally I'd like access control via Apache. The command line above will have to do for now, though.

Hudson by default puts all of its working files in a '.hudson' directory in the user's home directory. I knew this, which is why I made an account with a home directory in /var above. The 'useradd -m -d /some/path username' command above creates a user and its home directory in the location that you specify. This is probably good enough for most cases, though packaging Hudson for a Unix distribution might be annoying. I'll have to look into that, too.

Next time I’ll have a look at configuring Hudson via the web interface. Does anyone else get slight class guilt from the butler metaphor?


Ant Best Practices: Use properties for configurability

(Photo taken from Catatronic's Photostream)

First time? Have a look here.

Okay. This time, it’s about properties. A property represents some fact about your build system: where something is located, or the state of something.

A property can look like this:

<property name="foo" value="bar"/>

or if you’re dealing with a path, then it ought to look like this:

<property name="foo.dir" location="bar"/>

The point that Eric makes in today's practice is that once you represent everything that is liable to change with a property, you can pass in new values to override the defaults, or simply change them in one place and enjoy the DRY-ness of it all.

There’s a couple of things that I want to hammer home here:

  • Properties in Ant are immutable. This is a good thing.
  • You can use property files to declaratively load properties.

It takes some getting used to, the idea that you can never change the value of an Ant property. Why is this a good thing? Because you can use it to cascade down through default values. You can always declare the properties that you most want to use first, and then declare the default ones last. And if you pass a property on the command line with -D (a convention borrowed from the Java command line), you know it has been set and will stick. Even NAnt, which chose not to make properties consistently immutable and is in my opinion poorer for it, treats properties from the command line as immutable.

So repeat. Properties are immutable. And that’s fine.
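A minimal sketch of that first-one-wins behaviour (the property name is invented for the example):

<project name="immutable" default="show">
  <!-- the first declaration Ant sees wins; a -D value from the command
       line is set before the build file is read, so it trumps both -->
  <property name="deploy.env" value="dev" />
  <property name="deploy.env" value="this is silently ignored" />
  <target name="show">
    <echo>deploy.env is ${deploy.env}</echo>
  </target>
</project>

Run it plain and you get dev; run it with -Ddeploy.env=qa and the command-line value sticks, no matter what the build file says.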

Now, onto the property files. There’s always some variation in IT projects. That instance of your service needs to talk to a different database. That developer needs to build with a different path because he’s got the worst computer in the room and they gave him an extra disk as a consolation prize.

(it’s worth dropping that computer down a flight of stairs if it affords you more consistent developer builds – not that I’d ever advocate destruction of company property)

Property files allow you to address the chaos by overriding the defaults. No, don't go and make the names of the property files too odd. Base them on a property that you can easily get, like the hostname of the machine or the name of the user. By using that as a key, you can load the property file for the correct user in one line of your Ant file:

<!-- override the defaults -->
<property file="${user.name}.properties"/>

In the example above, user.name is a property given to you by the Java Runtime Environment, so you can guarantee that it's going to be there. Place this line in the body of the Ant file so that it's available to every target. Wrapping it in a target is a recipe for disaster, as that target becomes a dependency for every other target in the build system. I still have nightmares about a target called resolve.properties on one project that I did. They also had the worst coffee in West London, if not the world.
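Put together, the whole cascade looks something like this (the property names and values are invented for the sake of the example):

<project name="overrides" default="show">
  <!-- per-user overrides load first; a missing file is silently ignored -->
  <property file="${user.name}.properties"/>
  <!-- team defaults come last, and only fill in whatever is still unset -->
  <property name="build.dir" location="build"/>
  <property name="db.url" value="jdbc:hsqldb:mem:dev"/>
  <target name="show">
    <echo>building into ${build.dir} against ${db.url}</echo>
  </target>
</project>

The developer with the consolation-prize disk just drops a jsimpson.properties file next to the build with build.dir=/disk2/build in it, and everyone else carries on with the defaults.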


A real Build Refactoring, in the wild


(image taken from A Princesses photostream)

As a build manager I have often looked on at my developer peers with a little envy. It's a niche position, which might explain why the tools seem to lag behind sometimes. There are plenty of editors out there that one can edit build files with; some you'd even want to use. But refactoring support is something I miss. And that's not just because I wrote an article on refactoring build files: it's a genuinely useful technique that would make me more effective.

Anyhow, I feel a little better that there's one authentic build refactoring available in the EAP version of ReSharper 4: Introduce Variable. On Friday I did actually introduce some variables (well, properties) in a NAnt build file that I was looking at. ReSharper did tend to throw exceptions, but I suppressed them and it soldiered on. I had a look at the latest IDEA and Eclipse releases yesterday and could find no build refactorings. Still, it's a start.
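For anyone wondering what "introduce variable" means in a build file, it amounts to something like this (a made-up NAnt fragment, before and after):

<!-- before: the same literal path repeated -->
<mkdir dir="C:\builds\acme\output" />
<copy file="acme.dll" todir="C:\builds\acme\output" />

<!-- after: the duplication pulled up into a property -->
<property name="output.dir" value="C:\builds\acme\output" />
<mkdir dir="${output.dir}" />
<copy file="acme.dll" todir="${output.dir}" />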

Deploying: Why artifacts are your friend


(image taken from Nancy’s photostream)

You're almost there. There's a single character in a single file that you need to change and you think the deployment is good to go. "There's no need to do a whole CI build and bother the QAs for this one", you think. "I'll just make the change against this tag, and merge it down later this afternoon." Oops. There you go. Provoking the Operations Manager again. You just made a decision on their behalf, and they probably won't like it.

The Operations Manager’s job is about keeping services available, and about managing risk. They tend to mitigate risk by insisting that all code releases have passed whatever QA process your organisation has.

When your users turn on the telly for Neighbours, the operations team will be deploying a release candidate. Allowing deployments direct from your VCS (even if you compile it) opens the door to the possibility (no matter how slight) that the release candidate might have a few "enhancements" to it, courtesy of someone who isn't aware of the QA process.

So this is why people have a job that involves taking code from a Version Control System like Subversion, checking out a tag that a developer emailed them, optionally building that code, and then deploying it to a QA or production environment. You might think that's inefficient. But when it comes to keeping a major web property or financial system online, there are different calculations going on, and they are about risk.

Every time software is deployed, there's always the risk that, no matter how diligent the development team has been, there are errors. Errors can lead to outages, and cause reputational damage at best. So having someone be the interface between the development team and the operations team and pull the right code across comes pretty cheaply.

Tools like Capistrano that work straight off Subversion aren't yet a good fit for some of these organisations because of this effect. Where those organisations are using Java and C# anyway, I think there's virtue in taking the code that is built via Continuous Integration and using that artifact as the clean, deployable unit. That's a repeatable process, and it doesn't involve humans typing stuff in, or clicking stuff, which always seems error-prone to me. Mind you, that could just be my typing.

Unclean. (does your CI server have an IDE installed on it?)


(image taken from SubFlux’s photostream)

Your build shouldn’t depend on an IDE. I’ve been saying that for a long time. It doesn’t matter that all the developers use the same IDE on your project. At least in the Java world that I have inhabited for most of the past 8 years, you absolutely should not need an IDE installed on your build server.

Yesterday I installed an IDE on the build server.

Casey Charlton and I both agree that in an ideal world, there’s no connection between the build and the IDE. I’ve been trying to find a reference to the “thou shalt not install an IDE on the damn build server” rule. I’ve found a pretty authoritative quote from Paul Duvall’s book:

You should avoid coupling your build scripts with an IDE. An IDE may be dependent on a build script, but a build script shouldn't be dependent on your IDE. … Creating a separate build script is important for two reasons:

1. Each developer may be using a different IDE, and it can be difficult to account for configuration differences in each IDE.

2. A CI server must execute an automated build without human intervention. Therefore, the same automated build script used by developers can and should be used by the CI server…

As usual, Paul is bang on. But my issue at the moment is Visual Studio. Not that I can't write code in it (I haven't really tried), but the fact that it's actually more than an IDE in the traditional sense. It's also the container for the testing framework. It's almost the entire stack. It comes with Crystal Reports (which I'm happy to say I didn't install) and other stacks of middleware, some of which you need to build your app. Ever tried to build a VSTO app? You need Visual Studio.

In theory, you don't need it, because the build tool for Microsoft projects (MSBuild) ships with the .NET Framework. In practice, it seems, you do.

So I installed it. And the build works, without me fiddling with the GAC. I can live with that. What works for you?

Ant Best Practices: Define Proper Target Dependencies


(Image taken from Nick Sieger’s Photostream)

Wondering what this post is about? Have a look here.

Last time I wrote, it was about reusing paths. Tonight, it's about the dependency graph. My [N]Antcall is evil post goes into some detail about dependency graphs. Let's just agree here that Ant targets tend to accumulate dependencies. The point that Eric makes is that the dependency graph (otherwise known as the dependency tree) grows as you add targets, and being a thing that accumulates over time, it can get crufty. That target that was just a short-term thing while Bob worked on a refactoring exercise (oh, the irony) is now central to your build. So from time to time you need to clean the dependencies out, like that burned cheese that melts off of pizzas and sticks to the bottom of the oven.

Some projects seem to run into trouble and throw dependencies out completely. Umm, good luck with that. Eric suggests a half-way-house approach: a few well-known targets that carry the dependencies on lesser-known but functional targets, which the more experienced can call directly at their own risk. I can go along with that.

My last thought is this: your default build should never depend on a clean target. Leave it to the compiler to have a go at working out what it should do. If you use Continuous Integration, you can make a little target that depends on clean just for that case. Your fellow developers will thank you.
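Something like this is all it takes (the target names and layout are mine, not from Eric's article):

<project name="no-clean-by-default" default="dist">
  <target name="clean">
    <delete dir="build" />
  </target>
  <target name="compile">
    <mkdir dir="build/classes" />
    <javac srcdir="src" destdir="build/classes" />
  </target>
  <!-- developers get incremental builds by default -->
  <target name="dist" depends="compile" />
  <!-- only the CI server pays for a full rebuild every time -->
  <target name="ci" depends="clean, dist" />
</project>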


Git – coming to a Windows computer near you?

Mono founder Miguel de Icaza just twittered about a Google Summer of Code project called Git# – implemented in C#, with no platform dependencies. Git is a powerful Distributed Version Control system that came from Linus Torvalds. While you can convince it to run on Windows, it has dependencies on the Unix toolchain. This project could change all that.

Ant Best Practices: Define and Reuse Paths

(Image taken from justpic's photostream)

Last time, we discussed dependencies. Today's installment of Ant Best Practices is purportedly about reusing filesets by reference ID, but it's really another way to avoid duplication. The title of Eric's original article is 'Define and Reuse Paths'. The same advice works for paths, filesets, and filtersets. Here's an example:

<project name="OnceAndOnlyOnce">
  <fileset id="output_files" dir="${build.dir}">
  <include name="duplicato-*.jar" />
  </fileset>
<target name="copy_stuff">
    <copy todir="${special.shiny.dir}">
      <fileset refid="output_files" />
    </copy>
  </target>
</project>

You can see that any other use of the fileset can refer to it by ID, rather than declaring it again explicitly. One of the scariest builds that I ever saw was 4,000 lines of XML in a single file. One of my erstwhile colleagues spent a morning going through it and replacing filterset declarations with references to one filterset. It's such a useful feature to have in a build tool, and a fine piece of advice to give to someone who's just starting out writing build scripts.
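Filtersets follow exactly the same pattern, which is pretty much what that morning's clean-up boiled down to (the token names and directories here are invented):

<project name="filters" default="filter_configs">
  <filterset id="deploy_tokens">
    <filter token="VERSION" value="1.0.42" />
    <filter token="DB_URL" value="jdbc:hsqldb:mem:dev" />
  </filterset>
  <target name="filter_configs">
    <!-- every copy that needs the tokens just refers to the one filterset -->
    <copy todir="staging">
      <fileset dir="templates" />
      <filterset refid="deploy_tokens" />
    </copy>
  </target>
</project>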

The other point from the original article is that you need to use this technique alongside some modular approach to handling paths. You just can’t really get away with having a path named ‘standard.jars.classpath.without.selenium‘ (and yes, that’s almost directly lifted from the build of a large Java project). You’re much better off trying to split things like classpaths into categories like build-time and runtime. That’s something I touched on in my article for the ThoughtWorks Anthology. I’m thinking of expanding on it in a future post. You like that idea? Let me know.
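By "categories" I mean something along these lines; the jar locations and class names are invented, but the shape is the point:

<project name="paths" default="compile">
  <path id="compile.classpath">
    <fileset dir="lib/build" includes="*.jar" />
  </path>
  <path id="runtime.classpath">
    <!-- everything needed at build time, plus the runtime-only jars -->
    <path refid="compile.classpath" />
    <fileset dir="lib/runtime" includes="*.jar" />
  </path>
  <target name="compile">
    <mkdir dir="build/classes" />
    <javac srcdir="src" destdir="build/classes" classpathref="compile.classpath" />
  </target>
  <target name="run" depends="compile">
    <java classname="com.example.Main">
      <classpath>
        <path refid="runtime.classpath" />
        <pathelement location="build/classes" />
      </classpath>
    </java>
  </target>
</project>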
