Versioning the wrong things is an antipattern of software configuration management. A couple of years ago I wrote a blog post about the evil of using a version control system as a filesystem, in response to a team member checking in ~250 MB of binary crap into our fragile little Perforce server.
Claudio Bezerra commented, and asked:
I saw that one visitor, Fabrizio Dutra, mentioned that versioning derivative files is a bad practice and I agree. However not everyone at my office agrees. Do you know of books or articles that confirm this assumption?
I don’t have a reference for you, Claudio, but I can tell you that it’s just plain wrong. My take on it is that it’s fear that drives people to version generated artifacts. If you perceive a risk that you may not be able to re-generate something, it’s tempting to version it.
My issue with doing that is that you end up with another risk – it can affect your ability to make changes. Your own project (or downstream projects) can treat those artifacts as something canonical, and lose all reference to the source files that created them. Also, if you do have trouble re-generating them, then it’s all too easy to fall back on your versioned artifacts. Will you still be using them in 2015?
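The usual mechanical fix is to tell your version control tool to refuse generated files outright, so nobody checks them in by accident. A minimal sketch, assuming a Git-style ignore file (Perforce has an equivalent in `P4IGNORE`); the specific directory names are illustrative, not from any particular project:

```
# Ignore generated artifacts -- they should come from the build, not the repo
build/
dist/
*.o
*.jar
*.class
```

Anything matching these patterns then has to be produced by the build, which keeps the pressure on the team to make the build reliably reproducible.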
If you version all the source files accurately, and practice Continuous Integration, you can make sure that you’re always in a position to generate any project artifact. Don’t forget to ensure that some of these critical projects get built from time to time. It’s a useful feature for a Continuous Integration server to be able to ‘tickle’ a project if it hasn’t been built for a month or so: environments change, licenses expire, and software rots.
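The ‘tickle’ idea above can be sketched in a few lines. This is a hypothetical helper, not the API of any real Continuous Integration server; the thirty-day threshold and the function name are assumptions for illustration:

```python
from datetime import datetime, timedelta

# Assumed policy: a project that hasn't built in ~30 days is due for a
# "tickle" build, just to prove it still builds in the current environment.
STALE_AFTER = timedelta(days=30)

def needs_tickle(last_built: datetime, now: datetime) -> bool:
    """Return True if the project hasn't been built within STALE_AFTER."""
    return now - last_built > STALE_AFTER

now = datetime(2009, 6, 1)
print(needs_tickle(datetime(2009, 4, 1), now))   # built two months ago -> True
print(needs_tickle(datetime(2009, 5, 20), now))  # built recently -> False
```

A CI server running a check like this nightly would catch expired licenses, changed environments, and other bit-rot before you actually need the artifact in a hurry.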
Photo thanks to KevinPoh