APIs are eating your consulting business

Last year we engaged a firm to do a non-core (and frankly, annoying) IT project.  We reasoned that their experience in their domain would mean that they’d have a better chance of success.

It turned out that their business was built around software from a third-party supplier, and a thick veneer of bullshit.  Of course, there were issues.  What floored me was that these consultants were helpless, like a 10-year-old flying an aircraft.  They didn't have any expertise, or a plan B.

We ended up escalating to their supplier, and in the end I fixed the issue with some bash scripts.  This wasn’t a good outcome; I didn’t want to be the hero.  I would have been happier to be told that under the bullshit and the organisational silos was a root cause that had an ETA for a fix.  A week after I declared hollow victory, the root cause was fixed.  Bah.

Fast-forward a few months, and we need to do some similar projects.  Naturally, I won't be going to these consultants who are selling billable time.  I've been offered someone (from another organisation) who can do "some of the boring work", but that misses the point: the rise and rise of APIs (and dynamic languages) means that there's a decreasing need to throw people at our problems.
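
To put that in concrete terms: many of the tasks we were quoted days of billable time for reduce to a short script against a vendor's API. The endpoint, fields, and token below are hypothetical; this is just the shape of script I mean:

```python
# Hypothetical sketch: disabling stale accounts through a vendor's REST API
# instead of paying for someone to click through an admin UI.
import os
import requests

BASE = "https://api.example-vendor.com/v1"   # hypothetical endpoint
token = os.environ["VENDOR_API_TOKEN"]       # keep credentials out of the script

session = requests.Session()
session.headers["Authorization"] = f"Bearer {token}"

accounts = session.get(f"{BASE}/accounts", params={"status": "inactive"}).json()
for account in accounts:
    # One API call per account replaces a manual, error-prone task.
    session.post(f"{BASE}/accounts/{account['id']}/disable")
    print(f"disabled {account['id']}")
```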

If I'm going to accept any help, it's got to be someone who can look at the problem domain, choose a tool and then write appropriate scripts (if needed) to reliably solve my problem.  If your business is built on humans racking up billable hours to do tasks that can be automated, all I can say is: good luck.


Upcoming conferences that I won’t be at, April 2015

I wanted to share the details of a few conferences.  I won’t be there, because I enjoy a life that’s mostly free of jetlag and commuting.

  • The Jenkins peeps are doing a world tour of the US East Coast, Europe, Israel, and the US West Coast.  That’s over summer if you’re in the Northern Hemisphere.  They’ll be doing a CD Summit at each of those conferences.
  • CITCON Europe is in Helsinki, in September.
  • CITCON North America is in October this year, in Ann Arbor, Michigan.
  • We hope to get CITCON Australia New Zealand organised for 2016.  We need lots more conferences in New Zealand; we have great coffee, a delightful semi-tropical climate and Raspberry Lamingtons.
  • At Neo, we’re doing GraphConnect Europe (in London) in May.

The 80/20 rule in Cloud Development

Guest Post by Brian Whipple, Marketing & Communications Manager at Cycligent.com.

There is a well-known rule in business called the 80/20 Rule. Introduced in economics to describe the distribution of wealth, the 80/20 Rule has become a common business management principle, summed up by this rule of thumb:

80% of effects come from 20% of causes and 80% of results come from 20% of effort.

I often use the 80/20 rule to shed light on my own work habits, or to analyze the causes behind results I have seen.

Recently, I read an older blog post on the 80/20 rule and how it applies to software development. The author described how, in software development, 80% of performance improvements come from optimizing 20% of the code. Of course, this is not an exact science; it is more a rough way for a developer to evaluate where their time goes.

Through experience I have learned that building a highly scalable, highly resilient, and highly available web application is extremely hard. Learning the ins and outs of AWS is no small feat for even the most experienced of developers, let alone someone new to coding for the cloud.

As a result, we came up with an 80/20 rule of our own when discussing cloud development:

Only 20% of the features offered by AWS are utilized. The other 80% are extremely complex and are usually not implemented/maintained.

Because building enterprise web apps is not always easy, the developers at our company, Cycligent, have found ways to optimize our work for the best results. Here are a couple of ways that we have put the 80/20 rule into action:

We found that heavy client-side applications were often the best fit for the applications we were developing. When they were combined with Node.js and MongoDB on the backend, we reaped a lot of benefits. The line between a front-end developer and a back-end developer became much more blurred, making our developers more versatile. Impedance-mismatch issues stemming from the client and server speaking different languages went away, which increased developer productivity through less context switching between frontend and backend code. In short, a small amount of time up front choosing good tools paid off many times over later.

We found that despite the promises of the cloud, it's not always easy. Spending some time up front to abstract away as many of the complexities as possible made our developers much more productive and happier, and lowered our defect rate. To give a specific example, when moving to the cloud and dealing with scaling our application and distributing the load across many servers, developers often got tripped up by when and how to properly communicate over a message bus. To alleviate that, we spent some time making the message routing happen automatically behind the scenes. Developers only had to focus on the actual backend logic, instead of worrying about when and where messages had to be exchanged, or that there was even a message bus at all.
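
Cycligent's actual routing layer isn't shown here (and their stack is Node.js); purely to illustrate the pattern, here is a minimal Python sketch in which developers register handlers for message types and the platform owns the bus and the dispatch. The message names and handlers are hypothetical:

```python
# Hypothetical sketch of hiding a message bus behind a handler registry.
# The platform owns dispatch(); developers only write handler functions.
import json

HANDLERS = {}

def handles(message_type):
    """Register the decorated function as the handler for one message type."""
    def register(func):
        HANDLERS[message_type] = func
        return func
    return register

@handles("order.created")
def on_order_created(payload):
    # Pure backend logic: no queues, topics, or server placement in sight.
    print(f"charging customer {payload['customer_id']}")

def dispatch(raw_message):
    """Called by the (hidden) bus consumer for every incoming message."""
    message = json.loads(raw_message)
    handler = HANDLERS.get(message["type"])
    if handler is not None:
        handler(message["payload"])

# The platform, not the developer, wires dispatch() to the real message bus.
dispatch(json.dumps({"type": "order.created", "payload": {"customer_id": 42}}))
```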

While it is a struggle to build and maintain highly available and distributed web applications, in our opinion the benefits of using AWS for cloud development far outweigh the drawbacks. We have implemented a few tools and processes to produce results in our company, and there are probably a lot more out there. We believe that in a lot of scenarios, finding tools and processes that line up with your company's development process could deliver 80% of the positive results.

Shameless Self Promotion

I am the Marketing & Communications manager at Cycligent. I contacted Julian Simpson because I respect the community of developers that has developed around influential blogs such as www.build-doctor.com and want to contribute to that community. I also wanted to share our new cloud platform that simplifies coding for the cloud. Go to www.Cycligent.com to learn more, and to sign up for a 30-day free trial.

Dashing through the glow [of displays]

I’ve been making dashboards for some stats we track at work.  I don’t want to trust another organisation with our data; too many dragons.

So that leaves an OSS framework.  I'm using Dashing right now to hit APIs for Google Analytics, Pingdom, etc.  I think Dashing's widgets will live a long time, even if the server side becomes unfashionable.  Nice to see recent commits on the project.

One issue was that I found the documentation on the widget types a little vague.   So I made myself a demo of all the widgets.  Here’s the source.

linux.conf.au in Auckland

Happy New Year.

I’m speaking about Graphs and Neo4j at linux.conf.au next Friday.  Don’t think I’m the star attraction though, I think that’s Linus.

Toxic Repo

If you can't dispose of toxic waste (say, by burning it or launching it into space using surplus ICBMs), then you probably need to contain it: stop innocents from stumbling across it, or stop the malicious from putting it to use.

The same issues apply to your source tree.  If you have Amazon Web Services credentials checked into a project on GitHub, that's a toxic repo.  You'll want to contain it, to protect people from intentionally or unintentionally damaging the resources that can be accessed with those credentials.

One of the problems of having your own toxic waste dump is that it's very easy to add more waste to the pile.  So that repo with a private key checked in might easily get an AWS credential, and a couple of months later, a raw database password.

Another is that sometimes, you might give the wrong people access.

What can you do about it?

  • Amazon's IAM is incredibly useful for limiting what a leaked credential can do inside AWS itself.  Use tools like that.
  • Be prepared to burn credentials if they are compromised.
  • Rotating any toxic credentials stored in a repo also helps; a minimal sketch of that follows below.
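
For AWS access keys in particular, the rotation itself can be scripted. This is only a sketch under a few assumptions (boto3 installed, credentials with IAM permissions already configured, and a hypothetical user name): create a replacement key, repoint whatever uses it, then delete the exposed one.

```python
# Sketch: rotate an IAM user's access key with boto3 (assumes AWS credentials
# with IAM permissions are already configured; the user name is hypothetical).
import boto3

iam = boto3.client("iam")
user = "build-agent"

# Create the replacement key first, then move your CI / deployment secrets
# over to it before burning the old one.
new_key = iam.create_access_key(UserName=user)["AccessKey"]
print(f"new key {new_key['AccessKeyId']} created; update your secrets store")

for old in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
    if old["AccessKeyId"] != new_key["AccessKeyId"]:
        # Deactivating first gives a grace period; deleting burns it for good.
        iam.delete_access_key(UserName=user, AccessKeyId=old["AccessKeyId"])
```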

Cleaning up some toxic waste yesterday was pretty good.  That’s one less dirty secret.

Upcoming Auckland Neo4j Events

So, I return to New Zealand. Spend most of a year hiding in a cabin and then fail to organise any events. And now they’ve all come at once:

  • James Rowlands is doing a talk on Neo4j for Python Devs at the Auckland Python Meetup. Tomorrow, February 19. James organised this, I’m appearing for moral support.
  • Neo Technology is a sponsor for CITCON Auckland 2014, and I’ll be giving away a few paper copies of the Graph Databases book.
  • We’re kicking off the Graph Database Auckland meetup on March 3.
  • The second Graph Database Auckland meetup features our Chief Scientist Jim Webber, on April 3. He’ll be showing off the awesome new features of Neo4j 2.0.
  • Jim is keynoting Codemania the next day. You will laugh. I guarantee it.

Happy 2014

2013 was busy.  It's hard to work remotely with people who are literally on the other side of the planet, and that goes some way to explaining why: there's no overlap in working hours, apart from the overlap I make myself.

To make things busier, we ended up buying a new Build Doctor HQ and moving from the country to the suburbs.  Moving from a cabin back to the spare room has its comforts.  Like plumbing.  There's a lot of work to do on the HQ, but it's nice to have a new long-term project.

This blog is a long-term project, too.  The last couple of years have seen it slide as I worked on other things and moved country.  It's no longer sponsored, and pursuing sponsorship doesn't work when I haven't been posting.  I almost ported to Ghost, but decided that I'd take the simplest option of moving it to wordpress.com and letting the content speak for itself.

A benefit of the move is saying goodbye to the  www in the blog URL, which fixes a mistake made in 2007.  That and never having to do another plugin update.

Now I just need to find something to write about.

Happy 2014.  Have a happy and productive year, wherever you are.


News, August 12

  • Resharper 8 is out, making Visual Studio usable [link]
  • Also YouTrack 5, I’ve never had the pleasure of that particular issue manager [link]
  • I’d love to go to FutureStack, New Relic’s user conference [link]
  • Heroku have announced a Labs preview of their pipeline support.  At Neo we have several apps deployed on Heroku, so I road-tested it this morning.  It does what it says on the tin, and shows which commit went where.  That's a challenge for some of the add-on providers who offer a similar service.  [link]

The Benefits of Fail-Safe Application Deployments

(A guest post by Dan Gordon of Electric Cloud)

Enterprises are building, testing, and deploying software faster and more frequently now than at any point in the past. Faced with unprecedented demands, many of these software development organizations are realizing their rollout processes are haphazard, at best. These improvised procedures lead directly to heightened numbers of costly, time-consuming errors that degrade their business agility. Production deployments remain the last mile hurdle in the agile world due to the disconnect between the Dev and Ops teams.

Fortunately, there is a well-regarded, proven collection of best practices and supporting technologies that can go a long way towards making the software deployment process more streamlined, safer and more robust. These fail-safe software deployment techniques deliver an impressive array of business and technological advantages.

  • Design for manufacturability – Transform your software design and implementation procedures into a more mechanized, repeatable series of steps. This helps make test results from earlier phases in the delivery cycle relevant for later stages, and lets you perform consistent tests in many scenarios over time.
  • Leverage the power of automation for your software delivery process – Eliminate the unrefined, often manual deployment processes that plague so many software development organizations. Comprehensive automation technology can have a meaningful impact on productivity and accuracy, just as it has for many other sophisticated business practices.
  • Design with failure in mind – The bottom line is that failures will occur despite your best efforts, so prepare for inevitable breakdowns. Determine what is an acceptable failure; by acceptable, we mean a failure that doesn't need to halt the entire deployment process. Define success and failure thresholds by tier, and allow for partial deployments to complete successfully (see the sketch after this list).
  • Test early and test often – Build a consistent deployment model and test it throughout the entire software deployment lifecycle. Your software deployment platform should reside at the heart of your testing efforts. Taking this approach uncovers any issues well before a crisis develops and lets you evolve the process so your production deployments are smooth and fail-safe.
  • Zero in on defects efficiently – Identifying and correcting defects tends to be laborious and often inadequate, but fortunately, specialized automation solutions are great for isolating and resolving these problems. This makes troubleshooting complex deployments much more efficient, and results in faster time-to-market.
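
None of the following is Electric Cloud's product; purely to make the "design with failure in mind" point concrete, here is a minimal sketch (tier names, servers, and thresholds are all hypothetical) of a rollout that tolerates an acceptable number of failures per tier instead of halting the whole deployment:

```python
# Hypothetical sketch of per-tier failure thresholds in a rolling deployment;
# deploy_to_server() is a stand-in for your real deployment step.

def deploy_to_server(server):
    print(f"deploying to {server}")
    return True  # stand-in: report success so the sketch runs end to end

TIERS = [
    # (tier name, servers, maximum fraction of servers allowed to fail)
    ("web",      ["web1", "web2", "web3", "web4"], 0.25),
    ("services", ["svc1", "svc2"],                 0.0),
]

def deploy(tiers):
    for name, servers, max_failure_rate in tiers:
        failures = sum(0 if deploy_to_server(s) else 1 for s in servers)
        if failures / len(servers) > max_failure_rate:
            # Beyond the acceptable threshold: halt rather than limp onwards.
            raise RuntimeError(f"{name}: {failures} failed, aborting deployment")
        # Within the threshold: a partial deployment still counts as a success.
        print(f"{name}: done with {failures} acceptable failure(s)")

deploy(TIERS)
```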

These techniques can make your software deployment experience faster, smoother and more reliable. By transforming complex software delivery processes into fail-safe production deployments, you will benefit from increased DevOps collaboration, reduced cost and a higher quality of delivered software.

Dan Gordon is a Product Manager at Electric Cloud. Dan brings over 20 years of experience in the IT software industry. At Electric Cloud, Dan is responsible for product strategy, product marketing, tactical alignment and execution with product development, sales and pre-sales enablement and support. Previously, Dan was a product manager and systems architect for the enterprise IT automation software business within HP Software. Dan has also held managing and systems engineering roles at Opsware and Sun Microsystems. Dan holds a bachelor of science in information and computer science from the University of California, Irvine. 
