Last week, I wrote “Dick Jokes and DevOps” after someone was called out for making a dick joke on a devops mailing list. Although the reaction was largely positive, some valid criticisms highlighted flaws in the piece that I hope to correct in revisiting the topic.
It happened some time during primary school; I’m not sure exactly when. I was old enough to be given homework, but not so old that failing to do it had real consequences. I was already a bit of a goody two-shoes, and getting good marks was pretty important to me.

So, I’d been given some homework, and I got into trouble for not doing it. The truth was that I had done it, but I didn’t think I’d done it very well – so I lied, and said I hadn’t done it at all. I figured it was better to look like I hadn’t even tried than to have given it a good shot and come up short.
I shared that anecdote with a few people over drinks at DevOpsDays, the outward expression of a train of thought concerning my failure to blog. It even occurred to me as a neat framing device for a post reflecting on that failure, considered as part of an ongoing struggle with perfectionism. Neat, but disingenuous.
Update: This post has flaws that are addressed in a follow-up article.
So, someone made a dick joke on a devops mailing list.
It was pretty benign, as such things go – an allusion to penis enlargement in the context of a discussion about managing spam. I’d have made the same joke in a group of friends, though perhaps not in front of my mother. Certainly not in front of a nun, or on a global mailing list largely populated by people who I do not know, and who do not know me.
A couple of years ago, Stephen Nelson-Smith wrote a little book called Test-Driven Infrastructure with Chef. Somewhat ahead of its time, it gave a brief introduction to the idea of outside-in, test-driven development, in the context of infrastructure code. Since then, the rest of the world has started to catch up, and a variety of tools and community practices have grown up around the idea.
Last month, the greatly revised and expanded second edition of the book was released. I mean to write a full review when I’ve spent some more time with the book, but here’s the short version: if you use Chef, buy it.
That having been said, nothing in life is perfect, and that’s true here. The book optimistically assumes that Opscode (or whoever) would have managed to make a 1.0 release of Test-Kitchen by now. They haven’t, so you’ll currently have issues following some of the instructions when the recommended toolchain is introduced in Chapter 7.
Last week saw the second London DevOpsDays conference for 2013. BMC sponsored video of the plenary sessions, all of which are now on Vimeo. I think the videos look better than the sessions did in person (the lighting in the venue wasn’t great). Read on for my thoughts…
Yesterday, someone joined the #mcollective IRC channel to ask how to connect MCollective to Amazon’s Simple Queue Service. I explored that idea earlier this year, and decided not to pursue it, but it seems I didn’t get around to sharing the results of the experiment. Until now.
I was setting up a small infrastructure in EC2, but I wanted to have MCollective available. I couldn’t justify the cost of the extra capacity required to run ActiveMQ – I didn’t want the extra hassle, either. On the face of it, SQS seemed like a good place to start my investigation.
The recent release of an x509 security provider for MCollective has motivated me to do some more work on the mcollective cookbook. Although it worked well enough to play around with, the configuration was not especially flexible and the cookbook did not lend itself to wrapping.
MCollective 1.2 was the current release when I originally wrote the cookbook, and the configuration reflected that. New features introduced in later releases of MCollective weren’t enabled – it worked, but not at its best. The configuration defaults now reflect MCollective 2.2.
Configuration is almost entirely parameterised. If you find you still need to override the cookbook’s templates to implement your configuration, I’d like to hear about it so that I can improve this further.
The MCollective “identity” is now configurable via an attribute, but continues to default to node['fqdn']. When your node name and your identity match, you can use the chef-server discovery plugin – so this default may change in the next major release.
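If you’re wrapping the cookbook, overriding the identity might look something like the following. The attribute path here is my guess – check the cookbook’s attributes file for the real key:

```ruby
# Hypothetical wrapper-cookbook attributes file; the exact attribute
# path depends on the mcollective cookbook's own attributes.
node.default['mcollective']['identity'] = node['fqdn']
```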
Last year, Venda released a project to create and manage a simple x509 PKI using Chef and Chris Andrews introduced it with his blog post, “Deploying a PKI With Chef”.
A few people tried it out after the initial release (and submitted patches or bug reports – thank you!), and it has since been renamed to become the x509 cookbook, which you can find on the community site or on github.
I’ve found it useful of late, so let’s take another look.
What’s The Problem?
You’ve decided to SSL-enable one of your internal services, and that means you need an x509 certificate. The cheapest and easiest option is to generate a self-signed certificate, but this option is not without drawbacks.
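Generating one takes a single openssl invocation – the filenames and subject here are illustrative:

```shell
# Generate a private key and a self-signed certificate in one step
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout service.key -out service.crt \
  -subj "/CN=service.example.internal"
```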
When you connect to a service using a self-signed certificate, you can be confident that your communication is encrypted, but you can’t be sure who you’re communicating with. You are protected from attackers “sniffing” data from an insecure network, but not from attackers creating a fake service in front of the one you expect to connect to (a man-in-the-middle attack).
It’s also annoying to users, as most software will (rightly!) warn you that self-signed certificates are not to be trusted.
A better option is to run an internal Certificate Authority, and use that to sign the certificates for your SSL-enabled services. You can import your CA’s certificate into your browser (or OS), which will then trust services using certificates that it has signed.
It’s not hard to make your own CA, but getting a signed certificate for your service necessarily involves a number of steps:
- On the host, generate a secret key and a certificate signing request (CSR)
- Get the CSR to the internal CA
- Create a signed certificate using the CSR and the internal CA
- Get the signed certificate to the host
- Install the signed certificate
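Done by hand with openssl, the steps above look roughly like this (filenames and subjects are illustrative, and the CA creation is a one-off):

```shell
# One-off: create the internal CA (key plus self-signed CA certificate)
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -subj "/CN=Example Internal CA"

# 1. On the host: generate a secret key and a CSR
openssl req -new -newkey rsa:2048 -nodes \
  -keyout service.key -out service.csr \
  -subj "/CN=service.example.internal"

# 2. Get the CSR to the internal CA (scp, or whatever suits)

# 3. On the CA: create a signed certificate from the CSR
openssl x509 -req -in service.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out service.crt -days 365

# 4-5. Get service.crt back to the host and install it
```

Every one of those steps is a chance for manual error, which is exactly why automating them is attractive.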
Venda wanted to automate this process, and the x509 cookbook is the result.
I recently needed to run 32-bit Perl 5.8 on a 64-bit CentOS 6 system. Initial research suggested that perlbrew would be the easiest way of achieving this, but I wasn’t able to find a walkthrough.
Here’s what worked for me…
1. Install perlbrew
You’ll need to install perlbrew from the CPAN, and it has a load of dependencies. The wonderful App::cpanminus makes this experience as painless as possible, so I installed it before moving onto perlbrew itself.
```shell
# Bootstrap cpanminus, then use it to install perlbrew
# (my invocation, as best I recall – your setup may differ)
curl -L http://cpanmin.us | perl - --sudo App::cpanminus
sudo cpanm App::perlbrew
```
2. Initialise perlbrew
Next, get perlbrew ready for use. Pay attention to the output of the
init step – it will direct you to make a change to your shell configuration.
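In my case, that meant adding a line to my shell profile. The paths below assume perlbrew’s default root; trust perlbrew’s own output over my recollection:

```shell
perlbrew init
# perlbrew prints the exact line to add; with the default root it's:
echo 'source ~/perl5/perlbrew/etc/bashrc' >> ~/.bash_profile
source ~/perl5/perlbrew/etc/bashrc
```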
3. Install 32-bit Libraries
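The packages, as best I remember them – check yum search if the names have changed:

```shell
# 32-bit C library headers and gcc support library, needed for -m32 builds
sudo yum install glibc-devel.i686 libgcc.i686
```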
Installing these two packages was enough to build a 32-bit perl core. If you’re building additional XS modules against the 32-bit perl, they may require other 32-bit libraries to be installed.
4. Build A Perl
```shell
# Build a 32-bit perl: the -m32 flags force 32-bit compilation
# and linking (version and flags as best I recall)
perlbrew install perl-5.8.9 \
  -Dcc='gcc -m32' \
  -Accflags='-m32' \
  -Aldflags='-m32' \
  -Alddlflags='-shared -m32'
```
That’s all there is to it, though the result isn’t quite perfect. While the above invocation builds a 32-bit perl, it doesn’t override the system’s archname – so the resulting @INC looks something like this:

```
~/perl5/perlbrew/perls/perl-5.8.9/lib/5.8.9/x86_64-linux
~/perl5/perlbrew/perls/perl-5.8.9/lib/5.8.9
~/perl5/perlbrew/perls/perl-5.8.9/lib/site_perl/5.8.9/x86_64-linux
~/perl5/perlbrew/perls/perl-5.8.9/lib/site_perl/5.8.9
.
```
For my purposes, this is simply an aesthetic issue – the x86_64-linux directories contain 32-bit shared objects – and I chose not to spend any more time perfecting it. If you happen to know which option(s) I’m missing, please leave a comment below.
I recently attempted to use Chef to configure several VMs with software I wanted to play with. My goal only went as far as provisioning a minimally useful instance of each application – it didn’t need to be production-ready, but it did need to start.
The applications were:
- Jenkins, a CI tool written in Java,
- Graphite, a graphing tool written in Python, and,
- Sensu, a monitoring tool written in Ruby.
Additionally, Sensu requires:
- RabbitMQ, a message broker written in Erlang, and,
- Redis, a key-value store written in C.
None of the dependencies are particularly esoteric in a modern environment, though the variety seen here underscores the need for good configuration management. The guest OS was Ubuntu 12.04, as many community-contributed Chef cookbooks prefer or require Ubuntu, and I wanted things to proceed as smoothly as possible.