After we released Ushahidi v3, we turned around and released a SaaS version in under 2 months. Our server builds and deployments were all handled with Ansible: it was quick to get up to speed with, didn't require perfectly reusable code, and still gave us the freedom to spin up 6 extra servers on launch day, on little sleep, with no errors. Huge props to Zack, who jumped in and built the entire first version of this solo.
Come January we had a fairly typical problem though: because we were deploying from my laptop or Zack's, all the knowledge of how we deployed, and a few of the fragile aspects of the deploy, was held between the two of us. Our throughput was slow too: bugs might be fixed in 1-2 days, but it wasn't until a week later that we would deploy them.
We started looking for ways to automate our Ansible runs, but most solutions either used Ansible Tower (expensive) or Jenkins or some other CI task runner. Not being super keen on either of those, we were happy when one of our team suggested trying out Codeship's new Docker build environment.
So now we're running ansible-playbook from within Docker.
It took a few steps to get there:
1. Deployment-only playbooks, with no provisioning. We don't want to redo all the server setup on every deploy, just update the code and restart services.
2. Run Ansible inside Docker.
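The second step boils down to a wrapper along these lines. This is a minimal sketch: the image tag, playbook name, and inventory path are placeholders, not our actual file names.

```shell
# Build an image that has Ansible installed.
docker build -t deploy-tools .

# Run the deploy playbook from inside the container, mounting the
# playbook repo and the host's SSH keys (read-only) into it.
docker run --rm \
  -v "$PWD":/ansible \
  -v "$HOME/.ssh":/root/.ssh:ro \
  -w /ansible \
  deploy-tools \
  ansible-playbook -i inventory/production deploy.yml
```

Because the container carries a pinned Ansible install, the same command behaves identically on a laptop and in CI.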
Advantages:
1. We can run exactly the same thing locally in Docker
2. We can run tests, build steps, etc. in Docker too
3. We didn't have to do a whole-hog swap to deploying Docker in production
4. Most of the setup is provider-agnostic. Very little is tied directly to Codeship: they leverage Docker Compose, so almost all of the service config can be reused elsewhere.
How it works
- Example Dockerfile
- Example services file
- Example playbook
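To make those concrete, here are minimal sketches of each piece. Every file name, image tag, host group, and service name below is an illustrative assumption, not our production config, and the Codeship keys should be checked against their docs.

A Dockerfile for an image that can run ansible-playbook:

```dockerfile
# Hypothetical image for running Ansible; base image and packages are placeholders.
FROM python:2.7
RUN apt-get update && apt-get install -y openssh-client && \
    pip install ansible
WORKDIR /ansible
```

A Codeship services file (their docker-compose-style format), defining a service that mounts the playbook repo:

```yaml
# Hypothetical codeship-services.yml
deploy:
  build:
    image: deploy-tools
    dockerfile_path: Dockerfile
  encrypted_env_file: deploy.env.encrypted
  volumes:
    - ./:/ansible
```

A steps file that runs the playbook in that service, only on the master branch:

```yaml
# Hypothetical codeship-steps.yml
- service: deploy
  command: ansible-playbook -i inventory/production deploy.yml
  tag: master
```

And a deployment-only playbook that just updates code and restarts services, with no provisioning:

```yaml
# Hypothetical deploy.yml: host group, paths, and service name are placeholders.
- hosts: platform
  become: true
  tasks:
    - name: Update the application code
      git:
        repo: https://github.com/ushahidi/platform.git
        dest: /var/www/platform
        version: master

    - name: Restart the app service
      service:
        name: php5-fpm
        state: restarted
```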
Refinements:
- Just share 1 repo, because duplicating inventories sucks.
- How do we share this for client builds too?
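One way the shared repo could be laid out, so that a single inventory serves both the platform and client deploys. This layout is a guess at a sensible structure, not our actual repo:

```
deploy/
├── inventory/
│   ├── production      # one shared host inventory
│   └── staging
├── playbooks/
│   ├── platform.yml    # deploy the API
│   └── client.yml      # deploy the web client
└── roles/
```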
See also:
Links to platform and client test runners