Deploying perl code

Robert Rothenberg robrwo at gmail.com
Mon Jul 28 19:51:12 BST 2014


On Thu, Jul 24, 2014 at 4:25 PM, David Cantrell <david at cantrell.org.uk>
wrote:

> Our "deployment process" at work isn't so much a well-documented
> dependable repeatable process as a modern dance interpretation of
> lemonparty. It needs taking out and shooting.
>
> ...

>
> I'm looking for tools that will make it easy to go from a bunch of code
> in a release branch on github to an updated bunch of servers, with
> minimal downtime. If it matters we're using Debian.
>


At $previous, a small shop, we deployed (mainly bespoke) Catalyst apps
using the following toolchain:

- Gitolite to manage access control for the repos.
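
  A gitolite.conf entry for this might have looked something like the
  following (the repo, group, and user names are invented for
  illustration):

    @devs       = alice bob
    repo apps/myapp
        RW+     = @devs
        R       = deploy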

- Git Flow to manage the git workflow, with a master branch and a devel
  (staging) branch.
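
  With stock git-flow, cutting a release looked roughly like this (the
  version number is invented):

    git flow release start 1.2.0
    # last-minute fixes, version bump, changelog ...
    git flow release finish 1.2.0

  where "finish" merges the release branch into master, tags it, and
  merges it back into the devel branch.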

- Minicpan repos, which were kept in git and mirrored on multiple
  servers.
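
  With CPAN::Mini, a minimal ~/.minicpanrc along these lines keeps the
  mirror current whenever minicpan is run (the paths are invented):

    local:  /srv/minicpan
    remote: http://www.cpan.org/

  Private dists can be injected into such a mirror with
  CPAN::Mini::Inject.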

- The devel servers were essentially identical to the live servers.

- Apps were installed in /opt/$name as self-contained installations (the
  exception being a shared Perl, though we had the ability to install an
  alternative Perl there).  The installations were git repos with tags
  for each release, so we could roll back if needed.

  Each instance of an app ran under a unique username.

  The idea was to build on the devel (staging) servers, using Carton,
  and push new versions to the repo.

  In reality, we never quite got there. We'd build the app in the user's
  $HOME/src directory (which was the source git repo) and just install
  the new version onto the live servers, updating git accordingly.
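
  Sketching the intended flow with an invented app name and release tag,
  an update on a live box would have been roughly:

    cd /opt/myapp
    git fetch --tags && git checkout v1.2.3   # the new release tag
    carton install --deployment   # exact deps from the Carton snapshot

  with a rollback being just a checkout of the previous tag followed by
  the same carton install.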

- Configuration lived in /etc/opt/$name and was managed by custom Puppet
  modules, which also contained symlinked web server configs for the
  sites. (If I'd known then what I know now, I'd have used a common
  configuration for the web server and managed the differences in
  Plack::Middleware, alas.)

  Each app had a custom Puppet module that used a base one but also set
  up databases and installed cron jobs as needed.
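
  A stripped-down sketch of such a per-app module (all names here are
  hypothetical):

    class myapp {
      include appbase   # the shared base module

      file { '/etc/opt/myapp/myapp.conf':
        ensure => file,
        source => 'puppet:///modules/myapp/myapp.conf',
      }

      cron { 'myapp-cleanup':
        command => '/opt/myapp/bin/cleanup',
        user    => 'myapp',
        hour    => 3,
      }
    }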

- Database versioning was handled with DBIx::Class::Migration, which
  worked well for vanilla Postgres (see the dbic-migration sketch after
  the steps below). It was (and maybe still is) buggy for PostGIS, so we
  ended up maintaining the deployment, upgrade, and downgrade SQL by
  hand in git, and only used the tool to apply upgrades and downgrades.

  Database upgrades were multi-step:
  1. Add the new columns/tables/etc. to the database;
  2. Release code that uses the new columns, tables, etc.;
  3. Release code that no longer uses the old ones;
  4. Update the database to delete the things no longer used.
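
  As a concrete (invented) example of those steps, renaming a column
  without downtime:

    -- step 1: add the new column alongside the old one
    ALTER TABLE users ADD COLUMN email_address text;
    UPDATE users SET email_address = email;
    -- steps 2 and 3 are code releases, not schema changes
    -- step 4: drop the old column once nothing reads it
    ALTER TABLE users DROP COLUMN email;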
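
  For the vanilla-Postgres case, the dbic-migration script drove the
  upgrades, along these lines (the schema class is invented):

    dbic-migration -Ilib --schema_class MyApp::Schema prepare
    # hand-edit the generated SQL if needed, and commit it to git
    dbic-migration -Ilib --schema_class MyApp::Schema upgrade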

Generally this worked out OK. We did most of the steps manually (it was
a small shop with a few apps), but we had helper shell scripts for them.
In theory it could have been automated completely, but we never felt the
need.
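
The helper scripts were essentially thin wrappers around the steps shown
above; a sketch, with all names and paths invented:

  #!/bin/sh
  # deploy-app: check out a release tag and restart the app
  set -e
  app="$1"
  tag="$2"
  cd "/opt/$app"
  git fetch --tags
  git checkout "$tag"
  carton install --deployment
  service "$app" restart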

