On Pipelines

Reasonable expectations of CI and CD in Brownfield applications

Continuous Integration (CI) and Continuous Delivery or Continuous Deployment (both CD) can be tricky enough on greenfield projects. What should be expected from a brownfield application, and what groundwork is necessary?

Pipeline capabilities

Different automation and CI/CD systems have different capabilities, though they overlap on the vast majority of use cases. Jenkins has Groovy scripts and GitLab has prefabricated Docker containers to handle complex tasks that don't fit neatly into declarative YAML and bash.
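As a sketch of those prefabricated containers in GitLab: the server ships CI templates that pull ready-made scanner images into a pipeline with a single include (the template name below is GitLab's own; no hand-written bash is needed for the scan itself):

```yaml
# .gitlab-ci.yml — pull in GitLab's prefabricated SAST scanner
# containers instead of scripting the scan by hand in bash.
include:
  - template: Security/SAST.gitlab-ci.yml
```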

Brownfield Jenkins

Jenkins can start out as a simple, manual script runner and grow from there. A lot of Jenkins deployments started off as just that and became business critical once a manager saw the dashboard view with its status indicators. This easy introduction creates a lot of system administrator familiarity with Jenkins.

It has a plugin ecosystem that lets non-coders configure jobs easily, and a Groovy scripting system for anything beyond the built-in and plugin scope.

On the other hand, GitLab CI uses a YAML file to declare the jobs, which are just lines of bash plus a few specific keywords that define when to run a job, caching, artifacts, and the git strategy. It's tied into a code repo, so few system administrators stumble into it. Most operations folks who use it do so from an infrastructure as code (IaC) perspective.
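A minimal `.gitlab-ci.yml` sketch of those keywords (the job name, paths, and build commands are illustrative; the keywords themselves — `rules`, `cache`, `artifacts`, `GIT_STRATEGY` — are GitLab's own):

```yaml
# Illustrative job: plain bash script lines plus a few GitLab keywords.
build:
  stage: build
  variables:
    GIT_STRATEGY: fetch        # how the runner obtains the repo
  cache:
    paths:
      - vendor/                # reused between pipeline runs
  script:
    - ./configure --prefix=/opt/app   # hypothetical build commands
    - make
  artifacts:
    paths:
      - dist/                  # handed to later jobs in the pipeline
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'   # when to run the job
```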

Summary of Jenkins: there is a risk that a brownfield app will have Jenkins pipelines that already handle aspects of the build and deployment process in a way that is detached from the development process. That disconnection leads to future deployments which deviate in some way and fail.

Brownfield with GitLab CI

From the GitLab server, any CI tool can clone the repo and run jobs to test, build, scan, deploy, rollback, etc. Since GitLab’s CI is no additional cost and works really well, these examples will be focused on that.

Runner Configurations

There are three principal approaches for running jobs related to older apps that don't use containers.

1. Use Containers Anyway

The idea here is that containers can do things with the code, even if the actual deployment isn’t into a set of containers. This is the approach GitLab’s AutoDevOps uses to deploy to Kubernetes, so the pattern is established and supportable.

It may be difficult to get clearance to configure GitLab's Docker executor on a Linux host, but that work will be immensely rewarding. Having containers opens the door to the security scanners, the code quality scanner, and a plethora of other platforms and tools which are likely incompatible with the staging environment.

It also allows parallelization to a much higher degree and at much less cost than shell executors.
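A sketch of the containers-anyway pattern, assuming a hypothetical make-based app: test and lint jobs run inside throwaway containers even though the deploy target is a plain staging host (image tags and the `make test` target are illustrative):

```yaml
# Jobs run in containers even though the app itself is not
# deployed as containers. Image tags are examples, not requirements.
unit-tests:
  image: debian:bookworm-slim
  script:
    - apt-get update && apt-get install -y build-essential
    - make test            # hypothetical test target

lint:
  image: koalaman/shellcheck-alpine   # example scanner image
  script:
    - shellcheck deploy/*.sh
```

Because each job declares its own image, the two jobs can also run in parallel on the same host without interfering with each other.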

2. Use Shell Executor

The fallback plan is basically to write down the manual process. For example:

I need to build this for the staging environment, so I:

  • download the artifacts (curl)
  • configure the web service (cp config /www/config)
  • set a version number (echo ${SEMVER} > /www/semver)
  • restore the database (psql < schema-and-seed.sql)
Take all of those steps and copy and paste the bash commands into the deploy job in the pipeline. The same process should be done for unit tests and builds. This creates tighter coupling between the runner and the project, since everything the scripts call has to already exist on the runner's host and be referenceable by the gitlab-ci.yml job definition.
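The steps above, pasted into a deploy job, might look like this (the job name, paths, and the ARTIFACT_URL variable are illustrative; note the database restore feeds the SQL file into psql on stdin):

```yaml
# Shell-executor deploy job: the manual steps, verbatim, as script lines.
deploy-staging:
  stage: deploy
  script:
    - curl -fsSL -o app.tar.gz "${ARTIFACT_URL}"   # download the artifacts
    - cp config /www/config                        # configure the web service
    - echo "${SEMVER}" > /www/semver               # set a version number
    - psql < schema-and-seed.sql                   # restore the database
  environment: staging
```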

3. Use the Custom Executor

The custom executor is like a shell executor with all of the assumptions taken away: you can do whatever you want. It can lead to much tighter coupling with projects and will ultimately take a lot more energy to maintain as the projects and systems evolve.