One of GitLab’s strengths is creating productivity-boosting constraints that positively impact system architecture and DevOps cycles. This is very clear in the Kubernetes-powered workflow, but discovering this can be impossible if an org is already locked into an ops-focused Kubernetes deployment.
Also, Kubernetes is entirely optional, so don’t force it before the operations team is ready. Avoid ClickOps, even if that means delaying orchestration.
Note: this article focuses on the ways in which GitLab supports development efforts leveraging Kubernetes or targeting Kubernetes. Running GitLab itself is up to whatever deployment type the operations team is comfortable doing.
Tools all want attention
The most beautiful part of GitLab’s workflow is that it is empowered by Kubernetes without requiring developers to interact with Kubernetes. This is due to GitLab’s presence throughout the DevOps lifecycle without requiring a bunch of other tools.
When you have a DevOps toolchain with several tools, each one wants to be the center of the user’s world. This is just not a reasonable expectation.
- Task management tools try to make management actions look like the real work, but they’re not
- Code quality and static security scans try to show charts to add context, but we just want to know if things are getting better or worse
- Dependency and package scans for CVEs and toxic licenses should be highlighted, but there isn’t a fancy graphical representation that adds value
- Build and deployment tools shouldn’t require handholding: deploy when triggered and roll back on failure
- Runtime analysis, performance tests, and the like show reports that need a human to add context
Basically, all of these things should inform decision points and provide feedback to the people doing the work and making decisions.
This dynamic is even more important when it comes to infrastructure abstractions. One of the biggest accelerators in containerized DevOps practices is the container abstraction removing a huge burden from the ops team entirely. Now that we have automation to keep these containers running and connected to storage and each other, why step back and drag developer attention down the stack?
Keep developer attention on the application
We don’t expect artists to do their own plumbing and pour their studio foundations. We have plumbers and construction workers who specialize in those things. Artists just sit in their studio expecting the structure not to fall over and the toilet to flush as desired.
An artist who shows up at his studio in the morning and inspects the foundation structure, fills in some cracks, and then turns on the water and checks all the faucets and drains is going to get a lot less art done.
The same is true for developers who are embarking on creative tasks, tasks which don’t benefit from down-stack distractions.
For day 2 operations this is even more important
Continuing the analogy a bit more, for an already-underway sculptural installation, having to change out the foundation or plumbing is dangerous and problematic. The physical movement can damage the piece. Sewage flooding or plumbing malfunctions can ruin the piece.
In the same way, an application already running in a container can be moved into Kubernetes if it’s warranted, but Kubernetes is entirely optional.
Optimal workflow for developers and Kubernetes
The workflow for a developer that deploys to Kubernetes should look like this:
- Developer writes software
- Everything else is automatic
- Developer changes automation to fit the evolving needs
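In GitLab terms, “everything else is automatic” can be as small as opting into the built-in Auto DevOps pipeline. This is a sketch of the minimal `.gitlab-ci.yml` for a compatible project (Auto DevOps can also be toggled in project settings with no file at all):

```yaml
# .gitlab-ci.yml — opt in to GitLab's bundled Auto DevOps pipeline.
# Build, test, scan, and deploy stages all come from the template;
# the developer writes software and pushes.
include:
  - template: Auto-DevOps.gitlab-ci.yml
```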
Step zero in GitLab is to go to the Kubernetes Integration panel and provision yourself a cluster. The operation requires entering credentials and then clicking “install” on some gitlab-managed-apps to cover the pieces that OpenShift handles.
GitLab doesn’t quite handle step two (“everything else is automatic”) perfectly yet, but it’s getting close. Auto DevOps currently enables containerization for many types of applications without adjustments, and it offers tiers of flexibility that add complexity only as needed.
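Those tiers of flexibility look like this in practice: keep the Auto DevOps template and override only the jobs that stop fitting. The Node-based `test` override below is a hypothetical example, not a recommendation:

```yaml
# .gitlab-ci.yml — keep Auto DevOps, override one job.
# Everything not redefined here stays automatic.
include:
  - template: Auto-DevOps.gitlab-ci.yml

# Hypothetical override: replace the default buildpack-driven test job
# with the project's own test runner.
test:
  image: node:18
  script:
    - npm ci
    - npm test
```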
Other tools, like OpenShift, have a complexity-hiding approach that limits ultimate flexibility and forces workflows and application architectures into specific patterns.
Is this just about OpenShift?
Mostly, yes. There are other wrapped Kubernetes deployments that add dashboards and other ClickOps mechanisms to make Kubernetes easier.
The big concern to address is that the way problems are solved shapes the workflow, and the workflow determines how effective the teams following it can be.
Shining a light on the situation and comparing apples to apples should clear up a lot of these misunderstandings. The comparison is difficult, though, so I will focus on the areas where OpenShift (and others) solve a problem that is also solved by GitLab Managed Apps or Auto DevOps, and on how these solutions optimize for different things and lead to different outcomes.
Solutions and their optimization
| Problem | OpenShift Solution | GitLab Solution |
| --- | --- | --- |
| IT has a cloud mandate | From the OpenShift for Business Leaders marketing: “By advocating for OpenShift, you’re helping operations teams to focus on what they need to manage, while developers can use it to deploy code the way they want to work.” *Optimizes for lock-in.* | This is only included because it’s in OpenShift marketing. It isn’t actually a solution to any problem. |
| Modernizing current applications is proving difficult | From the OpenShift for Business Leaders marketing: “Why is OpenShift the right platform for your digital teams to build new solutions with?” *Optimizes for new applications created from OpenShift templates.* | GitLab provides flexibility to take modest steps toward modernization: adding automation to a CI pipeline, containerizing, or strangling legacy apps, whichever is the appropriate next step. *Optimizes for business needs.* |
| Automating creation of containers | The OpenShift for Developers marketing page shows a workflow where all development happens locally and, once everything is perfect, OpenShift’s pipeline builds the container and deploys it. *Optimizes for deployment speed.* | In GitLab, we want developers to have production-like deployments the moment they create a feature branch. Review Apps enable this using Kubernetes namespaces. *Optimizes for production-like feedback to developers.* |
| Applications don’t work with the orchestrator unless they use a template | The OpenShift for Developers page has a “CLI Junkies” section about how easy it is to create a NodeJS app with a short command. *Optimizes for creating new applications rapidly from a template.* | GitLab provides templates for various languages and Auto DevOps for repos the Buildpack system understands, but GitLab really shines in incrementally building out a pipeline using only Dockerfiles and Bash scripts. *Optimizes for Day 2 operations.* |
| Deploying and managing a container platform is difficult | From the OpenShift for IT Operations page: “OpenShift is designed to make deploying and managing the container platform easier.” *Optimizes for teams lacking Kubernetes experience.* It should also be a tracked risk if a team is running production applications on a platform they can’t operate and reason about at a low level. | GitLab solves this with the Kubernetes panel provisioning a GKE or EKS cluster and managing it automatically. For on-prem solutions, Rancher, D2IQ, Kubespray, Kops, Anthos, or a dozen other options also solve this problem without the OpenShift shackles. *Optimizes for cloud-provisioned clusters.* |
| Developers always need new environments | The OpenShift IT Operations marketing page says developers are empowered to self-service by logging into OpenShift and clicking options to create and use the environments they may need. *Optimizes for developer self-service via clicking.* | GitLab’s approach is to grant the cluster administration role to GitLab and let developers benefit from it without having to log in and click menu options. *Optimizes for avoiding coordination and clicking.* |
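The “Dockerfiles and Bash scripts” path from the table above can start as a single hand-written job. This sketch uses GitLab’s predefined registry variables; the Docker image versions and tagging scheme are illustrative:

```yaml
# .gitlab-ci.yml — a minimal hand-rolled pipeline: build the repo's own
# Dockerfile and push it to the project's container registry.
stages:
  - build

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    # CI_REGISTRY* are predefined GitLab CI/CD variables.
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```

From here, deploy stages, scans, and review apps can be layered on one job at a time, which is the Day 2 incrementalism the table is describing.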
This pattern is repeated throughout OpenShift and many other products
An important set of evaluation criteria for tool selection is user experience and which aspects a tool optimizes.
- Is the optimized procedure on your organization’s critical path?
  - Creating hundreds of NodeJS apps per day versus deploying hundreds of review apps
- Should it even be a step that requires a specific action?
  - Clicking a web dashboard to request a static environment versus provisioning dynamic environments as needed and destroying them when the feature merges
- Is it something that positively impacts high-value applications?
  - A CLI that optimizes making new templated apps versus gradually automating and improving legacy applications
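The “dynamic environments” alternative maps onto GitLab’s environment and `on_stop` mechanism. In this sketch, the deploy and teardown scripts are hypothetical placeholders for whatever actually does the work (Helm, kubectl, or Auto DevOps):

```yaml
# .gitlab-ci.yml fragment — a per-branch review app, created on push to a
# merge request and destroyed when the environment is stopped.
deploy-review:
  stage: deploy
  script:
    - ./deploy.sh "review-$CI_COMMIT_REF_SLUG"    # hypothetical helper script
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    on_stop: stop-review
  rules:
    - if: $CI_MERGE_REQUEST_ID

stop-review:
  stage: deploy
  script:
    - ./teardown.sh "review-$CI_COMMIT_REF_SLUG"  # hypothetical helper script
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  rules:
    - if: $CI_MERGE_REQUEST_ID
      when: manual
```

No developer clicks a dashboard here: the environment appears with the branch and disappears with it.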
But now that we have OpenShift, we can just use it since it’s Kubernetes
You’ll run into problems, and I expect OpenShift and Kubernetes to diverge now that multiple clouds are running managed OpenShift deployments.
There is a free OpenShift called OKD that can be deployed and tested for those who are curious. Just make sure the options are presented to a developer community that will be impacted by the decision.
How do we decide?
I may have to write a post about just this. Evaluations like this aren’t particularly easy because GitLab’s benefits are tied to it being used for other DevOps stages as well.
It’s free to set up GitLab Core and use the Kubernetes integration, and with all of the recent features that were made free, the workflow is pretty nice. The security scanning and cross-project coordination features require GitLab Ultimate. You can always start with a free Gold trial on gitlab.com.
You can hook that up to a GKE or EKS cluster, push a pilot app, and start testing.
Be sure to include the developers in the evaluation process. They are the folks who know whether the workflow impacts will be positive or neutral/negative.