After attending the ATARC IRS Container Forum late last year, I was inspired to share some of my perspective. Between the folks who spoke at that forum and my own interactions with federal agencies, it was clear that waterfall was leaking into DevOps.
I submitted a talk to the March 10th, 2020, DevOps Summit event, and it was accepted as a lightning talk (LinkedIn post).
The waterfall plan spends a lot of time building the “best practice pipeline” and then brings it to a portfolio of legacy and otherwise incompatible applications. If applications can be onboarded at all, they end up with either a huge amount of complexity in a properties file or an application reconfigured to meet the pipeline’s expectations. There are plenty of reasons this fails, but a few are:
- The DevOps team isn’t empowered to make changes to applications.
- There is no incentive to rearchitect the legacy applications at all.
- The pipeline’s build job is a hundred lines of logic to figure out the project when “bundle install” would suffice.
- The pipeline is complex, and any failure requires participation from the DevOps team and developers, as well as security or operations teams, depending on what is failing.
- Changes to the application require coordinating changes to the pipeline.
- New architectures and techniques can’t be used until the pipeline is ready.
- The pipeline includes compliance and security steps, so developers can’t modify it.
- The pipeline assumes all projects have the same maturity; experimental works are treated as though they’re mission critical.
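The build-job point above can be sketched in a few lines. A centralized pipeline grows a detection branch for every project type it must recognize, while a convention-based approach lets each repo declare its own one-line build step. This is a hypothetical Python sketch; the marker files, commands, and function names are illustrative, not any specific CI product:

```python
# Hypothetical sketch: a central pipeline's project-detection logic vs.
# a convention where each repo supplies its own build command.

# Centralized approach: the shared build job must recognize every
# project type in the portfolio (this table only grows over time).
MARKER_TO_BUILD = {
    "Gemfile": "bundle install",                            # Ruby
    "package.json": "npm ci",                               # Node.js
    "pom.xml": "mvn package",                               # Java/Maven
    "requirements.txt": "pip install -r requirements.txt",  # Python
}

def detect_build_command(repo_files):
    """Guess the build command from files in the repo root."""
    for marker, command in MARKER_TO_BUILD.items():
        if marker in repo_files:
            return command
    raise ValueError("unrecognized project type")

# Convention-based alternative: the repo itself declares how it builds,
# and the pipeline just runs whatever it declares.
def project_owned_build_command(repo_config):
    """Each project owns its build step; no central detection needed."""
    return repo_config["build"]

print(detect_build_command({"Gemfile", "app.rb"}))               # bundle install
print(project_owned_build_command({"build": "bundle install"}))  # bundle install
```

The second approach keeps the shared tooling small: when a new project type appears, nothing central has to change.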
So, my recommendations are:
- Treat DevOps pipelines like product features and empower the developers to create, use, and modify them as needed. Separate any compliance steps into a pipeline in the release phase. Run any fast security scans on the creative pipeline to provide informational feedback to the developers and product owners, but do not enforce anything about them.
- The DevOps team will be free to focus on new capabilities and supporting different types of applications: no more chasing down pipeline failures, no more perpetually onboarding applications as they evolve, and no ownership of the pipeline. All outputs are sprint-size job definitions that can be shared and used like Lego bricks to build advanced and effective pipelines.
- Allow experimental, new applications to be treated as such and, as they mature, add the change controls and configuration management necessary to ensure stability.
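The “Lego bricks” idea above can be illustrated with small, self-contained job definitions that developers compose into whatever pipeline their application needs, with enforcement living only in the release pipeline. A hypothetical Python sketch; the job names and composition API are assumptions, not a real CI system:

```python
# Hypothetical sketch: reusable job definitions ("Lego bricks")
# composed into per-application creative and release pipelines.

def run_pipeline(name, jobs, context):
    """Run each job in order, passing a shared context dict along."""
    print(f"pipeline: {name}")
    for job in jobs:
        job(context)
    return context

# Bricks the DevOps team publishes for anyone to reuse:
def bundle_install(ctx):
    ctx.setdefault("log", []).append("bundle install")

def unit_tests(ctx):
    ctx.setdefault("log", []).append("run unit tests")

def security_scan(ctx):
    # Informational only on the creative pipeline: record findings,
    # never fail the build.
    ctx.setdefault("log", []).append("security scan (informational)")

def compliance_gate(ctx):
    # Enforced only on the separate release-phase pipeline.
    ctx.setdefault("log", []).append("compliance gate (enforced)")

# Developers assemble the creative pipeline their application needs...
creative = run_pipeline("creative", [bundle_install, unit_tests, security_scan], {})

# ...while the release pipeline carries the enforced compliance steps.
release = run_pipeline("release", [bundle_install, unit_tests, compliance_gate], {})
```

Because each brick is self-contained, a failed job points directly at one small unit of work instead of a monolithic build script.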
This is quite a bit more content-rich than the short talk I delivered on the subject; in the talk itself, I really only hit the premise and the split-pipeline concept, then directed folks to this site for more!
You can find the slides here.