Working with customers, I notice some are rather hesitant to start using S2I builds (at least initially), because they may have an existing build system that has served all their needs up to that point.
It is important to remember that OpenShift does not impose the use of S2I from source on developers; it is just one of the possibilities. There are Dockerfile builds (the smarter alternative, Buildah, is about to become a supported build method), but also something we usually refer to as binary builds.
This last option, in addition to making it possible to add binary run-time dependencies after a source build, allows for easy integration with existing build systems outside of OpenShift, which may make their artefacts available in Jenkins, Nexus, Artifactory, or another similar tool.
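As an illustration, here is a minimal sketch of what a binary build can look like from the CLI, assuming a hypothetical application called "myapp", the Java builder image stream, and an artefact already produced by your external build system:

```
# Create a build config that takes binary input instead of a Git source
# ("myapp" and the builder image stream tag are placeholders)
oc new-build --name=myapp --binary=true --image-stream=openshift/java:11

# Feed the artefact fetched from the external build system (Nexus, Jenkins, etc.)
# directly to the builder pod
oc start-build myapp --from-file=target/myapp.jar --follow
```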
If you are using (or have considered) such an approach, how do you handle builds and releases across pipeline stages? How does your builder pod "know" what artefacts to pull in a dev build? Do you then tag and stage container images, or do you rebuild for each stage? If your production is a separate cluster, how do you hand off images for promotion? (One possible pattern is sketched below.)
Do you have a strategy to migrate to source builds eventually?
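For what it is worth, one pattern I have seen for the tagging and promotion questions above is to build the container image once, promote that exact image by re-tagging it, and copy it to another registry when production is a separate cluster. A rough sketch, with placeholder project and registry names:

```
# Promote the image that passed dev testing by re-tagging it, without a rebuild
oc tag myproject-dev/myapp:latest myproject-qa/myapp:promoted

# Hand off to a separate production cluster by copying the image between registries
# (registry hostnames and credentials setup are assumptions)
skopeo copy \
  docker://registry.dev.example.com/myproject-dev/myapp:promoted \
  docker://registry.prod.example.com/myproject-prod/myapp:promoted
```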
When talking to customers, it is important to make them aware that Docker builds come with security issues: anyone with access to the 'docker' command is essentially given root privileges. OpenShift S2I and Buildah are solutions for building container images with regular user privileges.
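As a quick illustration (the image name and Dockerfile location are placeholders), a Buildah build runs as an ordinary user, with no daemon and no root privileges required:

```
# Build from a Dockerfile as a regular, unprivileged user
buildah bud -t quay.example.com/myteam/myapp:dev .

# Push the result to a registry (login/credentials assumed to be configured)
buildah push quay.example.com/myteam/myapp:dev
```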
But, to add to this topic's question: which is the 'binary' artifact you move through the pipeline, from dev to QA to prod? Is it the application executable, such as a WAR file for Java applications, or the container image?