As shown in the previous blog post, Jenkins enables you to write your own pipeline code, which can be shared among all pipelines in your Jenkins instance.
This post is part of a series in which I elaborate on my best practices for running Jenkins at scale, which might benefit Agile teams and CI/CD efforts.
Where we’ve focused on custom steps previously, I’ll now demonstrate how to create and use full declarative pipelines from your Shared Library.
This way you can minimize the amount of duplicated code in your projects by getting nearly all pipeline configuration from a central location: the pipelines in your individual artefacts (e.g. services, applications, projects, jars) no longer contain the full pipeline, only a reference to it.
You’ll avoid having to (manually) modify every variation of the pipeline whenever it changes. It also prevents users of the pipeline from making changes you wouldn’t want, such as skipping quality assurance steps or promoting to environments without consent or the correct procedures.
Basic setup
Running a declarative pipeline from your Shared Library requires some preparation, and you’ll need to have these following things in place:
- Your artefact needs to have a Jenkinsfile in its source tree
- Your Shared Library needs to have the complete pipeline in the vars-directory
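Concretely, the two pieces live in separate repositories, roughly like this (the repository and file names are illustrative):

```
shared-libs/                            # Shared Library repository
└── vars/
    └── defaultUtilityPipeline.groovy   # the complete declarative pipeline

my-service/                             # artefact repository
├── src/
│   └── ...
└── Jenkinsfile                         # contains only a reference to the pipeline
```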
Pipeline in library
Jenkins requires you to have the complete declarative pipeline in your Library: you cannot mix and match stages or parts of your pipeline from different globally available steps. It all needs to go in one file, and that file has to live in the vars directory.
An example of a declarative pipeline is below:
```groovy
// vars/defaultUtilityPipeline.groovy
// This pipeline requires no parameters as input.
def call(Map pipelineParams) {
    pipeline {
        agent none
        stages {
            stage('Build and Unit test') {
                agent { label 'maven' }
                steps {
                    script {
                        module_Maven('clean verify')
                    }
                }
                post {
                    always {
                        junit testResults: '**/target/surefire-reports/*.xml', allowEmptyResults: false
                    }
                }
            }
            stage('Publish to Nexus') {
                agent { label 'maven' }
                steps {
                    script {
                        echo "This is where we publish to Nexus"
                        module_Artifact.publish()
                    }
                }
            }
        }
        post {
            always {
                script {
                    module_Notification.sendEmail(currentBuild.result)
                }
            }
        }
    }
}
```
This pipeline has two stages, ‘Build and Unit test’ and ‘Publish to Nexus’, followed by a final post section that sends an email to notify you about the result of a particular build.
Eagle-eyed readers will have spotted that this entire pipeline is wrapped in a call method, which takes a Map of parameters as its input. The Map is not used here, but it can be whenever your pipeline needs configuration.
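As a sketch of how that Map could be put to use, the variation below reads a hypothetical deployBranch key (not part of the pipeline above) with a fallback default, and uses it to restrict the publish stage to one branch:

```groovy
// vars/defaultUtilityPipeline.groovy (sketch; 'deployBranch' is a hypothetical parameter)
def call(Map pipelineParams = [:]) {
    // Fall back to a sensible default when the caller passes an empty Map.
    def deployBranch = pipelineParams.get('deployBranch', 'master')
    pipeline {
        agent none
        stages {
            stage('Publish to Nexus') {
                // Only publish from the configured branch.
                when { branch deployBranch }
                agent { label 'maven' }
                steps {
                    script {
                        module_Artifact.publish()
                    }
                }
            }
        }
    }
}
```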
Jenkinsfile in repository of the artefact
In order to use this pipeline, you need to reference (call) it from the Jenkinsfile in the repository of your artefact. This is as easy as it sounds: just a reference to your Library and a single call to the file containing the pipeline.
This should be as follows:
```groovy
@Library('shared-libs') _

defaultUtilityPipeline([:])
```
Please note that the two square brackets with a colon in between are Groovy’s way of initialising an empty Map, which makes this a valid call.
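If your pipeline does accept parameters, the Jenkinsfile can pass them in that same Map; the deployBranch key below is a hypothetical example, not something the pipeline above reads:

```groovy
@Library('shared-libs') _

// Keys in the Map are whatever your central pipeline chooses to support.
defaultUtilityPipeline([deployBranch: 'master'])
```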
Jenkins
So, how would one use this setup, you might ask yourself.
I’d recommend using the excellent Multibranch Pipeline. It is bundled with Jenkins by default and offers, among many other things, automatic checkout of your source code, running your pipeline on (feature) branches, and a comprehensive overview of the status of all branches of interest of your artefact.
To create one, click on ‘New Item’ in Jenkins, and select the type Multibranch Pipeline from the list.
After that, select the repository of your artefact in the configuration, and save the configuration. It is that easy!
A second option is to run a regular pipeline (New Item -> Pipeline), but this will not let you run more than one branch from the same Jenkins Job.
Conclusion
To minimize the effort of maintaining a ‘standard’ pipeline that may be in use by many artefacts (or Jenkins Jobs), it is good practice to centralise the pipeline itself and only call it from the Jenkinsfiles inside the artefacts’ repositories.
This way, maintenance and (functional) changes to the pipeline can be done centrally, only once, and are automatically applied to all jobs using it, eliminating the highly error-prone manual modification of every occurrence.
In the next post I’ll start testing our library!