Jenkins Pipeline for Continuous Delivery
Jenkins Pipeline is a powerful suite of plugins that lets you implement continuous delivery pipelines as code. In this guide, we'll explore everything you need to know about Jenkins Pipeline - from basic concepts to advanced implementations.
Table of contents
- What is Jenkins Pipeline?
- Declarative vs Scripted Pipeline
- Getting started with Pipeline
- Pipeline concepts
- Declarative Pipeline syntax
- Scripted Pipeline syntax
- Advanced Pipeline techniques
- Pipeline as code best practices
- Monitoring your Pipeline
- Troubleshooting common issues
What is Jenkins Pipeline?
Jenkins Pipeline represents a significant evolution in the way we define and manage continuous integration and delivery workflows. At its core, Jenkins Pipeline is a set of plugins that supports implementing and integrating continuous delivery pipelines into Jenkins.
A continuous delivery pipeline is an automated expression of your process for getting software from version control right through to your users and customers. Every code change goes through multiple stages - building, testing, and deployment. Pipeline makes this process transparent, reliable, and repeatable.
I've been using Jenkins for nearly a decade now, and I can tell you that Pipeline has transformed how teams approach CI/CD. Before Pipeline, we'd string together a bunch of freestyle jobs with various triggers and dependencies. It worked, but it was messy and hard to maintain. Pipeline changed all that.
What makes Pipeline special is that it treats your delivery pipeline as code. You define your pipeline in a text file called a Jenkinsfile, which can be committed alongside your project's source code. This approach brings several benefits:
- Version control - Your pipeline definition lives with your code and evolves with it
- Code review - Team members can review and suggest improvements to the pipeline
- Durability - Pipelines can survive Jenkins controller restarts
- Pausability - You can pause execution and wait for human input
- Versatility - Complex workflows with forks, joins, and parallel execution are supported
- Extensibility - The Pipeline plugin system is highly extensible
Unlike traditional Jenkins jobs, Pipeline provides a more flexible, configurable, and powerful way to build, test, and deploy your applications.
Declarative vs Scripted Pipeline
Jenkins offers two syntaxes for defining your Pipeline: Declarative and Scripted. This is probably one of the most confusing aspects for newcomers to Pipeline, so let me clarify the differences.
Declarative Pipeline
Declarative Pipeline was introduced to provide a simpler, more structured way to write your pipelines. It uses a predefined structure and specific syntax that makes pipelines easier to read and write.
Here's a basic example of a Declarative Pipeline:
```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make'
            }
        }
        stage('Test') {
            steps {
                sh 'make check'
                junit 'reports/**/*.xml'
            }
        }
        stage('Deploy') {
            steps {
                sh 'make publish'
            }
        }
    }
}
```
Notice how the entire pipeline is enclosed within a `pipeline` block. The structure is rigid but clear - you define an agent, stages, and steps within those stages.
Scripted Pipeline
Scripted Pipeline, on the other hand, offers a more flexible approach using Groovy scripting. It was the original syntax for Jenkins Pipeline and gives you the power of a full programming language.
Here's the same pipeline written in Scripted syntax:
```groovy
node {
    stage('Build') {
        sh 'make'
    }
    stage('Test') {
        sh 'make check'
        junit 'reports/**/*.xml'
    }
    if (currentBuild.currentResult == 'SUCCESS') {
        stage('Deploy') {
            sh 'make publish'
        }
    }
}
```
Scripted Pipeline starts with a `node` block rather than a `pipeline` block, and the overall structure is more flexible.
Which should you choose?
In my experience, Declarative Pipeline is the better choice for most teams and projects. It's more structured, easier to read, and has better error reporting. The predefined structure helps ensure consistency across projects and teams.
Scripted Pipeline makes sense if:
- You need complex logic that's difficult to express in Declarative syntax
- Your team is very comfortable with Groovy programming
- You're maintaining legacy pipelines that were written in Scripted syntax
I started with Scripted Pipeline years ago because Declarative didn't exist yet. These days, I almost always reach for Declarative first, and only use Scripted sections (via the `script` block) when I need custom logic that's too complex for Declarative syntax.
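To make that concrete, here's what dropping into Scripted syntax from a Declarative Pipeline looks like - a minimal sketch, where the loop body is purely illustrative:

```groovy
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                echo 'Regular Declarative steps here'
                script {
                    // Inside a script block, full Groovy is available
                    def browsers = ['chrome', 'firefox']
                    for (b in browsers) {
                        echo "Testing against ${b}"
                    }
                }
            }
        }
    }
}
```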
Getting started with Pipeline
Let's walk through setting up your first Jenkins Pipeline. You have several options for creating a Pipeline in Jenkins:
- Through the classic Jenkins UI
- Through Blue Ocean (Jenkins' newer UI)
- By creating a Jenkinsfile in your source code repository
Using the classic Jenkins UI
- From the Jenkins dashboard, click "New Item"
- Enter a name for your Pipeline
- Select "Pipeline" as the item type and click "OK"
- Scroll down to the Pipeline section
- Write your Pipeline script directly in the script text area or select "Pipeline script from SCM"
- Click "Save"
Using Blue Ocean
- Click on "Open Blue Ocean" in the left navigation
- Click "Create a new Pipeline"
- Choose where your code is stored (GitHub, Bitbucket, etc.)
- Select your repository and follow the instructions
- Blue Ocean will help you create your first Pipeline
Creating a Jenkinsfile (recommended)
This is my preferred approach. Simply create a file named `Jenkinsfile` in the root of your source code repository:
```groovy
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                echo 'Hello World'
            }
        }
    }
}
```
This basic Pipeline simply prints "Hello World" to the console. To use this Jenkinsfile:
- Commit it to your repository
- In Jenkins, create a new Pipeline job
- In the Pipeline section, select "Pipeline script from SCM"
- Enter your repository details
- Save and run your Pipeline
Pipeline concepts
Before diving deeper into syntax, let's understand some key Pipeline concepts:
Pipeline
A Pipeline is your entire continuous delivery process defined as code. It's the top-level construct that contains all the stages, steps, and other elements needed to express how your application should be built, tested, and deployed.
Node
A node represents a machine that can execute your Pipeline. When you specify `agent any` in a Declarative Pipeline, you're telling Jenkins to use any available node to run your Pipeline.
Stage
Stages organize your Pipeline into distinct sections, typically representing different phases of your delivery process (build, test, deploy, etc.). Stages help visualize the Pipeline progress and make it easier to understand where things went wrong if there's a failure.
Step
Steps are the actual work performed in your Pipeline. They tell Jenkins what to do at a particular point in time. For example, `sh 'make'` is a step that executes the shell command `make`.
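To make the idea concrete, here are a few widely used built-in steps side by side - a sketch, with the commands themselves as placeholders:

```groovy
steps {
    echo 'A simple log message'        // print to the build log
    sh 'make'                          // shell command (Linux/macOS agents)
    bat 'msbuild app.sln'              // batch command (Windows agents)
    retry(3) {
        sh './flaky-integration-check.sh'   // retry a step up to 3 times
    }
    timeout(time: 5, unit: 'MINUTES') {
        sh 'make slow-tests'           // fail the step if it exceeds 5 minutes
    }
}
```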
These concepts apply to both Declarative and Scripted Pipeline, though how you express them differs between the two syntaxes.
Declarative Pipeline syntax
Let's explore Declarative Pipeline syntax in more detail. A Declarative Pipeline must always start with the `pipeline` block and contain specific sections and directives.
Sections in Declarative Pipeline
agent
The `agent` section specifies where the Pipeline will execute. It can be defined at the top level or within a stage.
```groovy
agent any    // Run on any available agent

// or
agent none   // No global agent; each stage must declare its own

// or
agent {
    label 'my-agent-label'   // Run on an agent with this label
}

// or
agent {
    docker 'maven:3.9.3-eclipse-temurin-17'   // Run inside this Docker container
}
```
stages
The `stages` section contains one or more `stage` directives. This is where most of your Pipeline work happens.
```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing...'
            }
        }
    }
}
```
steps
The `steps` section defines the actions to take in a stage. Every stage must have a `steps` section.
```groovy
stage('Build') {
    steps {
        sh 'mvn clean package'
    }
}
```
post
The `post` section defines actions to take after the Pipeline or a stage has completed. It supports several conditions, such as `always`, `success`, and `failure`.
```groovy
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh 'make check'
            }
        }
    }
    post {
        always {
            junit 'reports/**/*.xml'
        }
        failure {
            mail to: '[email protected]',
                 subject: "Failed Pipeline: ${currentBuild.fullDisplayName}",
                 body: "Something is wrong with ${env.BUILD_URL}"
        }
    }
}
```
Directives in Declarative Pipeline
environment
The `environment` directive sets environment variables for the Pipeline or a specific stage.
```groovy
pipeline {
    agent any
    environment {
        CC = 'clang'
    }
    stages {
        stage('Build') {
            environment {
                DEBUG_FLAGS = '-g'
            }
            steps {
                sh 'make'
            }
        }
    }
}
```
options
The `options` directive provides Pipeline-specific options, such as build discarding, timeouts, and concurrency control.
```groovy
pipeline {
    agent any
    options {
        timeout(time: 1, unit: 'HOURS')
        disableConcurrentBuilds()
    }
    stages {
        // ...
    }
}
```
parameters
The `parameters` directive defines parameters that a user can provide when running the Pipeline.
```groovy
pipeline {
    agent any
    parameters {
        string(name: 'DEPLOY_ENV', defaultValue: 'staging', description: 'Deployment environment')
        booleanParam(name: 'RUN_TESTS', defaultValue: true, description: 'Run tests')
    }
    stages {
        stage('Deploy') {
            when {
                expression { params.DEPLOY_ENV == 'production' }
            }
            steps {
                echo "Deploying to ${params.DEPLOY_ENV}"
            }
        }
    }
}
```
triggers
The `triggers` directive defines how the Pipeline should be triggered automatically.
```groovy
pipeline {
    agent any
    triggers {
        cron('H */4 * * 1-5')     // Run every 4 hours on weekdays
        pollSCM('H */4 * * 1-5')  // Check for SCM changes every 4 hours on weekdays
    }
    stages {
        // ...
    }
}
```
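Beyond cron schedules and SCM polling, a Pipeline can also be triggered by other jobs completing. A sketch using the `upstream` trigger, where the job name is a placeholder:

```groovy
triggers {
    // Re-run this Pipeline whenever the (hypothetical) job
    // 'library-build' finishes with result SUCCESS or better
    upstream(upstreamProjects: 'library-build', threshold: hudson.model.Result.SUCCESS)
}
```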
when
The `when` directive allows conditional execution of a stage based on specified conditions.
```groovy
stage('Deploy') {
    when {
        branch 'main'
        environment name: 'DEPLOY_TO', value: 'production'
    }
    steps {
        // Deploy to production
        echo 'Deploying to production'
    }
}
```
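The `when` directive also supports combinators such as `allOf`, `anyOf`, and `not` for richer conditions. A sketch, where the `RUN_DEPLOY` parameter is hypothetical:

```groovy
stage('Deploy') {
    when {
        allOf {
            branch 'release/*'                        // branch name matches the pattern
            not { changeRequest() }                   // not building a pull request
            expression { params.RUN_DEPLOY == true }  // hypothetical boolean parameter
        }
    }
    steps {
        echo 'All conditions met - deploying'
    }
}
```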
Matrix
Matrix is a powerful feature in Declarative Pipeline that allows you to generate multiple parallel stage executions with different combinations of variables. It's especially useful for testing on multiple platforms or with different configurations.
```groovy
pipeline {
    agent none
    stages {
        stage('Test') {
            matrix {
                axes {
                    axis {
                        name 'PLATFORM'
                        values 'linux', 'windows', 'mac'
                    }
                    axis {
                        name 'BROWSER'
                        values 'chrome', 'firefox', 'safari'
                    }
                }
                stages {
                    stage('Test') {
                        steps {
                            echo "Testing on ${PLATFORM} with ${BROWSER}"
                        }
                    }
                }
            }
        }
    }
}
```
This will run 9 parallel stages, one for each combination of platform and browser.
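When some combinations are invalid - Safari doesn't run on Linux or Windows, say - the `excludes` directive can prune them. A sketch that drops those two cells, leaving 7:

```groovy
matrix {
    axes {
        axis {
            name 'PLATFORM'
            values 'linux', 'windows', 'mac'
        }
        axis {
            name 'BROWSER'
            values 'chrome', 'firefox', 'safari'
        }
    }
    excludes {
        exclude {
            // Skip the safari cells on non-mac platforms
            axis {
                name 'PLATFORM'
                values 'linux', 'windows'
            }
            axis {
                name 'BROWSER'
                values 'safari'
            }
        }
    }
    stages {
        stage('Test') {
            steps {
                echo "Testing on ${PLATFORM} with ${BROWSER}"
            }
        }
    }
}
```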
Scripted Pipeline syntax
Scripted Pipeline gives you the full power of Groovy scripting. Unlike Declarative Pipeline, it doesn't have a predefined structure - you can write your Pipeline as a regular Groovy script.
Basic structure
A Scripted Pipeline typically begins with a `node` block:
```groovy
node {
    // Your pipeline code goes here
}
```
Stages and steps
In Scripted Pipeline, stages are optional but recommended for visualization:
```groovy
node {
    stage('Build') {
        // Build steps
    }
    stage('Test') {
        // Test steps
    }
}
```
Steps are the same as in Declarative Pipeline, but you can mix them with Groovy code:
```groovy
node {
    stage('Build') {
        sh 'make'
        def files = findFiles(glob: '*.txt')
        for (file in files) {
            echo "Found file: ${file.name}"
        }
    }
}
```
Flow control
Scripted Pipeline allows you to use standard Groovy flow control constructs:
```groovy
node {
    stage('Build') {
        sh 'make'
    }
    stage('Test') {
        sh 'make check'
    }
    if (currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
        stage('Deploy') {
            if (env.BRANCH_NAME == 'main') {
                sh 'make deploy-prod'
            } else {
                sh 'make deploy-dev'
            }
        }
    }
}
```
Exception handling
You can use try/catch/finally blocks for error handling:
```groovy
node {
    stage('Example') {
        try {
            sh 'exit 1' // This will fail
        } catch (exc) {
            echo 'Something failed, but I will continue'
        } finally {
            echo 'This will always run'
        }
    }
}
```
Differences from regular Groovy
While Scripted Pipeline looks like regular Groovy, there are some important differences:
- Serialization - Jenkins needs to serialize the Pipeline state to disk to support durability across restarts. Not all Groovy constructs can be serialized.
- CPS Transformation - Pipeline code is transformed using Continuation Passing Style (CPS) to enable features like pausing and resuming execution. This can affect how certain Groovy idioms behave.
For example, some common Groovy collection methods might not work as expected:
```groovy
// Closures passed to collection methods like .each may not behave
// as expected under the CPS transformation
def list = [1, 2, 3]
list.each { item ->
    echo "Item: ${item}"
}

// Use a for loop instead
for (item in list) {
    echo "Item: ${item}"
}
```
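When you genuinely need non-serializable Groovy (iterators, closure-heavy helpers), you can mark a method `@NonCPS` so it runs as ordinary Groovy outside the CPS transform - with the caveat that it must not call Pipeline steps and its state won't survive a controller restart. A sketch:

```groovy
// Runs outside the CPS transform, so closure-based
// collection methods behave like plain Groovy
@NonCPS
def sortedNames(List names) {
    return names.sort { a, b -> a <=> b }
}

node {
    echo "Sorted: ${sortedNames(['charlie', 'alpha', 'bravo'])}"
}
```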
Advanced Pipeline techniques
Now that we've covered the basics, let's look at some advanced techniques you can use in your Pipelines.
Shared Libraries
Shared Libraries allow you to define reusable Pipeline code that can be shared across multiple projects. This is essential for large organizations with many similar projects.
To use a Shared Library:
- Define the library in Jenkins global configuration
- Reference it in your Pipeline:
```groovy
// Load the library configured in Jenkins (the name is an example)
@Library('my-shared-library') _

pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                // Use a step defined in the shared library
                sayHello 'World'
            }
        }
    }
}
```
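On the library side, a custom step like `sayHello` lives in the library repository's `vars/` directory - a minimal sketch:

```groovy
// vars/sayHello.groovy in the shared library repository
def call(String name = 'human') {
    echo "Hello, ${name}."
}
```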
Parallel execution
You can run stages in parallel to speed up your Pipeline:
```groovy
pipeline {
    agent any
    stages {
        stage('Parallel Stage') {
            parallel {
                stage('Branch A') {
                    steps {
                        echo 'On Branch A'
                    }
                }
                stage('Branch B') {
                    steps {
                        echo 'On Branch B'
                    }
                }
            }
        }
    }
}
```
Using Docker
Jenkins Pipeline has excellent Docker integration, allowing you to run steps inside Docker containers:
```groovy
pipeline {
    agent any
    stages {
        stage('Test') {
            agent {
                docker { image 'node:16-alpine' }
            }
            steps {
                sh 'node --version'
                sh 'npm install'
                sh 'npm test'
            }
        }
    }
}
```
You can even use different containers for different stages:
```groovy
pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker { image 'maven:3.9.3-eclipse-temurin-17' }
            }
            steps {
                sh 'mvn --version'
            }
        }
        stage('Front-end') {
            agent {
                docker { image 'node:16-alpine' }
            }
            steps {
                sh 'node --version'
            }
        }
    }
}
```
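If the repository carries its own Dockerfile, the agent can build the image on the fly rather than pulling a published one - a sketch using the `dockerfile` agent type:

```groovy
pipeline {
    agent {
        // Build the container image from the Dockerfile at the repository root
        dockerfile true
    }
    stages {
        stage('Test') {
            steps {
                sh 'make check'
            }
        }
    }
}
```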
Handling credentials
Jenkins provides secure ways to handle credentials in your Pipeline:
```groovy
pipeline {
    agent any
    stages {
        stage('Deploy') {
            environment {
                // Access credentials by ID
                AWS_CREDS = credentials('aws-key')
            }
            steps {
                sh 'aws s3 ls'
            }
        }
    }
}
```
For username/password credentials, Jenkins automatically sets environment variables:
```groovy
environment {
    GITHUB_CREDS = credentials('github-credentials')
    // This sets:
    // GITHUB_CREDS     - contains "username:password"
    // GITHUB_CREDS_USR - contains the username
    // GITHUB_CREDS_PSW - contains the password
}
```
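For tighter scoping, the `withCredentials` step binds a credential only for the duration of a block, and Jenkins masks the values in the build log. A sketch, with the credential ID and repository URL as placeholders:

```groovy
pipeline {
    agent any
    stages {
        stage('Push') {
            steps {
                withCredentials([usernamePassword(
                        credentialsId: 'github-credentials',
                        usernameVariable: 'GIT_USER',
                        passwordVariable: 'GIT_PASS')]) {
                    // Single quotes on purpose: the shell expands the variables,
                    // so the secret never passes through Groovy string interpolation
                    sh 'git push https://$GIT_USER:[email protected]/example/repo.git main'
                }
            }
        }
    }
}
```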
Input steps
You can pause your Pipeline and wait for user input:
```groovy
pipeline {
    agent any
    stages {
        stage('Deploy to Production') {
            input {
                message "Deploy to production?"
                ok "Yes, deploy it!"
                parameters {
                    choice(name: 'TARGET_ENV', choices: ['prod-east', 'prod-west'], description: 'Target environment')
                }
            }
            steps {
                echo "Deploying to ${TARGET_ENV}"
            }
        }
    }
}
```
Pipeline as code best practices
Having worked with numerous Jenkins Pipelines, I've accumulated a few best practices that can save you a lot of headaches:
Keep your Pipeline code in version control
Always store your Jenkinsfile in your source code repository. This ensures that changes to your Pipeline are versioned, reviewed, and tied to specific code versions.
Make Pipelines readable
Use meaningful stage names and comments to make your Pipeline readable. Your future self (and your teammates) will thank you.
```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            // Build the application and create artifacts
            steps {
                sh 'make build'
            }
        }
        // ... more stages
    }
}
```
Fail fast
Configure your Pipeline to fail as quickly as possible when issues arise. This saves time and resources.
```groovy
pipeline {
    agent any
    options {
        skipStagesAfterUnstable()
    }
    stages {
        // ... stages
    }
}
```
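In the same spirit, when stages run in parallel you can abort the remaining branches as soon as one fails. A sketch using Declarative's `failFast` option (the make targets are placeholders):

```groovy
pipeline {
    agent any
    stages {
        stage('Checks') {
            failFast true   // abort sibling branches on the first failure
            parallel {
                stage('Lint') {
                    steps {
                        sh 'make lint'
                    }
                }
                stage('Unit tests') {
                    steps {
                        sh 'make check'
                    }
                }
            }
        }
    }
}
```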
Use timeouts
Always set timeouts to prevent hung Pipelines from consuming resources indefinitely:
```groovy
pipeline {
    agent any
    options {
        timeout(time: 1, unit: 'HOURS')
    }
    stages {
        // ... stages
    }
}
```
Archive artifacts and test results
Make sure to archive build artifacts and publish test results:
```groovy
stage('Test') {
    steps {
        sh 'make test'
    }
    post {
        always {
            junit 'test-results/**/*.xml'
            archiveArtifacts artifacts: 'build/libs/**/*.jar', fingerprint: true
        }
    }
}
```
Extract complex logic to Shared Libraries
If your Pipeline contains complex logic, consider moving it to a Shared Library:
```groovy
// vars/myDeployFunction.groovy in the Shared Library
def call(Map config) {
    // Complex deployment logic here
}

// In your Jenkinsfile
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                myDeployFunction(environment: 'production', region: 'us-east-1')
            }
        }
    }
}
```
Monitoring your Pipeline
Effective monitoring is crucial for maintaining healthy Jenkins Pipelines. Jenkins provides several ways to monitor your Pipelines:
Pipeline Stage View
The Pipeline Stage View plugin provides a visual representation of your Pipeline stages, making it easy to see the status and duration of each stage.
Blue Ocean
Blue Ocean offers a modern, visual way to view your Pipelines. It provides intuitive visualizations of Pipeline runs, branches, and parallel stages.
Jenkins Dashboard
The standard Jenkins dashboard gives you an overview of all your Pipelines and their status.
Email notifications
Configure email notifications to alert team members about Pipeline failures:
```groovy
post {
    failure {
        mail to: '[email protected]',
             subject: "Failed Pipeline: ${currentBuild.fullDisplayName}",
             body: "Pipeline failed at ${env.BUILD_URL}"
    }
}
```
Integration with external monitoring tools
For comprehensive monitoring, consider integrating Jenkins with external tools like Odown. Odown can monitor your Jenkins instance and notify you if it becomes unresponsive, ensuring that your CI/CD pipeline remains available and functioning correctly.
Troubleshooting common issues
Even the best-designed Pipelines can encounter issues. Here are some common problems and their solutions:
Pipeline hangs
If your Pipeline seems to hang:
- Check for infinite loops in your code
- Ensure that any input steps are being answered
- Verify that external services your Pipeline depends on are available
- Set appropriate timeouts
Out of memory errors
If you see memory-related errors:
- Increase the memory allocated to Jenkins
- Clean up workspaces after builds
- Optimize your build process to use less memory
Pipeline script errors
For syntax or script errors:
- Use the Pipeline Syntax Generator to help write correct Pipeline code
- Test your Jenkinsfile syntax with the Jenkins CLI's `declarative-linter` command
- Break down complex Pipelines into smaller, testable pieces
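For reference, the linter can be reached either over the SSH CLI or through the controller's HTTP validation endpoint - a sketch, with the host, port, and credentials as placeholders for your own Jenkins instance:

```shell
# Lint a Jenkinsfile via the Jenkins CLI over SSH
ssh -p 2222 jenkins.example.com declarative-linter < Jenkinsfile

# Or via the HTTP validation endpoint
curl -s -X POST -u user:api-token \
  -F "jenkinsfile=<Jenkinsfile" \
  https://jenkins.example.com/pipeline-model-converter/validate
```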
Agent connectivity issues
If agents can't connect or drop connection:
- Check network connectivity between the Jenkins controller and agents
- Verify agent configurations
- Look for resource constraints on the agents
Credentials issues
For credentials-related problems:
- Verify that the credentials exist in Jenkins
- Check that the credential ID is correct
- Ensure that the Jenkins user has access to the credential
- Use the credentials binding step correctly
Conclusion
Jenkins Pipeline offers a powerful, flexible way to define your continuous delivery workflows as code. Whether you choose Declarative or Scripted syntax, the ability to version, review, and reuse your Pipeline definitions brings significant benefits to your software delivery process.
In this guide, we've covered the basics of Jenkins Pipeline, explored both Declarative and Scripted syntax, and looked at advanced techniques and best practices. Armed with this knowledge, you should be able to create robust, efficient Pipelines for your projects.
To ensure your Jenkins instance and Pipelines are always available, consider using monitoring tools like Odown. Odown can help you track your Jenkins uptime, monitor your SSL certificates, and provide public status pages for your CI/CD infrastructure. With proper monitoring, you can quickly identify and resolve issues before they impact your delivery process.
Remember, effective Pipelines are not just about automation; they're about creating a reliable, consistent path from code to production that your entire team can understand and maintain.