Pipelines

Overview

A Pipeline is a collection of Tasks that you define and arrange in a specific order of execution as part of your continuous integration flow. Each Task in a Pipeline executes as a Pod on your Kubernetes cluster. You can configure various execution conditions to fit your business needs.

Configuring a Pipeline

A Pipeline definition supports the following fields:

  • Required:
    • apiVersion - Specifies the API version, for example tekton.dev/v1beta1.
    • kind - Identifies this resource object as a Pipeline object.
    • metadata - Specifies metadata that uniquely identifies the Pipeline object. For example, a name.
    • spec - Specifies the configuration information for this Pipeline object. This must include:
      • tasks - Specifies the Tasks that comprise the Pipeline and the details of their execution.
  • Optional:
    • resources - deprecated Specifies PipelineResources needed or created by the Tasks comprising the Pipeline.
    • params - Specifies the Parameters that the Pipeline requires.
    • workspaces - Specifies a set of Workspaces that the Pipeline requires.
    • tasks:
      • name - the name of this Task within the context of this Pipeline.
      • taskRef - a reference to a Task definition.
      • taskSpec - a specification of a Task.
      • resources - Specifies the PipelineResource that a Task requires.
      • runAfter - Indicates that a Task should execute after one or more other Tasks without output linking.
      • retries - Specifies the number of times to retry the execution of a Task after a failure. Does not apply to execution cancellations.
      • when - Specifies when expressions that guard the execution of a Task; allow execution only when all when expressions evaluate to true.
      • timeout - Specifies the timeout before a Task fails.
      • params - Specifies the Parameters that a Task requires.
      • workspaces - Specifies the Workspaces that a Task requires.
      • matrix - Specifies the Parameters used to fan out a Task into multiple TaskRuns or Runs.
    • results - Specifies the location to which the Pipeline emits its execution results.
    • description - Holds an informative description of the Pipeline object.
    • finally - Specifies one or more Tasks to be executed in parallel after all other tasks have completed.
      • name - the name of this Task within the context of this Pipeline.
      • taskRef - a reference to a Task definition.
      • taskSpec - a specification of a Task.
      • retries - Specifies the number of times to retry the execution of a Task after a failure. Does not apply to execution cancellations.
      • when - Specifies when expressions that guard the execution of a Task; allow execution only when all when expressions evaluate to true.
      • timeout - Specifies the timeout before a Task fails.
      • params - Specifies the Parameters that a Task requires.
      • workspaces - Specifies the Workspaces that a Task requires.
      • matrix - Specifies the Parameters used to fan out a Task into multiple TaskRuns or Runs.
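Putting the required fields together, a minimal Pipeline that also uses finally might look like the following sketch (the Task names build-push and cleanup are placeholders):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: minimal-pipeline
spec:
  tasks:
    - name: build-the-image   # references a hypothetical Task named build-push
      taskRef:
        name: build-push
  finally:
    - name: cleanup           # runs after all other Tasks have completed
      taskRef:
        name: cleanup         # hypothetical cleanup Task
```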

Specifying Resources

⚠️ PipelineResources are deprecated.

Consider using replacement features instead. Read more in documentation and TEP-0074.

A Pipeline requires PipelineResources to provide inputs and store outputs for the Tasks that comprise it. You can declare those in the resources field in the spec section of the Pipeline definition. Each entry requires a unique name and a type. For example:

spec:
  resources:
    - name: my-repo
      type: git
    - name: my-image
      type: image

Specifying Workspaces

Workspaces allow you to specify one or more volumes that each Task in the Pipeline requires during execution. You specify one or more Workspaces in the workspaces field. For example:

spec:
  workspaces:
    - name: pipeline-ws1 # The name of the workspace in the Pipeline
  tasks:
    - name: use-ws-from-pipeline
      taskRef:
        name: gen-code # gen-code expects a workspace with name "output"
      workspaces:
        - name: output
          workspace: pipeline-ws1
    - name: use-ws-again
      taskRef:
        name: commit # commit expects a workspace with name "src"
      runAfter:
        - use-ws-from-pipeline # important: use-ws-from-pipeline writes to the workspace first
      workspaces:
        - name: src
          workspace: pipeline-ws1

For simplicity you can also map the name of the Workspace in PipelineTask to match with the Workspace from the Pipeline. For example:

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: pipeline
spec:
  workspaces:
    - name: source
  tasks:
    - name: gen-code
      taskRef:
        name: gen-code # gen-code expects a Workspace named "source"
      workspaces:
        - name: source # <- mapping workspace name
    - name: commit
      taskRef:
        name: commit # commit expects a Workspace named "source"
      workspaces:
        - name: source # <- mapping workspace name
      runAfter:
        - gen-code

For more information about Workspaces, see the Workspaces documentation.

Specifying Parameters

(See also Specifying Parameters in Tasks)

You can specify global parameters, such as compilation flags or artifact names, that you want to supply to the Pipeline at execution time. Parameters are passed to the Pipeline from its corresponding PipelineRun and can replace template values specified within each Task in the Pipeline.

Parameter names:

  • Must only contain alphanumeric characters, hyphens (-), and underscores (_).
  • Must begin with a letter or an underscore (_).

For example, fooIs-Bar_ is a valid parameter name, but barIsBa$ or 0banana are not.

Each declared parameter has a type field, which can be set to either array or string. array is useful in cases where the number of compilation flags being supplied to the Pipeline varies throughout its execution. If no value is specified, the type field defaults to string. When the actual parameter value is supplied, its parsed type is validated against the type field. The description and default fields for a Parameter are optional.

The following example illustrates the use of Parameters in a Pipeline.

The following Pipeline declares two input parameters:

  • context which passes its value (a string) to the Task to set the value of the pathToContext parameter within the Task. If you specify a value for the default field and invoke this Pipeline in a PipelineRun without specifying a value for context, that default value is used.
  • flags which passes its value (an array) to the Task to set the value of the flags parameter within the Task. The flags parameter within the Task must also be an array.

Note: Input parameter values can be used as variables throughout the Pipeline by using variable substitution.

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: pipeline-with-parameters
spec:
  params:
    - name: context
      type: string
      description: Path to context
      default: /some/where/or/other
    - name: flags
      type: array
      description: List of flags
  tasks:
    - name: build-skaffold-web
      taskRef:
        name: build-push
      params:
        - name: pathToDockerFile
          value: Dockerfile
        - name: pathToContext
          value: "$(params.context)"
        - name: flags
          value: ["$(params.flags[*])"]

The following PipelineRun supplies a value for context:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pipelinerun-with-parameters
spec:
  pipelineRef:
    name: pipeline-with-parameters
  params:
    - name: "context"
      value: "/workspace/examples/microservices/leeroy-web"
    - name: "flags"
      value:
        - "foo"
        - "bar"

Adding Tasks to the Pipeline

Your Pipeline definition must reference at least one Task. Each Task within a Pipeline must have a valid name and a taskRef or a taskSpec. For example:

tasks:
  - name: build-the-image
    taskRef:
      name: build-push

or

tasks:
  - name: say-hello
    taskSpec:
      steps:
      - image: ubuntu
        script: echo 'hello there'

Note that any task specified in taskSpec will be the same version as the Pipeline.

Specifying Resources in PipelineTasks

You can use PipelineResources as inputs and outputs for Tasks in the Pipeline. For example:

spec:
  tasks:
    - name: build-the-image
      taskRef:
        name: build-push
      resources:
        inputs:
          - name: workspace
            resource: my-repo
        outputs:
          - name: image
            resource: my-image

⚠️ PipelineResources are deprecated.

Consider using replacement features instead. Read more in documentation and TEP-0074.

Specifying Parameters in PipelineTasks

You can also provide Parameters:

spec:
  tasks:
    - name: build-skaffold-web
      taskRef:
        name: build-push
      params:
        - name: pathToDockerFile
          value: Dockerfile
        - name: pathToContext
          value: /workspace/examples/microservices/leeroy-web

Specifying Matrix in PipelineTasks

🌱 Matrix is an alpha feature. The enable-api-fields feature flag must be set to "alpha" to specify Matrix in a PipelineTask.

⚠️ This feature is in a preview mode. It is still in a very early stage of development and is not yet fully functional.

You can also provide Parameters through the matrix field:

spec:
  tasks:
    - name: browser-test
      taskRef:
        name: browser-test
      matrix:
        params:
        - name: browser
          value:
          - chrome
          - safari
          - firefox

For further information, read Matrix.

Specifying Workspaces in PipelineTasks

You can also provide Workspaces:

spec:
  tasks:
    - name: use-workspace
      taskRef:
        name: gen-code # gen-code expects a workspace with name "output"
      workspaces:
        - name: output
          workspace: shared-ws

Tekton Bundles

Note: This is only allowed if enable-tekton-oci-bundles is set to "true" in the feature-flags configmap, see install.md

You may also specify your Task reference using a Tekton Bundle. A Tekton Bundle is an OCI artifact that contains Tekton resources like Tasks which can be referenced within a taskRef.

There is currently a hard limit of 20 objects in a bundle.

spec:
  tasks:
    - name: hello-world
      taskRef:
        name: echo-task
        bundle: docker.com/myrepo/mycatalog

Here, the bundle field is the full reference URL of the artifact. The name is the metadata.name field of the Task.

You may also specify a tag, as you would with a Docker image, which gives you a fixed, repeatable reference to a Task.

spec:
  tasks:
    - name: hello-world
      taskRef:
        name: echo-task
        bundle: docker.com/myrepo/mycatalog:v1.0.1

You may also specify a fixed digest instead of a tag.

spec:
  tasks:
    - name: hello-world
      taskRef:
        name: echo-task
        bundle: docker.io/myrepo/mycatalog@sha256:abc123

Any of the above options will fetch the image using the ImagePullSecrets attached to the ServiceAccount specified in the PipelineRun. See the Service Account section for details on how to configure a ServiceAccount on a PipelineRun. The PipelineRun will then run that Task without registering it in the cluster, allowing multiple versions of the same named Task to be run at once.

Tekton Bundles may be constructed with any toolsets that produce valid OCI image artifacts so long as the artifact adheres to the contract.
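The tkn CLI is one such toolset; a minimal sketch, assuming a Task definition saved in task.yaml (a placeholder file name) and push access to the repository:

```shell
# Publish a Task definition as a Tekton Bundle to an OCI registry.
# The repository reference and task.yaml are placeholders.
tkn bundle push docker.io/myrepo/mycatalog:v1.0.1 -f task.yaml
```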

Using the from field

If a Task in your Pipeline needs to use the output of a previous Task as its input, use the optional from field to specify a list of Tasks that must execute before the Task that requires their outputs as its input. When your target Task executes, only the version of the desired PipelineResource produced by the last Task in this list is used. The name of this output PipelineResource must match the name of the input PipelineResource specified in the Task that ingests it.

In the example below, the deploy-app Task ingests the output of the build-app Task named my-image as its input. Therefore, the build-app Task will execute before the deploy-app Task regardless of the order in which those Tasks are declared in the Pipeline.

- name: build-app
  taskRef:
    name: build-push
  resources:
    outputs:
      - name: image
        resource: my-image
- name: deploy-app
  taskRef:
    name: deploy-kubectl
  resources:
    inputs:
      - name: image
        resource: my-image
        from:
          - build-app

⚠️ PipelineResources are deprecated.

Consider using replacement features instead. Read more in documentation and TEP-0074.

Using the runAfter field

If you need your Tasks to execute in a specific order within the Pipeline but they don’t have resource dependencies that require the from field, use the runAfter field to indicate that a Task must execute after one or more other Tasks.

In the example below, we want to test the code before we build it. Since there is no output from the test-app Task, the build-app Task uses runAfter to indicate that test-app must run before it, regardless of the order in which they are referenced in the Pipeline definition.

workspaces:
- name: source
tasks:
- name: test-app
  taskRef:
    name: make-test
  workspaces:
  - name: source
    workspace: source
- name: build-app
  taskRef:
    name: kaniko-build
  runAfter:
    - test-app
  workspaces:
  - name: source
    workspace: source

⚠️ PipelineResources are deprecated.

Consider using replacement features instead. Read more in documentation and TEP-0074.

Using the retries field

For each Task in the Pipeline, you can specify the number of times Tekton should retry its execution when it fails. When a Task fails, the corresponding TaskRun sets its Succeeded Condition to False. The retries field instructs Tekton to retry executing the Task when this happens. retries are executed even when other Tasks in the Pipeline have failed, unless the PipelineRun has been cancelled or gracefully cancelled.

If you expect a Task to encounter problems during execution (for example, you know that there will be issues with network connectivity or missing dependencies), set its retries field to a suitable value greater than 0. If you don’t explicitly specify a value, Tekton does not attempt to execute the failed Task again.

In the example below, the execution of the build-the-image Task will be retried once after a failure; if the retried execution fails, too, the Task execution fails as a whole.

tasks:
  - name: build-the-image
    retries: 1
    taskRef:
      name: build-push

Guard Task execution using when expressions

To run a Task only when certain conditions are met, it is possible to guard task execution using the when field. The when field allows you to list a series of references to when expressions.

The components of when expressions are input, operator and values:

  • input is the input for the when expression which can be static inputs or variables (Parameters or Results). If the input is not provided, it defaults to an empty string.
  • operator represents an input’s relationship to a set of values. A valid operator must be provided, which can be either in or notin.
  • values is an array of string values. The values array must be provided and be non-empty. It can contain static values or variables (Parameters, Results or a Workspace's bound state).

The Parameters are read from the Pipeline and Results are read directly from previous Tasks. Using Results in a when expression in a guarded Task introduces a resource dependency on the previous Task that produced the Result.

The declared when expressions are evaluated before the Task is run. If all the when expressions evaluate to True, the Task is run. If any of the when expressions evaluate to False, the Task is not run and the Task is listed in the Skipped Tasks section of the PipelineRunStatus.

In these examples, the first-create-file Task is executed only if the path parameter is README.md, the echo-file-exists Task only if the exists Result from the check-file Task is yes, and the run-lint Task only if the optional lint-config Workspace has been provided by a PipelineRun.

tasks:
  - name: first-create-file
    when:
      - input: "$(params.path)"
        operator: in
        values: ["README.md"]
    taskRef:
      name: first-create-file
---
tasks:
  - name: echo-file-exists
    when:
      - input: "$(tasks.check-file.results.exists)"
        operator: in
        values: ["yes"]
    taskRef:
      name: echo-file-exists
---
tasks:
  - name: run-lint
    when:
      - input: "$(workspaces.lint-config.bound)"
        operator: in
        values: ["true"]
    taskRef:
      name: lint-source
---
tasks:
  - name: deploy-in-blue
    when:
      - input: "blue"
        operator: in
        values: ["$(params.deployments[*])"]
    taskRef:
      name: deployment

For an end-to-end example, see PipelineRun with when expressions.

when expressions are useful in many scenarios, including:

  • Checking if the name of a git branch matches
  • Checking if the Result of a previous Task is as expected
  • Checking if a git file has changed in the previous commits
  • Checking if an image exists in the registry
  • Checking if the name of a CI job matches
  • Checking if an optional Workspace has been provided

Guarding a Task and its dependent Tasks

To guard a Task and its dependent Tasks:

  • cascade the when expressions to the specific dependent Tasks to be guarded as well
  • compose the Task and its dependent Tasks as a unit to be guarded and executed together using Pipelines in Pipelines

Cascade when expressions to the specific dependent Tasks

Pick and choose which specific dependent Tasks to guard as well, and cascade the when expressions to those Tasks.

Taking the use case below, a user who wants to guard manual-approval and its dependent Tasks:

                                     tests
                                       |
                                       v
                                 manual-approval
                                 |            |
                                 v        (approver)
                            build-image       |
                                |             v
                                v          slack-msg
                            deploy-image

The user can design the Pipeline to solve their use case as such:

tasks:
...
- name: manual-approval
  runAfter:
    - tests
  when:
    - input: $(params.git-action)
      operator: in
      values:
        - merge
  taskRef:
    name: manual-approval

- name: build-image
  when:
    - input: $(params.git-action)
      operator: in
      values:
        - merge
  runAfter:
    - manual-approval
  taskRef:
    name: build-image

- name: deploy-image
  when:
    - input: $(params.git-action)
      operator: in
      values:
        - merge
  runAfter:
    - build-image
  taskRef:
    name: deploy-image

- name: slack-msg
  params:
    - name: approver
      value: $(tasks.manual-approval.results.approver)
  taskRef:
    name: slack-msg

Compose using Pipelines in Pipelines

Compose a set of Tasks as a unit of execution using Pipelines in Pipelines, which allows for guarding a Task and its dependent Tasks (as a sub-Pipeline) using when expressions.

Note: Pipelines in Pipelines is an experimental feature

Taking the use case below, a user who wants to guard manual-approval and its dependent Tasks:

                                     tests
                                       |
                                       v
                                 manual-approval
                                 |            |
                                 v        (approver)
                            build-image       |
                                |             v
                                v          slack-msg
                            deploy-image

The user can design the Pipelines to solve their use case as such:

## sub pipeline (approve-build-deploy-slack)
tasks:
  - name: manual-approval
    runAfter:
      - integration-tests
    taskRef:
      name: manual-approval

  - name: build-image
    runAfter:
      - manual-approval
    taskRef:
      name: build-image

  - name: deploy-image
    runAfter:
      - build-image
    taskRef:
      name: deploy-image

  - name: slack-msg
    params:
      - name: approver
        value: $(tasks.manual-approval.results.approver)
    taskRef:
      name: slack-msg

---
## main pipeline
tasks:
...
- name: approve-build-deploy-slack
  runAfter:
    - tests
  when:
    - input: $(params.git-action)
      operator: in
      values:
        - merge
  taskRef:
    apiVersion: tekton.dev/v1beta1
    kind: Pipeline
    name: approve-build-deploy-slack

Guarding a Task only

When when expressions evaluate to False, the Task will be skipped and:

  • The ordering-dependent Tasks will be executed
  • The resource-dependent Tasks (and their dependencies) will be skipped because of missing Results from the skipped parent Task. When we add support for default Results, then the resource-dependent Tasks may be executed if the default Results from the skipped parent Task are specified. In addition, if a resource-dependent Task needs a file from a guarded parent Task in a shared Workspace, make sure to handle the execution of the child Task in case the expected file is missing from the Workspace because the guarded parent Task is skipped.

On the other hand, the rest of the Pipeline will continue executing.

                                     tests
                                       |
                                       v
                                 manual-approval
                                 |            |
                                 v        (approver)
                            build-image       |
                                |             v
                                v          slack-msg
                            deploy-image

Taking the use case above, a user who wants to guard manual-approval only can design the Pipeline as such:

tasks:
...
- name: manual-approval
  runAfter:
    - tests
  when:
    - input: $(params.git-action)
      operator: in
      values:
        - merge
  taskRef:
    name: manual-approval

- name: build-image
  runAfter:
    - manual-approval
  taskRef:
    name: build-image

- name: deploy-image
  runAfter:
    - build-image
  taskRef:
    name: deploy-image

- name: slack-msg
  params:
    - name: approver
      value: $(tasks.manual-approval.results.approver)
  taskRef:
    name: slack-msg

If manual-approval is skipped, execution of its dependent Tasks (slack-msg, build-image and deploy-image) would be unblocked regardless:

  • build-image and deploy-image should be executed successfully
  • slack-msg will be skipped because it is missing the approver Result from manual-approval
    • dependents of slack-msg would have been skipped too if it had any of them
    • if manual-approval specifies a default approver Result, such as “None”, then slack-msg would be executed (supporting default Results is in progress)

Configuring the failure timeout

You can use the Timeout field in the Task spec within the Pipeline to set the timeout of the TaskRun that executes that Task within the PipelineRun that executes your Pipeline. The Timeout value is a duration conforming to Go’s ParseDuration format. For example, valid values are 1h30m, 1h, 1m, and 60s.

Note: If you do not specify a Timeout value, Tekton instead honors the timeout for the PipelineRun.

In the example below, the build-the-image Task is configured to time out after 90 seconds:

spec:
  tasks:
    - name: build-the-image
      taskRef:
        name: build-push
      timeout: "0h1m30s"

Using variable substitution

Tekton provides variables to inject values into the contents of certain fields. The values you can inject come from a range of sources including other fields in the Pipeline, context-sensitive information that Tekton provides, and runtime information received from a PipelineRun.

The mechanism of variable substitution is quite simple - string replacement is performed by the Tekton Controller when a PipelineRun is executed.

See the complete list of variable substitutions for Pipelines and the list of fields that accept substitutions.

For an end-to-end example, see using context variables.
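For instance, context variables can inject the names of the Pipeline and the executing PipelineRun into a Task's Parameters; a minimal sketch, assuming a hypothetical print-context Task:

```yaml
spec:
  tasks:
    - name: print-context
      params:
        - name: pipeline-name
          value: "$(context.pipeline.name)"      # name of this Pipeline
        - name: pipelinerun-name
          value: "$(context.pipelineRun.name)"   # name of the executing PipelineRun
      taskRef:
        name: print-context                      # hypothetical Task
```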

Using the retries and retry-count variable substitutions

Tekton supports variable substitution for the retries field of a PipelineTask. The variables context.pipelineTask.retries and context.task.retry-count can be used in the Parameters of a PipelineTask: context.pipelineTask.retries is replaced by the number of retries configured for the PipelineTask, while context.task.retry-count is replaced by the current retry number of the PipelineTask.

params:
- name: pipelineTask-retries
  value: "$(context.pipelineTask.retries)"
taskSpec:
  params:
  - name: pipelineTask-retries
  steps:
  - image: ubuntu
    name: print-if-retries-exhausted
    script: |
      if [ "$(context.task.retry-count)" == "$(params.pipelineTask-retries)" ]
      then
        echo "This is the last retry."
      fi
      exit 1      

Note: Every PipelineTask can only access its own retries and retry-count. These values aren’t accessible for other PipelineTasks.

Using Results

Tasks can emit Results when they execute. A Pipeline can use these Results for two different purposes:

  1. A Pipeline can pass the Result of a Task into the Parameters or when expressions of another.
  2. A Pipeline can itself emit Results and include data from the Results of its Tasks.

Passing one Task’s Results into the Parameters or when expressions of another

Sharing Results between Tasks in a Pipeline happens via variable substitution - one Task emits a Result and another receives it as a Parameter with a variable such as $(tasks.<task-name>.results.<result-name>). Array Results are supported as an alpha feature and can be referenced as $(tasks.<task-name>.results.<result-name>[*]); an individual array element can be referenced by index as $(tasks.<task-name>.results.<result-name>[i]), where i is the index. Object Results are also supported as an alpha feature and can be referenced as $(tasks.<task-name>.results.<result-name>[*]); individual object elements can be referenced as $(tasks.<task-name>.results.<result-name>.key).

Note: Whole Array and Object Results cannot be referenced in script and args. Note: Matrix does not support object and array Results.

When one Task receives the Results of another, there is a dependency created between those two Tasks. In order for the receiving Task to get data from another Task's Result, the Task producing the Result must run first. Tekton enforces this Task ordering by ensuring that the Task emitting the Result executes before any Task that uses it.

In the snippet below, a param is provided its value from the commit Result emitted by the checkout-source Task. Tekton will make sure that the checkout-source Task runs before this one.

params:
  - name: foo
    value: "$(tasks.checkout-source.results.commit)"
  - name: array-params
    value: "$(tasks.checkout-source.results.array-results[*])"
  - name: array-indexing-params
    value: "$(tasks.checkout-source.results.array-results[1])"
  - name: object-params
    value: "$(tasks.checkout-source.results.object-results[*])"
  - name: object-element-params
    value: "$(tasks.checkout-source.results.object-results.objectkey)"

Note: If checkout-source exits successfully without initializing the commit Result, the receiving Task fails, causing the Pipeline to fail with InvalidTaskResultReference:

unable to find result referenced by param 'foo' in 'task';: Could not find result with name 'commit' for task run 'checkout-source'

In the snippet below, a when expression is provided its value from the exists Result emitted by the check-file Task. Tekton will make sure that the check-file Task runs before this one.

when:
  - input: "$(tasks.check-file.results.exists)"
    operator: in
    values: ["yes"]

For an end-to-end example, see Task Results in a PipelineRun.

Note that when expressions are whitespace-sensitive. In particular, when producing Results intended as inputs to when expressions, be aware that tools such as cat or jq may append a trailing newline to their output; you may wish to truncate it.

taskSpec:
  params:
  - name: jsonQuery-check
  steps:
  - image: ubuntu
    name: store-name-in-results
    script: |
      curl -s https://my-json-server.typicode.com/typicode/demo/profile | jq -r .name | tr -d '\n' | tee $(results.name.path)

Emitting Results from a Pipeline

A Pipeline can emit Results of its own for a variety of reasons - an external system may need to read them when the Pipeline is complete, they might summarise the most important Results from the Pipeline's Tasks, or they might simply be used to expose non-critical messages generated during the execution of the Pipeline.

A Pipeline's Results can be composed of one or many Task Results emitted during the course of the Pipeline's execution. A Pipeline Result can refer to its Tasks' Results using a variable of the form $(tasks.<task-name>.results.<result-name>).

After a Pipeline has executed the PipelineRun will be populated with the Results emitted by the Pipeline. These will be written to the PipelineRun's status.pipelineResults field.
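For illustration, a PipelineRun whose Pipeline emits a Result named sum might report a status like the following sketch (the value shown is hypothetical):

```yaml
status:
  pipelineResults:
    - name: sum
      value: "30"   # hypothetical value emitted by the Pipeline
```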

In the example below, the Pipeline specifies a results entry with the name sum that references the outputValue Result emitted by the calculate-sum Task.

results:
  - name: sum
    description: the sum of all three operands
    value: $(tasks.calculate-sum.results.outputValue)

For an end-to-end example, see Results in a PipelineRun.

Array and object Results are supported as an alpha feature; see Array Results in a PipelineRun.

results:
  - name: array-results
    type: array
    description: whole array
    value: $(tasks.task1.results.array-results[*])
  - name: array-indexing-results
    type: string
    description: array element
    value: $(tasks.task1.results.array-results[1])
  - name: object-results
    type: object
    description: whole object
    value: $(tasks.task2.results.object-results[*])
  - name: object-element
    type: string
    description: object element
    value: $(tasks.task2.results.object-results.foo)

A Pipeline Result is not emitted if any of the following are true:

  • A PipelineTask referenced by the Pipeline Result failed. The PipelineRun will also have failed.
  • A PipelineTask referenced by the Pipeline Result was skipped.
  • A PipelineTask referenced by the Pipeline Result didn’t emit the referenced Task Result. This should be considered a bug in the Task and may fail a PipelineTask in the future.
  • The Pipeline Result uses a variable that doesn’t point to an actual PipelineTask. This will result in an InvalidTaskResultReference validation error during PipelineRun execution.
  • The Pipeline Result uses a variable that doesn’t point to an actual result in a PipelineTask. This will cause an InvalidTaskResultReference validation error during PipelineRun execution.

Note: Since a Pipeline Result can contain references to multiple Task Results, if any of those Task Result references are invalid the entire Pipeline Result is not emitted.

Note: If a PipelineTask referenced by the Pipeline Result was skipped, the Pipeline Result will not be emitted and the PipelineRun will not fail due to a missing Result.

Configuring the Task execution order

You can connect Tasks in a Pipeline so that they execute in a Directed Acyclic Graph (DAG). Each Task in the Pipeline becomes a node on the graph that can be connected with an edge so that one will run before another and the execution of the Pipeline progresses to completion without getting stuck in an infinite loop.

This is done using:

  • resource dependencies:

    • from clauses on the PipelineResources used by each Task
    • Results of one Task being passed into the params or when expressions of another

  • ordering dependencies:

    • runAfter clauses on the corresponding Tasks

For example, the Pipeline defined as follows

tasks:
- name: lint-repo
  taskRef:
    name: pylint
  resources:
    inputs:
      - name: workspace
        resource: my-repo
- name: test-app
  taskRef:
    name: make-test
  resources:
    inputs:
      - name: workspace
        resource: my-repo
- name: build-app
  taskRef:
    name: kaniko-build-app
  runAfter:
    - test-app
  resources:
    inputs:
      - name: workspace
        resource: my-repo
    outputs:
      - name: image
        resource: my-app-image
- name: build-frontend
  taskRef:
    name: kaniko-build-frontend
  runAfter:
    - test-app
  resources:
    inputs:
      - name: workspace
        resource: my-repo
    outputs:
      - name: image
        resource: my-frontend-image
- name: deploy-all
  taskRef:
    name: deploy-kubectl
  resources:
    inputs:
      - name: my-app-image
        resource: my-app-image
        from:
          - build-app
      - name: my-frontend-image
        resource: my-frontend-image
        from:
          - build-frontend

executes according to the following graph:

        |            |
        v            v
     test-app    lint-repo
    /        \
   v          v
build-app  build-frontend
   \          /
    v        v
    deploy-all

In particular:

  1. The lint-repo and test-app Tasks have no from or runAfter clauses and start executing simultaneously.
  2. Once test-app completes, both build-app and build-frontend start executing simultaneously since they both runAfter the test-app Task.
  3. The deploy-all Task executes once both build-app and build-frontend complete, since it ingests PipelineResources from both.
  4. The entire Pipeline completes execution once both lint-repo and deploy-all complete execution.

⚠️ PipelineResources are deprecated.

Consider using replacement features instead. Read more in documentation and TEP-0074.

Adding a description

The description field is an optional field that you can use to provide a description of the Pipeline.
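
For example, a minimal sketch (the Pipeline and Task names are illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-test   # illustrative name
spec:
  description: |
    Clones the application repository and runs the unit tests.
  tasks:
    - name: unit-tests
      taskRef:
        name: run-tests  # illustrative Task name
```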

Adding Finally to the Pipeline

You can specify a list of one or more final tasks under the finally section. finally tasks are guaranteed to be executed in parallel after all PipelineTasks under tasks have completed, regardless of success or error. finally tasks are very similar to PipelineTasks under the tasks section and follow the same syntax. Each finally task must have a valid name and a taskRef or taskSpec. For example:

spec:
  tasks:
    - name: tests
      taskRef:
        name: integration-test
  finally:
    - name: cleanup-test
      taskRef:
        name: cleanup

Specifying Workspaces in finally tasks

finally tasks can specify workspaces that PipelineTasks might have utilized, e.g. a mount point for credentials held in Secrets. To support that requirement, you can specify one or more Workspaces in the workspaces field for finally tasks, just as for tasks.

spec:
  workspaces:
    - name: shared-workspace
  tasks:
    - name: clone-app-source
      taskRef:
        name: clone-app-repo-to-workspace
      workspaces:
        - name: shared-workspace
          workspace: shared-workspace
  finally:
    - name: cleanup-workspace
      taskRef:
        name: cleanup-workspace
      workspaces:
        - name: shared-workspace
          workspace: shared-workspace

Specifying Parameters in finally tasks

Similar to tasks, you can specify Parameters in finally tasks:

spec:
  tasks:
    - name: tests
      taskRef:
        name: integration-test
  finally:
    - name: report-results
      taskRef:
        name: report-results
      params:
        - name: url
          value: "someURL"

Specifying matrix in finally tasks

🌱 Matrix is an alpha feature. The enable-api-fields feature flag must be set to "alpha" to specify Matrix in a PipelineTask.

⚠️ This feature is in a preview mode. It is still in a very early stage of development and is not yet fully functional.

Similar to tasks, you can also provide Parameters through matrix in finally tasks:

spec:
  tasks:
    - name: tests
      taskRef:
        name: integration-test
  finally:
    - name: report-results
      taskRef:
        name: report-results
      params:
        - name: url
          value: "someURL"
      matrix:
        params:
        - name: slack-channel
          value:
          - "foo"
          - "bar"

For further information, read Matrix.

Consuming Task execution results in finally

finally tasks can be configured to consume Results of PipelineTasks from the tasks section:

spec:
  tasks:
    - name: clone-app-repo
      taskRef:
        name: git-clone
  finally:
    - name: discover-git-commit
      params:
        - name: commit
          value: $(tasks.clone-app-repo.results.commit)

Note: The scheduling of such finally task does not change, it will still be executed in parallel with other finally tasks after all non-finally tasks are done.

The controller resolves task results before executing the finally task discover-git-commit. If the task clone-app-repo failed, or was skipped due to a when expression, leaving the task result commit uninitialized, the finally task discover-git-commit is added to the list of skippedTasks and the remaining finally tasks continue executing. The Pipeline exits with the reason Completed instead of Succeeded whenever a finally task is added to the list of skippedTasks.

Consuming Pipeline result with finally

finally tasks can emit Results, and these Results can be configured as Pipeline Results. References to Results from finally follow the same naming convention as references to Results from tasks: $(finally.<finally-pipelinetask-name>.results.<result-name>).

results:
  - name: comment-count-validate
    value: $(finally.check-count.results.comment-count-validate)
finally:
  - name: check-count
    taskRef:
      name: example-task-name

In this example, pipelineResults in status will show the name-value pair for the result comment-count-validate which is produced in the Task example-task-name.

PipelineRun Status with finally

With finally, PipelineRun status is calculated based on PipelineTasks under tasks section and finally tasks.

Without finally:

PipelineTasks under tasks                             | PipelineRun status | Reason
all PipelineTasks successful                          | true               | Succeeded
one or more PipelineTasks skipped and rest successful | true               | Completed
single failure of PipelineTask                        | false              | Failed

With finally:

PipelineTasks under tasks                             | finally tasks                    | PipelineRun status | Reason
all PipelineTasks successful                          | all finally tasks successful     | true               | Succeeded
all PipelineTasks successful                          | one or more finally tasks failed | false              | Failed
one or more PipelineTasks skipped and rest successful | all finally tasks successful     | true               | Completed
one or more PipelineTasks skipped and rest successful | one or more finally tasks failed | false              | Failed
single failure of PipelineTask                        | all finally tasks successful     | false              | Failed
single failure of PipelineTask                        | one or more finally tasks failed | false              | Failed

Overall, PipelineRun state transitioning is explained below for respective scenarios:

  • All PipelineTask and finally tasks are successful: Started -> Running -> Succeeded
  • At least one PipelineTask skipped and rest successful: Started -> Running -> Completed
  • One PipelineTask failed / one or more finally tasks failed: Started -> Running -> Failed

Please refer to the table under Monitoring Execution Status to learn about what kind of events are triggered based on the PipelineRun status.

Using Execution Status of pipelineTask

A pipeline can check the status of a specific pipelineTask from the tasks section in finally through the task parameters:

finally:
  - name: finaltask
    params:
      - name: task1Status
        value: "$(tasks.task1.status)"
    taskSpec:
      params:
        - name: task1Status
      steps:
        - image: ubuntu
          name: print-task-status
          script: |
            if [ $(params.task1Status) == "Failed" ]
            then
              echo "Task1 has failed, continue processing the failure"
            fi            

This kind of variable can have any one of the values from the following table:

Status    | Description
Succeeded | the taskRun for the pipelineTask completed successfully
Failed    | the taskRun for the pipelineTask completed with a failure or was cancelled by the user
None      | the pipelineTask was skipped, or no execution information is available for it

For an end-to-end example, see status in a PipelineRun.

Using Aggregate Execution Status of All Tasks

A pipeline can check the aggregate status of all the PipelineTasks in the tasks section in finally through the task parameters:

finally:
  - name: finaltask
    params:
      - name: aggregateTasksStatus
        value: "$(tasks.status)"
    taskSpec:
      params:
        - name: aggregateTasksStatus
      steps:
        - image: ubuntu
          name: check-task-status
          script: |
            if [ $(params.aggregateTasksStatus) == "Failed" ]
            then
              echo "Looks like one or more tasks returned failure, continue processing the failure"
            fi            

This kind of variable can have any one of the values from the following table:

Status    | Description
Succeeded | all tasks have succeeded
Failed    | one or more tasks failed
Completed | all tasks completed successfully, including one or more skipped tasks
None      | no aggregate execution status available (i.e. none of the above); one or more tasks could be pending, running, cancelled, or timed out

For an end-to-end example, see $(tasks.status) usage in a Pipeline.

Guard finally Task execution using when expressions

Similar to Tasks, finally Tasks can be guarded using when expressions that operate on static inputs or variables. Like in Tasks, when expressions in finally Tasks can operate on Parameters and Results. Unlike in Tasks, when expressions in finally tasks can also operate on the Execution Status of Tasks.

when expressions using Parameters in finally Tasks

when expressions in finally Tasks can utilize Parameters as demonstrated using golang-build and send-to-channel-slack Catalog Tasks:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: pipelinerun-
spec:
  pipelineSpec:
    params:
      - name: enable-notifications
        type: string
        description: a boolean indicating whether the notifications should be sent
    tasks:
      - name: golang-build
        taskRef:
          name: golang-build
      # […]
    finally:
      - name: notify-build-failure # executed only when build task fails and notifications are enabled
        when:
          - input: $(tasks.golang-build.status)
            operator: in
            values: ["Failed"]
          - input: $(params.enable-notifications)
            operator: in
            values: ["true"]
        taskRef:
          name: send-to-slack-channel
      # […]
  params:
    - name: enable-notifications
      value: true

when expressions using Results in finally Tasks

when expressions in finally tasks can utilize Results, as demonstrated using git-clone and github-add-comment Catalog Tasks:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: pipelinerun-
spec:
  pipelineSpec:
    tasks:
      - name: git-clone
        taskRef:
          name: git-clone
      - name: go-build
      # […]
    finally:
      - name: notify-commit-sha # executed only when commit sha is not the expected sha
        when:
          - input: $(tasks.git-clone.results.commit)
            operator: notin
            values: [$(params.expected-sha)]
        taskRef:
          name: github-add-comment
      # […]
  params:
    - name: expected-sha
      value: 54dd3984affab47f3018852e61a1a6f9946ecfa

If the when expressions in a finally task use Results from a skipped or failed non-finally Task, the finally task is also skipped and included in the list of Skipped Tasks in the Status, just as when Results are used in other parts of the finally task.

when expressions using Execution Status of PipelineTask in finally tasks

when expressions in finally tasks can utilize Execution Status of PipelineTasks, as demonstrated using golang-build and send-to-channel-slack Catalog Tasks:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: pipelinerun-
spec:
  pipelineSpec:
    tasks:
      - name: golang-build
        taskRef:
          name: golang-build
      # […]
    finally:
      - name: notify-build-failure # executed only when build task fails
        when:
          - input: $(tasks.golang-build.status)
            operator: in
            values: ["Failed"]
        taskRef:
          name: send-to-slack-channel
      # […]

For an end-to-end example, see PipelineRun with when expressions.

when expressions using Aggregate Execution Status of Tasks in finally tasks

when expressions in finally tasks can utilize Aggregate Execution Status of Tasks as demonstrated:

finally:
  - name: notify-any-failure # executed only when one or more tasks fail
    when:
      - input: $(tasks.status)
        operator: in
        values: ["Failed"]
    taskRef:
      name: notify-failure

For an end-to-end example, see PipelineRun with when expressions.

Known Limitations

Specifying Resources in finally tasks

Similar to tasks, you can use PipelineResources as inputs and outputs for finally tasks in the Pipeline. The only difference is that a finally task with an input resource cannot have a from clause, unlike a PipelineTask in the tasks section. For example:

spec:
  tasks:
    - name: tests
      taskRef:
        name: integration-test
      resources:
        inputs:
          - name: source
            resource: tektoncd-pipeline-repo
        outputs:
          - name: workspace
            resource: my-repo
  finally:
    - name: clear-workspace
      taskRef:
        name: clear-workspace
      resources:
        inputs:
          - name: workspace
            resource: my-repo
            from: #invalid
              - tests

⚠️ PipelineResources are deprecated.

Consider using replacement features instead. Read more in documentation and TEP-0074.

Cannot configure the finally task execution order

It’s not possible to configure or modify the execution order of the finally tasks. Unlike Tasks in a Pipeline, all finally tasks run simultaneously and start executing once all PipelineTasks under tasks have settled, which means no runAfter can be specified in finally tasks.
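
For example, the following sketch is invalid, since runAfter has no meaning for finally tasks (the task names are illustrative):

spec:
  tasks:
    - name: tests
      taskRef:
        name: integration-test
  finally:
    - name: notify
      runAfter: # invalid: the execution order of finally tasks cannot be configured
        - cleanup
      taskRef:
        name: send-notification
    - name: cleanup
      taskRef:
        name: cleanup-workspace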

Using Custom Tasks

Custom Tasks have been promoted from v1alpha1 to v1beta1. Starting from v0.43.0, Pipeline Controller is able to create either v1alpha1 or v1beta1 Custom Task gated by a feature flag custom-task-version, defaulting to v1beta1. You can set custom-task-version to v1alpha1 or v1beta1 to control which version to create.

We’ll remove the flag custom-task-version and the entire alpha Custom Task in release v0.47.0 (tracked in the issue #5837). See the migration doc for details.

Custom Tasks can implement behavior that doesn’t correspond directly to running a workload in a Pod on the cluster. For example, a custom task might execute some operation outside of the cluster and wait for its execution to complete.

A PipelineRun starts a custom task by creating a Run/CustomRun instead of a TaskRun. In order for a custom task to execute, there must be a custom task controller running on the cluster that is responsible for watching and updating Run/CustomRuns which reference their type. If no such controller is running, those Run/CustomRuns will never complete and Pipelines using them will time out.

Specifying the target Custom Task

To specify the custom task type you want to execute, the taskRef field must include the custom task’s apiVersion and kind as shown below:

spec:
  tasks:
    - name: run-custom-task
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example

This creates a Run/CustomRun of a custom task of type Example in the example.dev API group with the version v1alpha1.
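
As a rough sketch of what the controller creates in the v1alpha1.Run case (fields abbreviated; the generated name shown is illustrative), the Run references the custom task type via spec.ref:

apiVersion: tekton.dev/v1alpha1
kind: Run
metadata:
  name: pipelinerun-run-custom-task # generated by the controller; illustrative
spec:
  ref:
    apiVersion: example.dev/v1alpha1
    kind: Example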

You can also specify the name of a custom task resource object previously defined in the cluster.

spec:
  tasks:
    - name: run-custom-task
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
        name: myexample

If the taskRef specifies a name, the custom task controller should look up the Example resource with that name and use that object to configure the execution.

If the taskRef does not specify a name, the custom task controller might support some default behavior for executing unnamed tasks.

Specifying a Custom Task Spec in-line (or embedded)

For v1alpha1.Run

spec:
  tasks:
    - name: run-custom-task
      taskSpec:
        apiVersion: example.dev/v1alpha1
        kind: Example
        spec:
          field1: value1
          field2: value2

For v1beta1.CustomRun

spec:
  tasks:
    - name: run-custom-task
      taskSpec:
        apiVersion: example.dev/v1alpha1
        kind: Example
        customSpec:
          field1: value1
          field2: value2

If the custom task controller supports the in-line or embedded task spec, this will create a Run/CustomRun of a custom task of type Example in the example.dev API group with the version v1alpha1.

If the taskSpec is not supported, the custom task controller should produce proper validation errors.

Please take a look at the developer guide for custom controllers supporting taskSpec.

taskSpec support for PipelineRun was designed and discussed in TEP-0061.

Specifying parameters

If a custom task supports parameters, you can use the params field to specify their values:

spec:
  tasks:
    - name: run-custom-task
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
        name: myexample
      params:
        - name: foo
          value: bah

Specifying matrix

🌱 Matrix is an alpha feature. The enable-api-fields feature flag must be set to "alpha" to specify Matrix in a PipelineTask.

⚠️ This feature is in a preview mode. It is still in a very early stage of development and is not yet fully functional.

If a custom task supports parameters, you can use the matrix field to specify their values if you want to fan out the custom task into multiple Runs or CustomRuns:

spec:
  tasks:
    - name: run-custom-task
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
        name: myexample
      params:
        - name: foo
          value: bah
      matrix:
        params:
        - name: bar
          value:
            - qux
            - thud

For further information, read Matrix.

Specifying workspaces

If the custom task supports it, you can provide Workspaces to share data with the custom task.

spec:
  tasks:
    - name: run-custom-task
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
        name: myexample
      workspaces:
        - name: my-workspace

Consult the documentation of the custom task that you are using to determine whether it supports workspaces and how to name them.

Using Results

If the custom task produces results, you can reference them in a Pipeline using the normal syntax, $(tasks.<task-name>.results.<result-name>).
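
For instance, assuming the Example custom task emits a result named current-date (a hypothetical result name), a later Task could consume it as a parameter:

spec:
  tasks:
    - name: run-custom-task
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
        name: myexample
    - name: print-date
      params:
        - name: date
          value: $(tasks.run-custom-task.results.current-date) # result name is hypothetical
      taskRef:
        name: print-params # illustrative Task name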

Specifying Timeout

v1alpha1.Run

If the custom task supports it, as we recommend, you can provide timeout to specify the maximum running time of a Run (including all retry attempts or other operations).

v1beta1.CustomRun

If the custom task supports it, as we recommend, you can provide timeout to specify the maximum running time of one CustomRun execution.

spec:
  tasks:
    - name: run-custom-task
      timeout: 2s
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
        name: myexample

Consult the documentation of the custom task that you are using to determine whether it supports Timeout.

Specifying Retries

If the custom task supports it, you can provide retries to specify how many times you want to retry the custom task.

spec:
  tasks:
    - name: run-custom-task
      retries: 2
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
        name: myexample

Consult the documentation of the custom task that you are using to determine whether it supports Retries.

Limitations

Pipelines do not support the following items with custom tasks:

  • Pipeline Resources

Known Custom Tasks

We try to list as many known Custom Tasks as possible here so that users can easily find what they want. Please feel free to share the Custom Task you implemented in this table.

v1beta1.CustomRun

Custom Task    | Description
Wait Task Beta | Waits a given amount of time, specified by an input parameter named duration, before succeeding. Supports timeout and retries.

v1alpha1.Run

Custom Task                | Description
Pipeline Loops             | Runs a Pipeline in a loop with varying Parameter values.
Common Expression Language | Provides Common Expression Language support in Tekton Pipelines.
Wait                       | Waits a given amount of time, specified by a Parameter named “duration”, before succeeding.
Approvals                  | Pauses the execution of PipelineRuns and waits for manual approvals.
Pipelines in Pipelines     | Defines and executes a Pipeline in a Pipeline.
Task Group                 | Groups Tasks together as a Task.
Pipeline in a Pod          | Runs a Pipeline in a Pod.

Code examples

For a better understanding of Pipelines, study our code examples.


Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License.