runSh is a job that lets you run any shell script as part of your DevOps Assembly Line. It is one of the most versatile job types available and can be used to execute practically any DevOps activity that can be scripted. Combined with INs like params, integration, and gitRepo resources, it helps realize the vision of "Everything as Code".

You should use this job type if you need flexibility that pre-packaged jobs like deploy and manifest do not provide, or if they do not support the third-party endpoint you want to integrate with. For example, pushing to Heroku is not yet natively supported through a managed job type, so you can write the scripts needed to do this and add them to your workflow as a job of type runSh.

You can also add cliConfig resources as inputs to this job. The relevant CLI tools will be preconfigured for your scripts to use. For a complete list of supported cliConfig integrations see here.

A new version is created every time this job is executed.

You can create a runSh job by adding it to shippable.yml, and it executes on Shippable-provided Dynamic Nodes or Custom Nodes.

YML Definition

  jobs:
    - name:             <string>
      type:             runSh
      triggerMode:      <parallel/serial>
      on_start:
        - NOTIFY:       <notification resource name>
      steps:
        - IN:           <resource>
          switch:       off
        - IN:           <job>
        - IN:           <resource>
          versionName:  <name of the version you want to pin>
        - IN:           <resource>
          versionNumber: <number of the version you want to pin>
        - IN:           <gitRepo resource with buildOnPullRequest: true>
          showBuildStatus: true
        - IN:           <cliConfig with scope support>
          scopes:
            - <scope that you want configured>
        - TASK:
          - script:     <any shell command>
          - script:     <any shell command>
        - OUT:          <resource>
        - OUT:          <resource>
          replicate:    <IN resource>
        - OUT:          <resource>
          overwrite:    true
      on_success:
        - script:       echo "SUCCESS"
      on_failure:
        - script:       echo "FAILED"
        - NOTIFY:       <notification resource name>
      on_cancel:
        - script:       echo "CANCEL"
      always:
        - script:       pwd

A description of the job YML structure and the tags available is in the jobs section of the anatomy of shippable.yml page.

  • name -- Required, an easy-to-remember text string

  • type -- Required, is set to runSh

  • triggerMode -- Optional, can be parallel or serial; defaults to serial. When set to serial, if this job is triggered multiple times, the resulting builds are processed one at a time. When set to parallel, the builds can run at the same time, up to the number of minions available to the subscription. Note that parallel mode can result in unpredictable behavior with regard to the job's state information.

  • on_start -- Optional, and both script and NOTIFY types can be used

  • steps -- Required, an object which contains specific instructions to run this job

    • IN -- Optional, any resource or job can be used here, with as many IN resources and jobs as you need. The switch, versionName, versionNumber, and showBuildStatus options are supported; however, applyTo is not.

    • TASK -- Required, at least one script line needs to be present

      • - script: -- a line of bash script to be executed
    • OUT -- Optional, any resource can be used here and as many as you need
      • replicate -- Optional, any IN resource of same type can be used
      • overwrite -- Optional, default is false
  • on_success -- Optional, and both script and NOTIFY types can be used

  • on_failure -- Optional, and both script and NOTIFY types can be used

  • on_cancel -- Optional, and both script and NOTIFY types can be used

  • always -- Optional, and both script and NOTIFY types can be used
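Putting these tags together, here is a minimal sketch of a runSh job. The resource names app-repo and slack-notify, the job name, and the Heroku remote URL are all hypothetical placeholders:

```yaml
jobs:
  - name: push_to_heroku
    type: runSh
    triggerMode: serial
    steps:
      - IN: app-repo              # hypothetical gitRepo resource
      - TASK:
        - script: echo "Pushing to Heroku"
        - script: git push https://git.heroku.com/my-app.git master
    on_failure:
      - NOTIFY: slack-notify      # hypothetical notification resource
```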

cliConfig special handling

If a resource of type cliConfig is added as an IN to runSh, the corresponding CLI is automatically configured and prepared for you to execute CLI-specific commands. The job uses the subscription integration specified in the cliConfig to determine which CLI tools to configure. For example, if you use a cliConfig based on a Docker integration, we will automatically log you in to the registry based on that configuration, removing the need for you to do this manually.

Here is a list of the tools configured for each integration type:

| Integration Type | Configured Tools |
| --- | --- |
| AWS Keys | AWS & Elastic Beanstalk |
| AWS Keys with ECR scope | Docker |
| Azure | Azure |
| Docker Registry | Docker |
| Google Cloud | Google Cloud & Kubectl |
| Google Cloud with GKE scope | Google Cloud & Kubectl |
| Google Cloud with GCR scope | Docker |
| JFrog | JFrog |
| Kubernetes | Kubectl |
| Quay | Docker |
| For all integrations above | Packer & Terraform |

Note: Google Cloud with the gke scope is used to set the cluster name and region. For all other Google Cloud integration types (with no scope, or with the gcr scope), the cluster name and region are ignored.
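As a sketch, assume a hypothetical cliConfig resource named my-docker-cli backed by a Docker Registry integration; because the job performs the registry login automatically, the TASK scripts can invoke docker directly (the image name myorg/myapp is a placeholder):

```yaml
jobs:
  - name: push_image
    type: runSh
    steps:
      - IN: my-docker-cli         # hypothetical cliConfig resource
      - TASK:
        - script: docker build -t myorg/myapp:latest .
        - script: docker push myorg/myapp:latest
```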

Default Environment Variables

In order to make it easier to write your scripts and work with IN and OUT resources, we have made several environment variables available for use within the TASK section of your runSh job. Visit the resource page for each type to get the list of environment variables that are set when a resource is included as an IN or OUT.

In addition, the job itself comes with its own default set of variables. This is the list for this job type:

| Environment variable | Description |
| --- | --- |
| JOB_NAME | The name of the job, as given in the YML |
| JOB_TYPE | The type of the job; in this case, runSh |
| BUILD_ID | Internal ID of the currently executing build |
| BUILD_NUMBER | Sequential number of the currently executing build |
| BUILD_JOB_ID | Internal ID of the currently running job |
| BUILD_JOB_NUMBER | Sequential number of the job |
| SUBSCRIPTION_ID | Shippable ID that uniquely represents the git organization |
| JOB_PATH | The path of the directory containing files critical for this job |
| JOB_STATE | The location of the state directory for this job |
| JOB_PREVIOUS_STATE | The location of the directory containing the state information from when the job last ran |
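For example, JOB_STATE and JOB_PREVIOUS_STATE can be used to carry information from one run of the job to the next. A minimal sketch in bash; the file name build.env and the key LAST_DEPLOYED are hypothetical choices, not platform conventions:

```shell
#!/bin/bash
# Sketch: persisting a key/value between runs of this job via the
# JOB_STATE and JOB_PREVIOUS_STATE directories set by the platform.

save_state() {
  # Write state for the next run of this job (file name is our choice).
  echo "LAST_DEPLOYED=$1" > "$JOB_STATE/build.env"
}

load_previous_state() {
  # Read the value saved by the previous run, defaulting when absent.
  if [ -f "$JOB_PREVIOUS_STATE/build.env" ]; then
    . "$JOB_PREVIOUS_STATE/build.env"
  fi
  echo "${LAST_DEPLOYED:-none}"
}
```

In a TASK section you might call `save_state "$BUILD_NUMBER"` as the last step of a successful run, and `load_previous_state` at the start of the next.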

Shippable Utility Functions

To make it easy to use these environment variables, the platform provides a command line utility that can be used to work with these values.

How to use these utility functions is documented here.

Further Reading