Workflows
Orchestrate actions into complete automation pipelines, from raw inputs to final results.
Why?
In mechanical engineering, a single part drawing is not enough to build a machine. You need an assembly drawing: a document that defines which parts go where, in what order they are assembled, and how they relate to each other. The assembly drawing turns a box of individual components into a functioning system. Without it, even the best machined parts sit idle on a workbench.
The same principle applies to computational workflows. Actions are your individual parts: a file converter, a simulation runner, a data uploader. But parts alone do not produce results. You need a plan that says: first, check out the model files; then, generate simulation inputs; next, run all simulations in parallel; finally, convert and upload the results.
Workflows in Pencilroads are your assembly drawings. They define the sequence, the dependencies, and the parameters for a complete automation pipeline.
Getting Started
Workflows live inside your project's .pencilroads/workflows/ directory. Each workflow is a YAML file that defines a name, optional triggers, environment variables, and a set of jobs.
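For orientation, a project with two workflows might be laid out like this (the file names are illustrative):

```
.pencilroads/
└── workflows/
    ├── simulate.yml
    └── report.yml
```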
Basic structure
Here is a minimal workflow that checks out project files and runs a simulation:
```yaml
# .pencilroads/workflows/simulate.yml
name: Run simulation

jobs:
  simulate:
    name: Run analysis
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout
      - name: Run solver
        uses: my-org/fem-solver
        with:
          input: models/bridge.inp
          output: results/bridge.out
```

The top-level `name` field identifies the workflow. The `jobs` block contains one or more jobs, each with a sequence of steps.
Environment variables
Define variables at the workflow level to share configuration across all jobs and steps:
```yaml
name: Wind turbine analysis

env:
  DRIVE_PATH: /drive
  DLC_FILE: /drive/dlcs.txt
  SIMS_DIR: /drive/sims

jobs:
  pre:
    name: Preprocessing
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout
      - name: Generate inputs
        uses: qblade/qblade
        shell: bash
        run: |
          echo "Using DLC file: $DLC_FILE"
          mkdir $SIMS_DIR
          ./generator -cli $SIMS_DIR dlc=$DLC_FILE
```

Environment variables defined at the workflow level are available in every job. Individual steps can also define their own `env` block to add or override variables.
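As an illustration of a step-level override (the alternate file path here is hypothetical), a step can shadow a workflow-level variable for its own commands only:

```yaml
steps:
  - name: Run with an alternate DLC file
    shell: bash
    env:
      DLC_FILE: /drive/dlcs-fatigue.txt  # overrides the workflow-level value for this step only
    run: echo "Using DLC file: $DLC_FILE"
```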
Jobs and Steps
A workflow is made of jobs, and each job is made of steps. Think of jobs as the major stages of your process (preprocessing, simulation, post-processing) and steps as the individual operations within each stage.
Job properties
Each job requires a name, a runtime environment, and a list of steps:
```yaml
jobs:
  analysis:
    name: Structural analysis
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout
      - name: Run FEM solver
        uses: my-org/fem-solver
        with:
          model: models/tower.inp
```

The `runs-on` field specifies the compute environment. Currently, the only supported value is `ubuntu-latest`.
Step types
Steps can either reference an action or run inline commands:
```yaml
steps:
  # Reference an action
  - name: Convert results
    uses: qblade/converter
    with:
      input: results/sim.out

  # Run inline commands
  - name: List output files
    shell: bash
    run: |
      echo "Listing results..."
      ls -la results/

  # Run a Python script inline
  - name: Analyze data
    shell: python
    run: |
      import json
      import glob

      files = sorted(glob.glob('results/*.parquet'))
      print(f"Found {len(files)} result files")
      with open('/tmp/manifest.txt', 'w') as f:
          f.write(json.dumps(files))
```

The `shell` field supports `bash`, `sh`, and `python`. When using `uses`, the step runs inside the action's container. When using `run`, the inline script executes in the specified shell.
Working directory
Steps can specify a working-directory to control where commands execute:
```yaml
- name: Process results
  shell: bash
  working-directory: /drive/output
  run: |
    echo "Processing files in $(pwd)"
    ls *.parquet
```

Using Actions
Steps reference actions with the uses keyword, following the org/project pattern. Pass inputs with the with block:
```yaml
- name: Convert simulation output
  uses: qblade/converter
  with:
    input: results/simulation.out

- name: Upload timeseries
  uses: actions/timeseries-upload
  with:
    PARQUET_INPUT_FILE: results/output.parquet
    PARQUET_TIME_NAME: "Time~[s]"
    PARQUET_TIME_UNITS: "s"
```

Public actions can be used by any project. Private actions can only be used by projects within the same organization. See the Actions documentation for details on versioning with tags.
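If Pencilroads follows the common `org/project@tag` convention for action versioning, pinning a release might look like the following sketch. The `@v1` tag is an assumption for illustration; consult the Actions documentation for the actual tag syntax:

```yaml
- name: Convert simulation output
  uses: qblade/converter@v1  # hypothetical tag; see the Actions docs for real versioning syntax
  with:
    input: results/simulation.out
```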
Overriding entrypoint and args
For Docker-based actions, you can override the container's entrypoint and arguments:
```yaml
- name: Convert output
  uses: my-org/converter
  with:
    entrypoint:
      - uv
      - run
      - python
    args:
      - main.py
    input: results/sim.out
```

Job Dependencies
Jobs run independently by default. Use the needs keyword to create dependencies between jobs, ensuring one completes before another begins:
```yaml
jobs:
  pre:
    name: Preprocessing
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout
      - name: Generate inputs
        uses: qblade/qblade
        shell: bash
        run: ./generate-inputs.sh

  simulate:
    name: Run simulations
    needs: pre
    runs-on: ubuntu-latest
    steps:
      - name: Run solver
        uses: qblade/qblade
        shell: bash
        run: ./run-solver.sh

  post:
    name: Post-processing
    needs: simulate
    runs-on: ubuntu-latest
    steps:
      - name: Generate report
        uses: my-org/report-gen
        with:
          input_dir: results/
```

In this example, `simulate` waits for `pre` to finish, and `post` waits for `simulate`. This creates a sequential pipeline: preprocessing, then simulation, then post-processing.
A job can depend on multiple jobs by passing an array:
```yaml
post:
  name: Aggregate results
  needs:
    - simulate-onshore
    - simulate-offshore
  runs-on: ubuntu-latest
  steps: [...]
```

Outputs
Jobs and steps can produce outputs that downstream jobs consume. This is how data flows between stages of your pipeline.
Step outputs
A step declares outputs that are written to a file. Other steps or jobs can reference these values:
```yaml
- name: Get list of sim files
  id: get-sims-list
  shell: python
  run: |
    import os, glob, json

    sim_files = sorted(glob.glob('/drive/sims/**/*.sim', recursive=True))
    with open('/tmp/simlist.txt', 'w') as f:
        f.write(json.dumps(sim_files))
    print(f"Found {len(sim_files)} simulation files")
  outputs:
    files:
      path: "/tmp/simlist.txt"
```

The `id` field gives the step an identifier. The `outputs` block maps output names to file paths containing the output data.
Job outputs
To make step outputs available to other jobs, declare them at the job level:
```yaml
jobs:
  pre:
    name: Preprocessing
    runs-on: ubuntu-latest
    outputs:
      sims-list: ${{ steps.get-sims-list.outputs.files }}
    steps:
      - name: Get sim files
        id: get-sims-list
        # ... step that produces the output

  run:
    name: Run simulations
    needs: pre
    runs-on: ubuntu-latest
    steps:
      - name: Use the list
        env:
          FILES: ${{ needs.pre.outputs.sims-list }}
        run: echo "Sim files: $FILES"
```

The expression syntax `${{ steps.<id>.outputs.<name> }}` references step outputs within the same job. Use `${{ needs.<job>.outputs.<name> }}` to reference outputs from a dependency job.
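To make the expression syntax concrete, here is a minimal Python sketch of how `${{ ... }}` placeholders can be resolved against a context of collected outputs. This illustrates the substitution semantics only; it is not the Pencilroads runner's actual implementation, and the context values are hypothetical:

```python
import re

def interpolate(text: str, context: dict) -> str:
    """Replace ${{ dotted.path }} placeholders with values from a nested dict."""
    def resolve(match: re.Match) -> str:
        value = context
        for key in match.group(1).strip().split('.'):
            value = value[key]  # raises KeyError if the path is unknown
        return str(value)
    return re.sub(r'\$\{\{(.*?)\}\}', resolve, text)

# Outputs collected from a dependency job (hypothetical values)
context = {
    "needs": {"pre": {"outputs": {"sims-list": "/tmp/simlist.txt"}}},
}

print(interpolate("Sim files: ${{ needs.pre.outputs.sims-list }}", context))
# → Sim files: /tmp/simlist.txt
```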
Matrix Strategies
Matrix strategies are where workflows become truly powerful for engineering work. Instead of running one simulation at a time, you define a matrix of parameters and Pencilroads runs all combinations in parallel.
Consider a Design Load Case (DLC) campaign for a wind turbine. You need to simulate across multiple wind speeds, each producing its own result set. Without a matrix, you would write a separate job for each wind speed. With a matrix, you write the job once:
```yaml
jobs:
  pre:
    name: Preprocessing
    runs-on: ubuntu-latest
    outputs:
      sims-list: ${{ steps.find-sims.outputs.files }}
    steps:
      - name: Checkout
        uses: actions/checkout
      - name: Generate simulation files
        uses: qblade/qblade
        shell: bash
        run: |
          mkdir /drive/sims
          ./QBladeEE -cli /drive/sims dlc=/drive/dlcs.txt
      - name: Find all sim files
        id: find-sims
        shell: python
        run: |
          import glob, json

          sims = sorted(glob.glob('/drive/sims/**/*.sim', recursive=True))
          with open('/tmp/simlist.txt', 'w') as f:
              f.write(json.dumps(sims))
        outputs:
          files:
            path: "/tmp/simlist.txt"

  run:
    name: Run simulations
    needs: pre
    runs-on: ubuntu-latest
    strategy:
      matrix:
        sim-file: ${{ needs.pre.outputs.sims-list }}
    steps:
      - name: Run simulation
        uses: qblade/qblade
        shell: bash
        env:
          CURRENT_FILE: ${{ matrix.sim-file }}
        run: ./QBladeEE -cli $CURRENT_FILE
```
The `strategy.matrix` block defines the parameter space. Each value in the matrix spawns an independent job instance that runs in parallel. In this example, if preprocessing produces six simulation files, six parallel jobs run simultaneously, each processing one file.
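The fan-out behavior can be sketched in plain Python: a JSON-encoded list, like the one the preprocessing step writes to `/tmp/simlist.txt`, expands into one independent job per element. This illustrates the semantics only, not how the Pencilroads scheduler is implemented, and the file paths are hypothetical:

```python
import json
from concurrent.futures import ThreadPoolExecutor

# The matrix source: a JSON list, as produced by the preprocessing step
sims_list = json.dumps([
    "/drive/sims/dlc12_a.sim",
    "/drive/sims/dlc12_b.sim",
    "/drive/sims/dlc12_c.sim",
])

def run_job(sim_file: str) -> str:
    # Stand-in for one matrix job instance (e.g. './QBladeEE -cli <file>')
    return f"simulated {sim_file}"

# Each matrix value spawns an independent job instance; they run in parallel
with ThreadPoolExecutor() as pool:
    results = list(pool.map(run_job, json.loads(sims_list)))

print(results)
```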
Matrix values can come from a previous job's output (as shown above) or be defined inline:
```yaml
strategy:
  matrix:
    wind-speed: ['5', '7', '9', '11', '13', '15']
```

Triggers
The on block defines when a workflow runs automatically. Currently, workflows can be triggered by file commits:
```yaml
on:
  commit:
    paths:
      - 'sim/**'    # Simulation template and controller parameters
      - 'dlcs.txt'  # Design load case definitions
      - '*.so'      # Controller shared libraries
```

When you commit changes to any file matching these glob patterns, Pencilroads automatically triggers the workflow. This is useful for iterative design loops: modify your DLC definitions, commit the file, and a full simulation campaign runs without manual intervention.
Paths support standard glob wildcards: `*` matches any file in a directory, `**` matches files recursively through subdirectories.
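The difference between `*` and `**` can be demonstrated with a simplified matcher in Python. This is an illustration of the two wildcards' semantics, not the exact pattern engine Pencilroads uses:

```python
import re

def glob_to_regex(pattern: str) -> re.Pattern:
    """Translate a simplified glob: '**' crosses '/' boundaries, '*' does not."""
    parts = []
    i = 0
    while i < len(pattern):
        if pattern[i:i + 2] == '**':
            parts.append('.*')        # matches across subdirectories
            i += 2
        elif pattern[i] == '*':
            parts.append('[^/]*')     # stays within one directory level
            i += 1
        else:
            parts.append(re.escape(pattern[i]))
            i += 1
    return re.compile('^' + ''.join(parts) + '$')

print(bool(glob_to_regex('sim/**').match('sim/controller/params.txt')))  # → True
print(bool(glob_to_regex('*.so').match('lib/controller.so')))            # → False: '*' does not cross '/'
```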
Workflows without an on block can still be triggered manually from the Pencilroads interface.
Artifacts and Results
Workflow outputs are stored as artifacts in cloud storage. The most common artifact type for engineering workflows is timeseries data stored in Parquet format. These files can contain thousands of sensor channels (forces, moments, displacements, angles) recorded across thousands of time steps.
The typical pipeline for simulation results is:
- Simulate -- Run the solver, producing raw output files
- Convert -- Transform raw output into Parquet format using a converter action
- Upload -- Store the Parquet file and register its metadata using the timeseries upload action
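Put together, a minimal sketch of the convert-and-upload stage might look like the following. The action names reuse examples from earlier on this page; the exact file paths and input values are illustrative:

```yaml
results:
  name: Convert and upload results
  needs: simulate
  runs-on: ubuntu-latest
  steps:
    - name: Convert raw output to Parquet
      uses: qblade/converter
      with:
        input: results/simulation.out
    - name: Upload timeseries
      uses: actions/timeseries-upload
      with:
        PARQUET_INPUT_FILE: results/output.parquet
        PARQUET_TIME_NAME: "Time~[s]"
        PARQUET_TIME_UNITS: "s"
```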
Once uploaded, timeseries data is available in the Plots module, where you can visualize and compare results across workflow runs.
Permissions
Jobs can declare fine-grained permissions to control what resources they can access:
```yaml
jobs:
  build:
    name: Build and publish
    runs-on: ubuntu-latest
    permissions:
      contents: read   # Read project files
      packages: write  # Push container images
    steps: [...]
```

Available permission scopes are `contents`, `packages`, and `issues`. Each can be set to `read`, `write`, or `none`.
Complete Example
Here is a complete workflow for a wind turbine DLC campaign. It preprocesses simulation inputs, runs all simulations in parallel, converts the results, and uploads them for visualization:
NREL 5MW Onshore Demo Project

Conclusion
Workflows are the assembly drawings of Pencilroads. They turn individual actions into complete automation pipelines that run reliably, scale across parameter matrices, and produce traceable results.
With triggers, your workflows respond to file changes automatically. With matrix strategies, a single workflow definition can launch hundreds of parallel simulations. With outputs and artifacts, every result is stored, structured, and ready for analysis.
The results of your workflow runs—especially timeseries data—flow directly into the Plots module, where you can visualize sensor channels, compare runs, and build dashboards without exporting data manually.
Continue to the documentation to explore other topics, including how to visualize and analyze your workflow results with the Plots module.