Workflows API

REST endpoints for workflows — CRUD, version management, run, cancel, retry, resume, executions, logs (SSE), artifacts, DLQ.

Every workflow run passes through the Workflows API: agent dispatches, CI pipelines, trigger-rule actions, chat sessions, and scheduled jobs all start executions here. Mounted at `/api/v1/workflows`; implementation in `internal/workflows/`.

Workflow CRUD

| Method · Path | Purpose | Permission |
| --- | --- | --- |
| `GET /api/v1/workflows` | List with filters | `workflows.read` |
| `POST /api/v1/workflows` | Create | `workflows.create` |
| `GET /api/v1/workflows/{id}` | Read with active version | `workflows.read` |
| `PATCH /api/v1/workflows/{id}` | Update name/description/`runtime_config` | `workflows.edit` |
| `DELETE /api/v1/workflows/{id}` | Soft-delete (archives all versions) | `workflows.edit` |

A workflow row is one logical pipeline; revisions are stored in `workflow_version`, with `active_version` pointing at the revision that runs.

Versioning

| Method · Path | Purpose |
| --- | --- |
| `GET /api/v1/workflows/{id}/versions` | List all versions |
| `GET /api/v1/workflows/{id}/versions/{version}` | Read a specific version’s graph + metadata |
| `POST /api/v1/workflows/{id}/versions/{version}/restore` | Create a new draft from an old version’s graph |
| `POST /api/v1/workflows/{id}/publish` | Mark the current draft as published and set it as `active_version` |
| `POST /api/v1/workflows/{id}/promote` | Set an existing published version as `active_version` |
| `GET /api/v1/workflows/{id}/export` | Export the active version as YAML |
| `POST /api/v1/workflows/import` | Import a YAML body; creates a new workflow |

Only published versions can fire on triggers/schedules; drafts run only via an explicit `POST /run`. Note that import lives at the workflow group root, not under `/{id}`: it creates a new workflow rather than replacing one.

Running and controlling

Per-execution endpoints require both the workflow `{id}` and the execution `{execId}` in the path; apart from the cross-workflow listing, there is no shortcut without the workflow context.

| Method · Path | Purpose |
| --- | --- |
| `POST /api/v1/workflows/{id}/run` | Start an execution; body is `{"params": {...}}` matching the manual trigger’s inputs |
| `GET /api/v1/workflows/executions` | List all executions across workflows (filterable) |
| `GET /api/v1/workflows/{id}/executions/{execId}` | Read execution detail with node states + cost |
| `DELETE /api/v1/workflows/{id}/executions` | Bulk-delete executions for a workflow |
| `DELETE /api/v1/workflows/{id}/executions/{execId}` | Delete a single execution record |
| `POST /api/v1/workflows/{id}/executions/{execId}/cancel` | Cancel a running execution |
| `POST /api/v1/workflows/{id}/executions/{execId}/retry` | Retry from the last failed node, keeping prior outputs |
| `POST /api/v1/workflows/{id}/executions/{execId}/resume` | Resume a paused execution (HITL approval, etc.) |
| `POST /api/v1/workflows/{id}/executions/{execId}/followup` | Continue an interactive workflow with a follow-up message |
| `GET /api/v1/workflows/{id}/usage` | Aggregate usage stats (token + compute) for this workflow |
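The run body carries the manual trigger’s inputs as a flat `params` object. A sketch of such a body; the parameter names here are illustrative, not part of the API:

```json
{
  "params": {
    "branch": "main",
    "dry_run": false
  }
}
```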

`POST /run` returns `202 Accepted` immediately with the execution ID. Stream progress via:

```shell
curl -N "http://localhost:3000/api/v1/workflows/$WORKFLOW_ID/executions/$EXEC_ID/stream" \
  -H "Authorization: Bearer $PFAI_TOKEN"
```

The SSE stream (`GET .../{execId}/stream`) emits one event per node state change plus periodic heartbeats. Reconnect with `?from=<seq>` to pick up where you left off.
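A client that wants to survive dropped connections has to remember the last sequence it saw and hand it back as `?from=`. A minimal sketch of the tracking half, assuming the sequence number rides in the standard SSE `id:` field (an assumption; check the stream's actual framing):

```shell
# Track the last event sequence seen on an SSE stream so a dropped
# connection can be resumed with ?from=<seq>. Assumes each event's
# sequence number arrives in the standard SSE "id:" field.
# A captured sample is inlined; in practice, read from `curl -N .../stream`.
last_seq=0
while IFS= read -r line; do
  case "$line" in
    id:*) last_seq=${line#id:}; last_seq=${last_seq# } ;;
  esac
done <<'EOF'
event: node
id: 3
data: {"nodeId":"checkout","status":"succeeded"}

event: heartbeat
id: 4
data: {}
EOF
echo "next reconnect with ?from=$last_seq"
```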

`pfai wait workflow-execution/{execId}` blocks until the execution reaches a terminal state.

Artifacts

| Method · Path | Purpose |
| --- | --- |
| `GET /api/v1/workflows/{id}/executions/{execId}/artifacts` | List artifact files |
| `GET /api/v1/workflows/{id}/executions/{execId}/artifacts/{path}` | Download a specific artifact (`path` is a wildcard; supports nested directories) |

Artifacts are files written under `/workspace` (or other declared paths) by the execution’s container; the reaper collects them on Release. They are retained for 24 hours by default (`DefaultRetentionHours` in `internal/workflows/workspace.go`), and the window is configurable per workflow via `runtime_config.retention_hours`.
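For example, to keep a workflow’s artifacts for a week, the override would ride along in a `PATCH /api/v1/workflows/{id}` body. A sketch; only `retention_hours` is documented above, the surrounding shape is an assumption:

```json
{
  "runtime_config": {
    "retention_hours": 168
  }
}
```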

Dead letter queue

Nodes that exhaust their retries write to a DLQ (`internal/workflows/dlq/`).

| Method · Path | Purpose |
| --- | --- |
| `GET /api/v1/workflows/dlq` | List failed nodes across all executions |
| `GET /api/v1/workflows/dlq/{id}` | Read a single failure with full context |
| `POST /api/v1/workflows/dlq/{id}/retry` | Re-enqueue the failed node |
| `POST /api/v1/workflows/dlq/{id}/resolve` | Mark the failure as resolved without retrying (the operator confirms it was handled out-of-band) |

Use the DLQ to triage operational issues before they propagate: e.g. a webhook target consistently returning 503, a rate-limited LLM provider, a flaky CLI tool.
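As a triage sketch: given a saved DLQ listing with one entry per line, the entries stuck on one failure mode can be pulled out for bulk retry. The field names and error strings here are illustrative assumptions, not the documented DLQ schema:

```shell
# Collect the IDs of DLQ entries whose error mentions a 503, e.g. to feed
# each into POST /api/v1/workflows/dlq/{id}/retry once the target recovers.
# Sample listing inlined; in practice save GET /api/v1/workflows/dlq output
# as one JSON object per line.
dlq='
{"id": "dlq_01", "nodeId": "notify", "error": "webhook returned 503"}
{"id": "dlq_02", "nodeId": "deploy", "error": "context deadline exceeded"}
{"id": "dlq_03", "nodeId": "notify", "error": "webhook returned 503"}'
retry_ids=$(printf '%s\n' "$dlq" | grep '503' | sed 's/.*"id": *"\([^"]*\)".*/\1/')
echo "$retry_ids"
```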

Read shape

`GET /api/v1/workflows/{id}` returns:

```json
{
  "id": "wf_a1b2c3",
  "name": "deploy-staging",
  "description": "Deploy main to staging on every merge",
  "type": "ci",
  "status": "published",
  "version": 5,
  "activeVersion": 5,
  "graph": { "nodes": [...], "edges": [...] },
  "variables": { "ENV": "staging" },
  "runtimeConfig": {
    "mode": "per_execution",
    "image": "dev-go",
    "cpu": "2",
    "memory": "1Gi"
  },
  "triggerConfig": [
    { "type": "event", "event": "pr.merged", "branches": ["main"] }
  ],
  "createdBy": "user_alice",
  "createdAt": "2026-05-01T00:00:00Z",
  "updatedAt": "2026-05-05T09:31:08Z"
}
```

`graph` follows the YAML schema from `internal/workflows/loader.go`; it is exported via `/export` and re-importable via `/import`.
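As orientation only, an export might look roughly like the following. This is a sketch inferred from the JSON read shape above; the authoritative schema is whatever `internal/workflows/loader.go` accepts:

```yaml
# Sketch: field names mirror the JSON read shape, not the loader schema.
name: deploy-staging
type: ci
variables:
  ENV: staging
graph:
  nodes:
    - id: checkout
    - id: test
    - id: deploy
  edges:
    - from: checkout
      to: test
    - from: test
      to: deploy
```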

Execution shape

`GET /api/v1/workflows/{id}/executions/{execId}` returns:

```json
{
  "id": "exec_…",
  "workflowId": "wf_a1b2c3",
  "version": 5,
  "status": "succeeded",
  "trigger": { "type": "event", "eventType": "pr.merged", "subject": "pr/42" },
  "startedAt": "...", "endedAt": "...", "durationMs": 184523,
  "nodes": [
    { "nodeId": "checkout",   "status": "succeeded", "durationMs": 1240 },
    { "nodeId": "test",       "status": "succeeded", "durationMs": 92834 },
    { "nodeId": "deploy",     "status": "succeeded", "durationMs": 90449 }
  ],
  "cost": {
    "estimatedUsd": 0.12,
    "tokens": { "prompt": 0, "completion": 0 },
    "computeSeconds": 184
  }
}
```

Status values: `pending`, `running`, `succeeded`, `failed`, `cancelled`, `waiting` (HITL).
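To sanity-check a run, the per-node durations can be summed from a saved execution detail and compared with the top-level `durationMs`; for a strictly sequential graph like the example above the two should roughly agree. A sketch using only POSIX tools, with the example's node list inlined:

```shell
# Sum per-node durationMs from a saved execution detail. Only lines
# carrying a nodeId are counted, which excludes the top-level durationMs.
# In practice, pipe in GET .../executions/{execId} output instead.
nodes='
    { "nodeId": "checkout",   "status": "succeeded", "durationMs": 1240 },
    { "nodeId": "test",       "status": "succeeded", "durationMs": 92834 },
    { "nodeId": "deploy",     "status": "succeeded", "durationMs": 90449 }'
total=0
for ms in $(printf '%s\n' "$nodes" | grep '"nodeId"' \
    | sed -n 's/.*"durationMs": *\([0-9]*\).*/\1/p'); do
  total=$((total + ms))
done
echo "node total: ${total}ms"
```

Here the node total (1240 + 92834 + 90449 = 184523) matches the top-level `durationMs` exactly; a large gap would point at scheduling or queue overhead.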

See also