# Nodes
Nodes are the building blocks of every workflow. Each node performs a specific task -- receiving data from upstream nodes, processing it, and passing results downstream.
## Node Categories
| Category | Nodes | Description |
|---|---|---|
| Triggers | 4 | Start a workflow -- every workflow begins with exactly one trigger |
| Core | 7 | General-purpose data processing, routing, HTTP, and conversion |
| DataStore | 4 | Query, insert, update, and delete records in the built-in DataStore |
| Database | 12 | Interact with external databases (MySQL, PostgreSQL, Supabase) |
| Redis | 1 | Read and write data in Redis via a configured connection |
| File Transfer | 5 | Read, write, list, rename, and delete files over SFTP |
## How Data Flows
1. A trigger fires and produces an initial data payload.
2. Each downstream node receives data from the nodes connected to its input handles.
3. The node processes the data according to its configuration and produces output.
4. Output flows to the next connected nodes until the workflow completes.
Nodes can produce single items (e.g., an HTTP response) or lists of items (e.g., database query results). When a node receives a list, it typically processes every item in that list.
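The flow described above can be sketched as a tiny in-memory runner. This is illustrative only, not the product's actual API: `Item`, `NodeFn`, and `runWorkflow` are made-up names, and a real workflow is a graph of nodes with input/output handles rather than a simple chain.

```typescript
// A minimal sketch of the data flow: a trigger produces items,
// and each node transforms the items it receives and passes them on.
type Item = Record<string, unknown>;
type NodeFn = (items: Item[]) => Item[];

// For this sketch a workflow is an ordered chain of node functions.
function runWorkflow(trigger: () => Item[], nodes: NodeFn[]): Item[] {
  let items = trigger();        // 1. the trigger fires with an initial payload
  for (const node of nodes) {   // 2-3. each node processes its input...
    items = node(items);        // ...and produces output for the next node
  }
  return items;                 // 4. the final node's output ends the run
}

// Example: a trigger emits a list of two items; a node processes every
// item in the list, as described above.
const result = runWorkflow(
  () => [{ name: "ada" }, { name: "lin" }],
  [(items) => items.map((i) => ({ name: String(i.name).toUpperCase() }))]
);
console.log(result); // [{ name: "ADA" }, { name: "LIN" }]
```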
## Execution Mode
Most nodes support two execution modes that control how data is handed to downstream nodes:
| Mode | Behavior |
|---|---|
| Gated (default) | The node collects all output items, then sends them downstream as a batch once the node finishes. |
| Piped | Items are streamed to downstream nodes as soon as they are produced, enabling parallel processing. |
> **TIP**
> Use Piped mode when you have large datasets and want downstream nodes to start processing before the current node finishes. Use Gated mode when downstream nodes need the complete dataset (e.g., sorting or aggregation).
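The difference between the two modes can be sketched with async generators. This is an analogy, not the product's implementation: `gated` collects everything before handing it on, while `piped` forwards each item as it is produced.

```typescript
// An upstream source that yields items one at a time.
async function* producer(): AsyncGenerator<number> {
  for (const n of [1, 2, 3]) yield n;
}

// Gated: collect all output items, then send them downstream as one batch
// once the source has finished.
async function gated(source: AsyncGenerator<number>): Promise<number[]> {
  const batch: number[] = [];
  for await (const item of source) batch.push(item);
  return batch; // downstream sees the complete batch only at this point
}

// Piped: stream each item downstream as soon as it is produced, so a
// consumer can start working before the producer finishes.
async function* piped(source: AsyncGenerator<number>): AsyncGenerator<number> {
  for await (const item of source) yield item * 10; // process and pass on
}
```

With `gated`, a downstream sort or aggregation sees `[1, 2, 3]` all at once; with `piped`, a downstream consumer receives `10`, `20`, `30` one at a time.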
## Common Node Settings
In addition to its node-specific configuration, every node exposes a few standard settings:
- Fallback output -- An optional output handle that receives items when the node's primary logic does not match (used by the Switch node, for example).
- Failed output -- Routes items that cause errors to a separate branch instead of failing the entire workflow.
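The Failed output behavior can be sketched as follows. The names (`runWithFailedOutput` and its shape) are illustrative, not the product's API: the point is that an item whose processing throws is routed to a separate branch instead of aborting the whole run.

```typescript
type Item = Record<string, unknown>;

// Process each item; successes go to the primary output, items that
// cause errors go to the failed branch instead of failing the workflow.
function runWithFailedOutput(
  items: Item[],
  process: (item: Item) => Item
): { output: Item[]; failed: Item[] } {
  const output: Item[] = [];
  const failed: Item[] = [];
  for (const item of items) {
    try {
      output.push(process(item)); // primary output handle
    } catch {
      failed.push(item);          // the Failed output branch
    }
  }
  return { output, failed };
}

// Example: parsing a numeric field; the non-numeric item is routed to
// the failed branch rather than stopping the run.
const { output, failed } = runWithFailedOutput(
  [{ qty: "3" }, { qty: "oops" }],
  (item) => {
    const n = Number(item.qty);
    if (Number.isNaN(n)) throw new Error("not a number");
    return { qty: n };
  }
);
console.log(output); // [{ qty: 3 }]
console.log(failed); // [{ qty: "oops" }]
```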
## Connection Requirements
Integration nodes (Database, Redis, File Transfer) require a connection to be configured before use. Connections store encrypted credentials for external services and can be reused across multiple nodes and workflows.
See the Connections section for details on setting up each connection type.