Workflow Import
Design and behavior of the workflow import feature in Eddy
Workflow Import Functionality
Source: This page is based on docs/WORKFLOW_IMPORT.md
Last Updated: January 8, 2026
This document outlines the design and behavior of the workflow import feature in Eddy. This feature allows users to create a new workflow from a structured JSON file, typically one generated by the Workflow Export feature. It is a key component for migrating workflows, sharing templates, and programmatic workflow creation.
Overview
The import process takes a valid workflow export JSON file and reconstructs the entire workflow and its related components within a specified workspace. The process is designed for robustness and data integrity:
- New Identity: All imported entities (workflow, pages, blocks, etc.) are assigned new, unique IDs
- Relationship Mapping: All internal foreign key relationships are re-mapped to use the new IDs
- Transactional Integrity: The entire import operation is wrapped in a single database transaction. If any step fails, the transaction is rolled back, and no changes are made to the database
The core logic is handled by the `importWorkflow` service, which is exposed via a secure API endpoint.
API Endpoint
Route: `POST /api/workflows/import`
Authorization: The requesting user must be a member of the target `group_id` (workspace). A future enhancement may introduce more granular, role-based checks (e.g., only 'admin' or 'creator' roles can import).
Request Body:

```json
{
  "group_id": "string (uuid)",
  "workflow_data": "object (a valid Eddy Workflow Export JSON object)"
}
```

Response:
- On success, it returns a `201 Created` status with the JSON object of the newly created workflow
- On failure (e.g., invalid data, permissions error), it returns an appropriate `4xx` or `5xx` error code
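For illustration, a client call might look like the following sketch (auth headers omitted; the filename and workspace UUID are placeholders, not real values):

```typescript
import { readFile } from "node:fs/promises"

// Read a previously exported workflow from disk (filename is illustrative)
const exportJson = JSON.parse(await readFile("workflow-export.json", "utf8"))

const response = await fetch("/api/workflows/import", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    group_id: "3f0a2b7e-9f9d-4c1a-8a61-2d5f0e9b4c10", // target workspace (placeholder)
    workflow_data: exportJson,
  }),
})

if (response.status === 201) {
  const newWorkflow = await response.json() // the newly created workflow
}
```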
Imported Data Structure
The importer expects a JSON object that strictly adheres to the `EddyWorkflowExportT` structure, which is validated using the `EddyWorkflowExportZ` Zod schema. This is the same structure produced by the export functionality.
```typescript
// High-level structure from app/types/export.ts
type EddyWorkflowExportT = {
  version: number // Currently 1
  exportedAt: Date
  sourceWorkflowId: string

  // Core Workflow Components
  workflow: WorkflowDatabaseT
  pages: PageDatabaseT[]
  sections: SectionDatabaseT[]
  blocks: BlockDatabaseT[]
  blockOptions: BlockOptionT[]
  pageTransitions: PageTransitionT[]
  workflowRoles: WorkflowRoleT[]
  stageRoleAssignments: StageRoleAssignmentBaseDatabaseT[]

  // Optional Components
  sheets?: SheetT[]
  columns?: ColumnT[]
}
```

Backward Compatibility
To maintain backward compatibility, the importer's validation schema gracefully handles export files that predate the `options` field. If `options` is missing from the import data, the parser automatically adds a default empty object (`{}`) during the validation and transformation step, so the import service always operates on a consistent, modern data structure.
This is handled by the `WorkflowPortableZ` schema in `app/types/workflow.ts`, which uses Zod's `.transform()` method to normalize legacy import files.
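As a rough sketch of that normalization (the field names and schema below are simplified stand-ins, not the actual `WorkflowPortableZ` definition):

```typescript
import { z } from "zod"

// Simplified stand-in for the legacy-file normalization: `options` may be
// absent in old exports, so the transform fills in a default empty object.
const BlockPortableZ = z
  .object({
    id: z.string(),
    content: z.string(),
    options: z.record(z.unknown()).optional(),
  })
  .transform((block) => ({
    ...block,
    options: block.options ?? {}, // normalize legacy files
  }))

// A legacy block with no `options` field parses to `{ ..., options: {} }`
const legacy = BlockPortableZ.parse({ id: "block-1", content: "Hello" })
console.log(legacy.options) // {}
```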
Core Logic & Data Integrity
The `importWorkflow` service (`app/services/workflows/import.ts`) executes a series of steps to ensure a safe and accurate import.
1. Transactional Operation
The entire process is wrapped in a `knex.transaction`. This guarantees that the import is an "all-or-nothing" atomic operation. If an error occurs at any stage, all previously created database records within the transaction are rolled back. This is verified in the integration tests.
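In knex terms, the shape is roughly the following (the function body and table names here are illustrative, not the service's actual internals):

```typescript
import type { Knex } from "knex"

async function importWorkflow(db: Knex, groupId: string, data: unknown) {
  return db.transaction(async (trx) => {
    // Every insert goes through `trx`; if any step throws, knex rolls
    // back all writes made inside this callback.
    const [workflow] = await trx("workflows")
      .insert({ group_id: groupId /* ...remapped workflow fields */ })
      .returning("*")

    // ...pages, sections, blocks, etc. are inserted here, in order...

    return workflow // commit happens when the callback resolves
  })
}
```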
2. Schema Validation
The first action is to parse the incoming `workflow_data` with the `EddyWorkflowExportZ` schema. This provides a critical layer of defense, ensuring the data is structurally sound, contains all required fields, and adheres to integrity rules (e.g., it contains no archived entities) before any database writes are attempted.
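A sketch of this step, assuming the schema import path documented above (`requestBody` is a stand-in for the parsed request):

```typescript
import { EddyWorkflowExportZ } from "app/types/export"

declare const requestBody: { group_id: string; workflow_data: unknown }

const parsed = EddyWorkflowExportZ.safeParse(requestBody.workflow_data)
if (!parsed.success) {
  // Fail fast: no database writes are attempted on invalid input
  throw new Error(`Invalid workflow export: ${parsed.error.message}`)
}
const workflowData = parsed.data // typed as EddyWorkflowExportT
```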
3. Sheet & Column Import (Conditional)
The service's primary goal here is to ensure the internal consistency of the import data. It follows one of two paths:
Path A: Sheets are Included
- If the `sheets` array is present and non-empty, the `importSheetsAndColumns` helper is invoked
- This helper creates new `sheet` and `column` records, assigning ownership to the importing user and workspace
- It generates `sheetIdMap` and `columnIdMap` to map original IDs to new database IDs
- Integrity Check: The helper validates that every `block` in the import data that references a `sheet_id` or `column_id` points to a sheet or column that is also part of the import. If a reference points to an ID that doesn't exist in the generated maps, the transaction fails
Path B: Sheets are NOT Included
- If the `sheets` array is not present or is empty, a different validation is performed to prevent orphaned references
- Integrity Check: The service explicitly validates that no `blocks` in the data contain `sheet_id` or `column_id` references. The `copySections` helper performs the same validation for sections: the transaction will fail if a `section` contains a `sheet_id`, as no corresponding sheet is being imported. Both paths' checks are sketched below
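A minimal sketch of both integrity checks (the function names and block shape are illustrative; the real logic lives in `importSheetsAndColumns` and `copySections`):

```typescript
type BlockRef = { id: string; sheet_id?: string | null; column_id?: string | null }

// Path A: every referenced sheet/column must be part of the import
function assertSheetReferencesResolvable(
  blocks: BlockRef[],
  sheetIdMap: Map<string, string>,
  columnIdMap: Map<string, string>,
): void {
  for (const block of blocks) {
    if (block.sheet_id && !sheetIdMap.has(block.sheet_id)) {
      throw new Error(`Block ${block.id} references missing sheet ${block.sheet_id}`)
    }
    if (block.column_id && !columnIdMap.has(block.column_id)) {
      throw new Error(`Block ${block.id} references missing column ${block.column_id}`)
    }
  }
}

// Path B: with no sheets in the import, any sheet/column reference is orphaned
function assertNoSheetReferences(blocks: BlockRef[]): void {
  for (const block of blocks) {
    if (block.sheet_id || block.column_id) {
      throw new Error(`Block ${block.id} references a sheet/column, but none were imported`)
    }
  }
}
```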
4. Entity Creation and ID Remapping
The service reuses robust helper functions from the workflow copy feature to create the new entities:
- A new `workflow` record is created
- `pages`, `sections`, `blocks`, `blockOptions`, `workflowRoles`, and other related entities are created in sequence
- As each type of entity is created, a map is generated (e.g., `pageIdMap`) that links the old ID to the new one
- These maps are passed to subsequent functions to correctly set foreign keys. For instance, when creating `sections`, the `pageIdMap` is used to ensure each section's `page_id` points to the correct, newly created page (a sketch of this pattern follows)
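The pattern, in a simplified form (the helper signature and table names are illustrative):

```typescript
import type { Knex } from "knex"
import { randomUUID } from "node:crypto"

async function copyPages(
  trx: Knex.Transaction,
  pages: { id: string; title: string }[],
  newWorkflowId: string,
): Promise<Map<string, string>> {
  const pageIdMap = new Map<string, string>()
  for (const page of pages) {
    const newId = randomUUID()
    pageIdMap.set(page.id, newId) // remember old ID → new ID
    await trx("pages").insert({ id: newId, title: page.title, workflow_id: newWorkflowId })
  }
  return pageIdMap
}

// Later, sections resolve their foreign keys through the map:
//   const newPageId = pageIdMap.get(section.page_id)
```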
5. Rule Remapping
For entities that contain complex rule structures in JSON fields (like `page_transitions` and `sections`), a final remapping step is performed:
- Functions like `updatePageTransitionRules` traverse these JSON structures
- They use the generated ID maps (`columnIdMap`, `blockOptionIdMap`, etc.) to find and replace all old ID references with the new IDs, ensuring that all conditional logic within the workflow remains intact and functional (sketched below)
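A hedged sketch of that traversal (the rule shape below is invented for illustration; the actual rule schema differs):

```typescript
type RuleNode = {
  blockOptionId?: string
  columnId?: string
  children?: RuleNode[]
}

function remapRule(
  rule: RuleNode,
  blockOptionIdMap: Map<string, string>,
  columnIdMap: Map<string, string>,
): RuleNode {
  return {
    ...rule,
    // Swap each old ID for its newly generated counterpart
    blockOptionId: rule.blockOptionId ? blockOptionIdMap.get(rule.blockOptionId) : undefined,
    columnId: rule.columnId ? columnIdMap.get(rule.columnId) : undefined,
    // Recurse into nested conditions
    children: rule.children?.map((child) => remapRule(child, blockOptionIdMap, columnIdMap)),
  }
}
```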
6. Ownership Assignment
All newly created entities are correctly associated with the target `groupId` (workspace) and the user performing the import.
Security Considerations
Cross-Site Scripting (XSS): The import file contains numerous user-generated string fields (e.g., `block.content`, `page.title`, `section.name`). To prevent potential XSS attacks from a malicious import file, this content should be sanitized before being stored in the database or rendered on the client. The service currently contains a `TODO` comment to implement this sanitization.
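One possible shape for that pending work (sanitize-html is an illustrative choice, not a confirmed project dependency):

```typescript
import sanitizeHtml from "sanitize-html"

// Strip dangerous markup from user-generated strings before they are
// stored or rendered. Field names mirror the examples above.
function sanitizeImportedStrings<T extends { content?: string; title?: string }>(entity: T): T {
  return {
    ...entity,
    content: entity.content ? sanitizeHtml(entity.content) : entity.content,
    title: entity.title ? sanitizeHtml(entity.title) : entity.title,
  }
}
```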
Edge Cases and Behavior
The integration tests (`app/__tests__/integration/workflowImport.test.ts`) validate the importer's behavior in various scenarios:
- Data Consistency: The importer preserves all relevant data from the export file, including workflow descriptions, singleton status, and the order of entities like blocks and pages
- Empty Workflows: A workflow containing no pages or other components can be imported successfully
- Referential Integrity: The system is designed to fail loudly if the import data is internally inconsistent. For example, an import will be rejected if a block references a `column_id` but the corresponding column is missing from the `columns` array
- Transaction Rollback: Tests confirm that if an error occurs mid-import (e.g., due to a data integrity violation), no partial data is left in the database
- Idempotency: While not strictly idempotent (running the same import twice will create two identical workflows), the process is predictable and isolated. Each import creates a completely new set of entities with no side effects on existing workflows
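As a rough illustration of the rollback assertion (assuming a vitest/jest-style harness; the utilities below are hypothetical stand-ins for those in the actual test file):

```typescript
import { expect, it } from "vitest"

declare const db: any // knex instance used by the tests
declare const groupId: string
declare function buildExportFixture(overrides: object): object
declare function importWorkflow(db: any, groupId: string, data: object): Promise<unknown>

it("rolls back all writes when the import data is inconsistent", async () => {
  // A block referencing a column_id that is missing from `columns`
  const badExport = buildExportFixture({
    blocks: [{ id: "block-1", column_id: "missing-column" }],
  })

  await expect(importWorkflow(db, groupId, badExport)).rejects.toThrow()

  // The failed transaction leaves no partial rows behind
  expect(await db("workflows").where({ group_id: groupId })).toHaveLength(0)
})
```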
Use Cases
1. Workflow Migration
Import workflows from one environment to another (development → staging → production).
2. Workflow Templates
Import pre-built workflow templates to quickly set up common processes.
3. Backup Restoration
Restore workflows from backup files in case of data loss or corruption.
4. Workflow Sharing
Share workflows between teams or organizations by exporting and importing JSON files.